Conversation between the AltLayer, Scroll, and Starknet teams: Shared Sequencers and L2 Consensus

The AltLayer, Scroll, and Starknet teams discuss shared sequencers and L2 consensus.

Introduction

When we look at the visions and roadmaps of various rollup solutions, we find that almost all rollups have a common goal, which can be condensed into one sentence: to build a technology stack, provide it to the community, solve the problem of blockchain scalability, and ultimately achieve decentralization in operating and governing the technology stack. This leads to the term “decentralized rollup”.

So what exactly is being decentralized? What is the division of labor among the various parts of the rollup system? Does decentralization mean maximizing the number of participants in the system’s operation? What impact does a centralized sequencer have? How should a shared sequencer and L2 local consensus be designed? What functions are unique to the provers in ZK-rollups? What does an open and decentralized prover network look like? Why do we need zk hardware acceleration? What are some solutions to data availability issues?…

Discussions about decentralized Rollup abound in the community, so ECN has curated a podcast interview series on the topic of “decentralized Rollup”, inviting outstanding founders and researchers in this field to talk about their understanding of decentralized Rollup.

As more and more liquidity flows into Layer2 platforms and more rollup service providers emerge, there are not only general-purpose rollup solutions and application-specific rollup chains, but also rollup-as-a-service platforms. As a result, more and more people are noticing that the key role in almost all rollups, the sequencer, is still centralized. What are the risks of a centralized sequencer? Is decentralizing the sequencer an urgent task?

In the second episode, we invited Yaoqi Jia, founder of AltLayer Network, Toghrul Maharramov, researcher at Scroll, and Abdelhamid Bakhta, Starknet Exploration Lead, to a roundtable discussion on decentralized sequencers, giving listeners and readers an understanding of the progress and challenges in this area.

Guests in this episode:

  • Founder of AltLayer Network, Yaoqi Jia, twitter @jiayaoqi

  • Scroll researcher Toghrul Maharramov, twitter @toghrulmaharram

  • Starknet Exploration Lead Abdelhamid Bakhta, twitter @dimahledba

Past

Episode 1: How to Decentralize Rollup?

  • Arbitrum researcher Patrick McCorry

Upcoming

Episode 3: Prover Networks and zk Hardware Acceleration

  • Scroll co-founder Ye Zhang

  • Cysic co-founder Leo Fan

Episode 4: Data Availability and Decentralized Storage

  • EthStorage founder Qi Zhou

Listen

Click to subscribe to the Podcast to learn more:

https://ecnpodcast.fireside.fm/decentralized-rollups-series-2

Youtube:

Xiaoyuzhou:

https://www.xiaoyuzhoufm.com/episode/647ae7c6f81d8df088519ec5

Timestamp

– 00:49 Yaoqi self-introduction

– 01:37 Abdelhamid self-introduction

– 02:50 Toghrul self-introduction

– 04:03 The role of the sequencer in a rollup

– 08:37 Decentralized sequencers: improving user experience and solving liveness and censorship issues

– 19:43 How Starknet will decentralize its sequencer

– 22:59 How Scroll will decentralize its sequencer

– 26:34 The difference between L2 consensus in the optimistic rollup and zkRollup settings

– 32:28 A decentralized zkRollup sequencer must also take the prover into account

– 36:01 What is a based rollup?

– 40:53 The disadvantages of shared sequencers and based rollups, and their application scenarios

– 49:02 What impact will a decentralized sequencer have on MEV?

Guest Introduction

Yaoqi

I am the founder of AltLayer. AltLayer is building a “Rollup as a Service” platform, where developers only need to click a few buttons and configure some parameters. With our launchpad, or control panel, they can release application-specific rollups in minutes. So this is what we are currently working on: providing developers with a common execution environment and functionality. We also offer multiple sequencers, multiple virtual machine systems, and various proof systems for developers to choose from.

Abdelhamid

I work at StarkWare and I head up the exploration team. The goal of this team is basically to launch open-source projects, which are research-oriented but with a focus on engineering. Our goal is to work closely with the community and people from other ecosystems to collaborate on open-source projects. One of these projects is Madara, which is a Starknet sequencer. It is a candidate for the Starknet public network, as well as for Starknet app-chains and Layer3 sequencers. This relates to what the previous guest said: we are also considering providing rollup as a service, where people can roll out their own Starknet app-chains and choose different data availability solutions in a modular way. Prior to this, I worked as an Ethereum core developer for four years, mainly on EIP-1559.

Toghrul

I am a researcher at Scroll. My main responsibilities at Scroll are protocol design, bridge design, decentralization, incentives, and things like that. So, if I’m not tweeting about irrelevant stuff, most of the time I’m researching how to make protocols, sequencers, and provers more decentralized, things like that. Like StarkWare, one of the things we are researching is a rollup SDK. So you could release a rollup based on Scroll and use different data availability options in a modular way. We are also considering an option where a rollup released with the Scroll SDK can use Scroll’s sequencer to achieve decentralization, without having to solve decentralization separately for each rollup. Of course, there is no final design at the moment, but this is a direction we are researching.

Interview section

Topic 1

What is a rollup sequencer?

Abdelhamid

A sequencer is a very important component in a Layer2 architecture. It receives transactions from users, packages and bundles them into blocks, executes the transactions, and so on. It’s critical because it’s responsible for creating blocks, since a Layer2 is also a blockchain with blocks of transactions. A sequencer creates these blocks, and then a verifier verifies them.

Yaoqi

As Abdel mentioned, a sequencer combines many functions of a blockchain. Compared to a typical public blockchain, we may actually be giving the sequencer too much responsibility now. It first needs to aggregate all transactions from users and then order them, either by gas price or first-come, first-served. Then, the sequencer needs to execute these transactions. Some Layer2s use the EVM (StarkWare has a different virtual machine), but basically the sequencer needs a dedicated virtual machine to execute transactions and produce new state. The transaction then reaches a pre-confirmed stage: if you see a confirmation time of one or two seconds, or even sub-second, that is basically a pre-confirmation by the sequencer. Then, most sequencers also need to upload or publish checkpoints or state hashes to L1. This is confirmation at the L1 level, which we call data availability. So sequencers actually play many roles in the rollup system; overall, I think the sequencer is its most critical component.
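To make those duties concrete, here is a minimal, illustrative Python sketch of the pipeline Yaoqi describes: aggregate transactions, order them (by gas price or first-come, first-served), execute them against local state, hand back a pre-confirmation, and publish a state hash to L1. All names here (`ToySequencer`, `post_to_l1`, the toy state) are hypothetical stand-ins, not any team's actual code.

```python
import hashlib
import time
from dataclasses import dataclass, field

@dataclass
class Tx:
    sender: str
    payload: str
    gas_price: int
    received_at: float = field(default_factory=time.time)

class ToySequencer:
    def __init__(self, order_by: str = "gas_price"):
        self.order_by = order_by
        self.mempool: list[Tx] = []
        self.state: dict[str, int] = {}  # toy state: per-sender tx counter

    def receive(self, tx: Tx) -> None:
        self.mempool.append(tx)  # 1. aggregate transactions from users

    def build_block(self) -> dict:
        # 2. order: fee priority or first-come, first-served
        if self.order_by == "gas_price":
            txs = sorted(self.mempool, key=lambda t: -t.gas_price)
        else:
            txs = sorted(self.mempool, key=lambda t: t.received_at)
        self.mempool.clear()
        # 3. execute transactions and update state (here: count txs per sender)
        for tx in txs:
            self.state[tx.sender] = self.state.get(tx.sender, 0) + 1
        state_hash = hashlib.sha256(
            repr(sorted(self.state.items())).encode()).hexdigest()
        # 4. returning the block is the "pre-confirmation" users see in seconds
        return {"txs": txs, "state_hash": state_hash}

    def post_to_l1(self, block: dict) -> str:
        # 5. publish the checkpoint / state hash to L1 (stubbed out here)
        return block["state_hash"]
```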

Topic 2

Why is it important to have a decentralized sequencer? What are the risks to users and the system if we use a centralized sequencer?

Toghrul

First of all, we need to know that currently, except for Fuel V1, there is no real rollup, because all the other rollups still have training wheels.

However, once a system qualifies as a rollup, that is, once the multisig is removed, the problem of decentralizing the sequencer becomes a user experience problem, not a security problem. When people talk about decentralizing an L1, the problem is completely different, because an L1 must provide guarantees for ordering and for light clients. If a light client wants to verify that the state is correct, it must trust the L1 validators. For rollups, this is not the case: the sequencer only provides a temporary ordering, which is then finalized by L1. Moreover, because rollups also guarantee censorship resistance, we do not need decentralization to achieve that.

So there are several reasons why you need a decentralized sequencer. First, if L1 finality is slow (either because validity proofs are submitted too slowly, or because of an optimistic rollup’s fraud-proof challenge period), you must rely on something for fast transaction confirmation. During this fast confirmation phase, you trust that StarkWare or Scroll will not cheat you when they promise there will be no reorganization after a block is confirmed. That is a trust assumption. Decentralization can add some economic guarantees on top of it.

On top of this, rollups do not have real-time inclusion guarantees. You can force a transaction in through L1, but it takes several hours to be included. So, for example, if an oracle is responsible for updates and the timing fluctuates, and the oracle can only update after an hour or more, applications on the rollup become unusable. Essentially, decentralization lets us provide stronger real-time censorship-resistance guarantees, because an attacker then needs to compromise not just one entity or a few entities, but the entire sequencer network.
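A back-of-the-envelope sketch of this timing argument, assuming an illustrative force-inclusion delay (real delays differ per rollup):

```python
FORCE_INCLUSION_DELAY = 3 * 60 * 60  # assumed ~3h L1 force-inclusion window; illustrative only

def earliest_inclusion(now: float, queued_at: float,
                       sequencer_censoring: bool) -> float:
    """Earliest time a user's transaction can land on the rollup."""
    if not sequencer_censoring:
        return now  # happy path: next L2 block, typically seconds
    # censored: fall back to the L1 force-inclusion queue
    return queued_at + FORCE_INCLUSION_DELAY

# An oracle that must update every few minutes becomes unusable if its only
# path in is the multi-hour L1 queue -- exactly the real-time
# censorship-resistance gap a decentralized sequencer set is meant to close.
```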

So for us, decentralization is more a way to improve user experience and fix extreme cases such as delayed oracle updates, rather than to provide basic security guarantees; that is L1’s job.

Abdelhamid

Yes, regarding your point that a decentralized sequencer and a decentralized L1 are not exactly the same issue, I think that’s very important. When we see criticism of centralized L2 sequencers, the critics often don’t properly weigh the trade-offs that centralized sequencers make.

On that basis, I would like to add something related to user experience and liveness. When you have only one sequencer, you face a greater risk of sequencer downtime. Decentralized sequencers therefore also increase the network’s resilience and liveness. However, even in a centralized setting we have good safety measures, because you can force-include transactions through L1; the only difference is the timeline. And a decentralized sequencer gives you the fast censorship-resistance guarantee Toghrul mentioned. So I just want to add that a decentralized sequencer network is important for liveness as well.

Yaoqi

I want to add something. Liveness may be the most important thing we need to consider at this stage. There have been airdrops on the most popular L2s, such as Optimism and Arbitrum, where the network went down for a period of time. So what we need to solve is how to handle thousands of transaction requests per second when there is only one sequencer. Even in theory, a single node cannot really handle that many requests at the same time. We therefore definitely need multiple sequencers for liveness. A single point of failure is a real obstacle, not only in Web3 but also in Web2.

In addition, there is the problem of censorship. If we have only one sequencer, even if it is run by a reputable team, you still need to prove that the team won’t censor transactions; a bad actor could blacklist certain transactions. In a decentralized sequencer system, users can try to send their transactions through other sequencers instead. That is why there has been so much criticism of single sequencers lately.

There are also other issues, such as MEV and front-running. In a centralized sequencer system, the operator can easily inspect the mempool, which especially affects DeFi protocols. They may not front-run outright, but they have a better chance of chasing transactions and arbitraging.

For various reasons, many of these issues play out very differently on L2 than on L1, even though they resemble the issues we face on public blockchains.

Abdelhamid

That’s right, I agree that a decentralized sequencer is important. But I also want to say that, as we all know, this is not a simple problem.

Also, a rollup has a very particular architecture with multiple entities, so there are trade-offs, and there are real difficulties in pricing transactions, because different resources are required to run the network. How do you price transactions when the hardware requirements for sequencers and provers are different, a prover requires a very powerful machine, and so on? Pricing transactions in a decentralized setting is a very hard problem, which is why we need time to move forward gradually.

So we will all face these trade-offs, and if we want to decentralize quickly, we may need to keep some training wheels and decentralize gradually, because aiming directly for a perfect architecture would take years. I think we will take a pragmatic approach and decentralize step by step. At least that’s our current plan: start with a simple BFT consensus mechanism, and then add something like another consensus mechanism in the short term. So I just want to say, this is not a simple problem, because there is a trade-off between development speed and how well the architecture suits a decentralized environment.

Topic 3

How can the sequencer be decentralized?

Abdelhamid

There are many functions we want to decentralize, and each of them comes with different trade-offs.

For example, when decentralizing, you want to introduce some kind of consensus mechanism, and a consensus mechanism has multiple parts. The first is leader election: how to choose who creates the block, i.e., which sequencer is responsible for creating the block in a given slot or time window. What the Starknet team plans to do is to leverage layer1 as much as possible. That is, in our leader election algorithm, we want staking to happen on layer1. For example, we have a token, and the stake is held in a smart contract on layer1 Ethereum, which is used to drive the leader election mechanism. This means we need some interaction: the L2 sequencer will query the Ethereum smart contract to find out who the next leader is, and so on. Obviously, some randomness and other ingredients are needed, so this is not a simple problem. But that’s the first part. Then you need a mechanism for the consensus itself. There are multiple options: a longest-chain mechanism, BFT, or a mixture of the two. Ethereum, for example, has LMD GHOST plus Casper FFG for finality.
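A hedged sketch of the leader election Abdelhamid outlines, with stake held on L1 and a randomness beacon deciding the slot leader. The `stakes` table and `randao` value stand in for real L1 contract reads; this is not Starknet's actual algorithm.

```python
import hashlib

def elect_leader(stakes: dict[str, int], slot: int, randao: bytes) -> str:
    """Stake-weighted leader election for one slot.

    `stakes` mimics what the L2 would read from the L1 staking contract;
    `randao` mimics an L1 randomness beacon output.
    """
    total = sum(stakes.values())
    seed = hashlib.sha256(randao + slot.to_bytes(8, "big")).digest()
    target = int.from_bytes(seed, "big") % total
    acc = 0
    for sequencer, stake in sorted(stakes.items()):
        acc += stake
        if target < acc:
            return sequencer  # probability of winning is stake / total
    raise AssertionError("unreachable with a non-empty stake table")

# Three L2 sequencers whose stake lives in an L1 contract:
print(elect_leader({"A": 100, "B": 50, "C": 50}, slot=42, randao=b"\x01" * 32))
```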

Therefore, we may adopt a pragmatic approach and start with BFT. Why? When considering decentralization on layer2, our goal is not a validator set as large as a layer1’s. Ethereum, for example, aims to have millions of validators participating. At that scale you cannot use only a BFT mechanism, because it would be very inefficient and cause very large problems: if there were a problem on the network and it used only BFT, the chain would halt completely. That is not what happens on Ethereum, which continues to create blocks while only the finality mechanism is affected.

But on layer2, if the goal is several hundred to a thousand validators, starting with a BFT mechanism may be a good choice. So we have a leader election mechanism, then consensus, and then two other parts, which I’ll leave for the other guests to expand on: state updates and proof generation.

Toghrul

First of all, decentralization on L2 is a multifaceted issue, as Abdel described. Especially in a zkRollup, because there are provers and sequencers, you have to coordinate between them and decentralize both. This problem is completely different from L1.

Another difference is that on L2, your entire design aims to convince the underlying bridge of the correctness of the consensus, not to convince some number of other nodes. You should obviously do the latter too, but your main focus should be the bridge itself.

Currently, we are studying two different directions. First, like everyone else, we are studying BFT protocols. This is not very efficient, and there are problems that still need to be solved. We have a rough solution, but it is not yet optimal. For example, one problem is how to balance incentives between sequencers and provers. Because sequencers control MEV and provers never touch MEV, the incentives push people to run sequencers rather than provers. But we actually need more provers than sequencers, so how do you balance the incentives between the two? That is one of the problems.

The second direction we are studying is different. Remember, things may change; new proposals come out every day. For example, there has been a lot of discussion recently about based rollups and how you can outsource sequencing entirely to L1. The second direction is basically to give up the sequencer completely and use external builders, for example Ethereum builders or Flashbots’ SUAVE, to propose blocks for ordering, and then run consensus internally among the provers. The advantage is that you don’t have to design the incentive mechanism, because you can basically use PBS within the protocol, which makes for a simpler protocol. The disadvantage is that since we need a lot of provers (because proving can be parallelized), it is quite difficult to run a classic BFT protocol among them. So the problem becomes: how do you optimize an existing BFT protocol to run with hundreds or even thousands of provers? And that is not an easy question to answer.

Is introducing L2 consensus necessary for a decentralized sequencer?

Yaoqi

I can answer this question roughly, because we just launched something similar. Whether to introduce consensus does not really depend on whether we want to: once you have many sequencers, or even just many nodes, you have to make them reach agreement. It depends on your assumptions. Under a Byzantine assumption, we can use BFT or any existing Byzantine consensus protocol. In a non-Byzantine setting, for example if we only assume that nodes can go online and offline but never behave maliciously, we can use the Raft protocol or some other faster consensus protocol. But in any case, if we have a set of sequencers or validators and want to organize them to produce blocks over time, you must have a consensus protocol around them.
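The fault-model distinction Yaoqi draws has a concrete cost: crash-fault consensus such as Raft tolerates f failures out of 2f + 1 nodes, while Byzantine fault tolerance needs 3f + 1. A tiny sketch of the arithmetic:

```python
def min_nodes(f: int, byzantine: bool) -> int:
    """Smallest sequencer set that tolerates f faulty nodes."""
    return 3 * f + 1 if byzantine else 2 * f + 1

for f in (1, 2, 3):
    print(f"f={f}: crash-only (Raft-style) needs {min_nodes(f, False)} nodes, "
          f"Byzantine (BFT) needs {min_nodes(f, True)}")
```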

So, as Toghrul and Abdel just mentioned, there are many proposals and research topics around how to implement a decentralized sequencing or proving system. Because we just launched a testnet for a multi-sequencer rollup system (currently only supporting fraud proofs for optimistic rollups), I can share some of the difficulties based on our design and implementation experience. As Toghrul mentioned earlier, the difficulty is not in the consensus protocol itself; the real difficulty lies outside of it, such as in the proving part. With a single sequencer, you don’t need many nodes: we can treat it as an EVM, a virtual machine. It just takes the transactions, executes them, and does the state transition, and the prover generates proofs for the state transitions of the previous batch of transactions. However, if we actually run a consensus protocol among the sequencers and provers on the rollup, we need to introduce extra consensus logic there, and on top of that, the proof system also has to cover the consensus protocol itself. That introduces a lot of extra proving work. Once the proof is generated, it can be verified easily on L1 Ethereum.

This is why, in a sense, optimistic rollups had an advantage when we launched the first multi-sequencer testnet: roughly speaking, you can simplify a lot of things, such as setting aside the validity-proof part. In our case, we basically compile everything to WASM, so in the end everything is WASM instructions, and verifying those WASM instructions with Solidity code is relatively easy if we just re-implement the WASM instruction interpreter on Ethereum.

But overall, this is not a single problem. Once we solve one problem, there is follow-up work that has to be done accordingly. Of course, there are MEV problems, such as how to distribute MEV fairly: you could allocate it to all sequencers and verifiers according to whether they produced a block or verified a block. Ultimately, this is a combination of many problems, not just technical ones but also economic incentive problems.

And we need to remember that L2 was proposed because we want to greatly reduce gas costs, so we cannot have that many nodes. Even proof generation alone could make L2 more expensive than L1. So we really need a balanced way to solve this problem.

Abdelhamid

I want to add a point. First of all, optimistic rollups currently do not have actually permissionless fraud proofs, and every time I have the opportunity, I will keep emphasizing this, because comparisons should be honest. So they are not L2s at all. That’s the first thing.

Then I want to add something about the asynchrony between sequencing and proving, because it is very important. As you said, we want to optimize sequencing because it is currently the bottleneck for all solutions. With centralized sequencing there is no problem, because we know the sequencer will only produce valid state transitions, and we will be able to prove them. But it becomes harder with decentralized sequencing, because your sequencer might produce something that cannot be proven, and then you have to deal with that afterwards.

With centralized sequencing, in order to optimize it, and because we do not need to generate proofs during sequencing itself, we can try to execute at native speed, which is what we want to do: compile Cairo down to a low-level machine representation, for example via LLVM, and run it extremely fast on the sequencer. Then you can prove asynchronously. And the coolest thing about proving is that it can be done in parallel: massive parallelism can be achieved through proof recursion. That is how we will be able to keep up with the sequencer’s speed. But it is hard in a decentralized setting, because we need to ensure that the sequencer only produces valid state transitions.
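An illustrative sketch of the recursion Abdelhamid describes: leaf proofs are generated in parallel, then folded pairwise, so end-to-end proving latency grows roughly with log2(n) merge rounds rather than n sequential proofs. `prove` and `merge` are hypothetical stand-ins for a real proving backend.

```python
from concurrent.futures import ThreadPoolExecutor

def prove(batch: str) -> str:
    return f"proof({batch})"  # stand-in for an expensive STARK/SNARK proof

def merge(p1: str, p2: str) -> str:
    return f"merge({p1},{p2})"  # recursive proof attesting to two sub-proofs

def prove_all(batches: list[str]) -> str:
    with ThreadPoolExecutor() as pool:
        layer = list(pool.map(prove, batches))  # all leaves in parallel
    while len(layer) > 1:  # log2(n) rounds of pairwise folding
        merged = [merge(a, b) for a, b in zip(layer[0::2], layer[1::2])]
        if len(layer) % 2:
            merged.append(layer[-1])  # odd proof carries over to next round
        layer = merged
    return layer[0]  # single proof covering every batch

print(prove_all([f"batch{i}" for i in range(5)]))
```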

Toghrul

Let me add something; I’m not sure how Starknet handles this. For us, and I think this is a general assumption for every zkRollup, if the sequencer is decentralized, your proof system must be able to handle invalid batches. So, for example, if someone submits a batch with an invalid signature, you must be able to prove that the resulting state is equal to the initial state. There will be some overhead either way; the question is how to minimize the probability of this happening.
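A minimal sketch of the rule Toghrul states, under the assumption that the circuit's state transition must be total over all inputs: an invalid batch (here, a bad signature) provably maps the state to itself. `verify_sig` is a hypothetical placeholder check.

```python
def verify_sig(tx: dict) -> bool:
    """Hypothetical stand-in for a real signature check."""
    return tx.get("sig") == "valid"

def apply_batch(state: dict, batch: list[dict]) -> dict:
    """State transition the proof attests to; defined for *any* input batch."""
    new_state = dict(state)
    for tx in batch:
        if not verify_sig(tx):
            # invalid batch: the provable result is the untouched initial state
            return dict(state)
        new_state[tx["to"]] = new_state.get(tx["to"], 0) + tx["value"]
    return new_state

good = [{"to": "alice", "value": 5, "sig": "valid"}]
bad = good + [{"to": "bob", "value": 7, "sig": "forged"}]
print(apply_batch({}, good))  # {'alice': 5}
print(apply_batch({}, bad))   # {} -- resulting state == initial state
```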

Abdelhamid

Yes, exactly. This is why we introduced Sierra in Cairo 1, to ensure everything is provable. This intermediate representation ensures that every Cairo 1 program execution can be proven, so it can be included in a zk-rollup.

What is a Based rollup?

Yaoqi

Based rollups originally came from a post by Justin Drake on the Ethereum research forum. His idea was that we could reuse Ethereum’s validators to validate rollup transactions, so we don’t need a separate set of nodes for each rollup. Especially in the future, we will have many rollups: some generic, many application-specific. In that case, if we can find a common venue, like the Ethereum validator pool, to validate these transactions, that would be great.

Of course, this is just an idea, because it introduces many technical difficulties. In theory we can ask Ethereum validators to validate rollup transactions, but bundling or embedding the rollup-validation logic into the Ethereum protocol itself is very difficult. We call that protocol-level validation, and it would require a hard fork of Ethereum nodes. In that case we could validate some rollup transactions. But if we really do that, you see a problem: we wanted L2 rollups to share Ethereum’s load, yet we end up asking Ethereum validators to do some of L2’s work. So many people are discussing how to do this.

Later, we talked to a researcher at the Ethereum Foundation named Barnabé, who is currently researching PBS. PBS is a proposal to split the Ethereum validator’s tasks into multiple roles, such as builders and proposers. Today, Flashbots takes on the builder role in PBS: builders assemble blocks and send them to Ethereum proposers, and once those blocks are included in the Ethereum network, the builders receive rewards. In that setup, how do we reward the Ethereum validators who are responsible for verifying rollups?

One solution is “restaking,” which many of you may have heard of from EigenLayer or other protocols. Users can restake ETH to secure other sequencing networks, or to reward the Ethereum validators who actually run the software to verify rollups. In that case, they earn rewards from L2 as well as through the restaking protocol. So far there have been many proposals along these lines, but overall the idea is to reuse existing Ethereum validators: how can we reuse existing ETH to help usher in a new era of rollups and L2 systems? It basically tries to simplify a lot of work for rollup projects: if rollups want new sequencers or new sources of stake, they can reuse existing infrastructure or existing stake. That is why it is “based” on Ethereum, and why that infrastructure and stake can be reused for rollups and L2s.

Disadvantages and use cases of shared sequencers and based rollups

Toghrul

I have some complaints about this proposal, and I am not convinced by the shared sequencer proposals at the moment. Of course, they are still at an early stage, and if these designs improve in the future, I may fully support them. But in their current form, they are unconvincing to me for several reasons.

First, for me, the main value proposition of a shared sequencer is to allow atomic composability between general-purpose rollups such as Scroll or Starknet. The problem is that with atomic composability, your rollup’s finality becomes the finality of the slowest rollup you compose with. So if Scroll composes with an optimistic rollup, Scroll’s finality becomes seven days. The main value proposition of a zkRollup is relatively fast finality, letting users withdraw to L1 within minutes; atomic composability basically gives that up.
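A one-function model of this objection, with illustrative finality times: a bundle of atomically composed rollups finalizes only when its slowest member does.

```python
# Illustrative finality times in seconds, not actual figures for any network.
FINALITY = {
    "zk_rollup": 30 * 60,                # minutes-scale, proof-bound
    "optimistic_rollup": 7 * 24 * 3600,  # seven-day challenge period
}

def composed_finality(rollups: list[str]) -> int:
    """Finality of an atomically composed bundle = its slowest member."""
    return max(FINALITY[r] for r in rollups)

# Composing a ZK rollup with an optimistic one drags its exits out to ~7 days:
print(composed_finality(["zk_rollup", "optimistic_rollup"]) / 86400, "days")
```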

Another disadvantage: if you want off-chain finality, you need to run an L2 node; once the data submitted on-chain is finalized by L1, you get local off-chain finality on L2. But if you don’t run a full node for every composed L2, local finality cannot actually be achieved. This means running such a system may be more expensive than running something like Solana, because multiple full nodes run at the same time, each with its own overhead, and so on.

So for these reasons, I just don’t think it makes sense for L2s. It’s a bit different for L3s: if someone wants to build an application-specific chain and doesn’t want to deal with decentralization issues, for example a game developer who only wants to work on the game itself, they can outsource the decentralization work. But I don’t think it makes sense for L2s yet.

I also have concerns about based rollups. For example, how do you share MEV profits with the provers? If you don’t address distribution, L1 basically captures all the MEV. Another, smaller issue is that its confirmation time equals L1’s confirmation time, about 12 seconds, and cannot be faster. Another problem is that since it is permissionless, multiple searchers can submit transaction batches at the same time, which may ultimately produce a centralized outcome: if one searcher is better connected than the others, builders will be incentivized to include its batches. In the long run this may leave only one searcher proposing batches for the L2, which is not a good outcome, because then we are basically back where we started.

Yaoqi

Interestingly, I had a call with Ben, the founder of Espresso, just last week, and we talked a lot about shared sequencers. As Toghrul mentioned, I think there is some uncertainty around the use cases for shared sequencing systems. This is mainly because generic L2s typically don’t run a large number of sequencers, for reasons of efficiency, complexity, and economics. I still think the best use case for shared sequencers, based rollups, or restaking is mainly RaaS (rollup-as-a-service) or similar platforms, where a lot of rollups are rolled out. If there aren’t many rollups, to be honest, we don’t really need a large sequencing network. When there are only a few generic L2s, those rollups already have their own sequencer systems and their own communities or partners; they don’t really need a separate third-party network. And for the third-party network this is a burden, because you have to tailor it to each L2, and each L2 has a different tech stack, which requires a lot of changes to your own network.

However, as Toghrul mentioned, there are some special use cases. For example, if we want interoperability at the sequencer level, shared sequencers are a potential way to get it: the same sequencer serves multiple rollups, so it can handle cross-rollup transactions and ensure atomicity across rollups A, B, and C.

But as I describe this scenario, you can also see the problem. If we really have a few of these shared sequencers, they become bottlenecks and new single points of failure. We would be giving these so-called shared sequencers too much power; they become more like a network controlling many rollups. In the end, we would need to figure out how to decentralize the shared sequencer itself.

But regardless, I think people are gradually discovering more problems while proposing more solutions, which is a good thing. It’s exciting to see new developments in this field every day. With all these new solutions, at least we are on the right track to truly decentralize the whole rollup space.

Abdel

Yes, I agree with both of your points. I think it makes more sense for Layer3s and app-chains, because they don’t want to shoulder the burden of incentivizing yet another decentralized network and finding partners to launch things like sequencers. So I think it makes sense for app-chains, but like Toghrul, I currently don’t see much value in it for Layer2.

Topic 4

What impact will a decentralized sequencer have on MEV?

Abdel

For Starknet, in the centralized setting, we do not extract any type of MEV and adopt a first-come, first-served model. In a decentralized setting, of course, there will be more MEV in the future. However, it is too early to say what the proportion will be, because it also depends on the design of the consensus mechanism and other aspects.

Toghrul

But the thing is, even if you don’t extract MEV, there may still be MEV happening on Starknet. Decentralization by itself doesn’t really reduce or increase MEV. Of course, if you apply some fair-ordering protocol or threshold encryption, then yes, you minimize MEV, but you can’t eliminate it completely. My philosophy is that MEV cannot be eliminated. If you’re just building a BFT consensus, or something on top of BFT consensus, it doesn’t actually affect MEV. MEV still exists, and the question becomes how searchers collaborate with sequencers to extract it.

Yaoqi

The problem is that even in a first-come, first-served model, there are still tricky parts. Once we expose the mempool to searchers, they still have an edge to exploit. Searchers are effectively waiting at the sequencer’s door: as soon as users send transactions, the searchers see them in the mempool, spot arbitrage opportunities (not just front-running or sandwich attacks), and can quickly place their own transactions ahead of others. So they have an advantage over other searchers.

But back to decentralization: I think it is mainly about censorship resistance, as we discussed at the beginning. A sequencer managed by a single team can claim to be fair to everyone, but nothing in the code prevents otherwise. If we have a P2P network, and we believe certain nodes have censored our transaction, we can send it to other nodes instead. So this is really about fairness in processing transactions on L2.

As for MEV: recently, in addition to the MEV generated within a single rollup, there is also MEV generated across bridges. In a relatively decentralized sequencing network, there are more opportunities to extract MEV. Suppose we have a shared sequencing network: if you can somehow influence the shared sequencer to reorder transactions, you basically have a bigger advantage than everyone else.

A shared sequencer network has pros and cons. On the upside, we can further decentralize the sequencer system. On the flip side, everyone has the chance to become a sequencer, so they can basically do whatever they want, and it becomes a dark forest again. We introduce decentralization, and then we face problems similar to those on Ethereum. That’s why Flashbots and the Ethereum Foundation are pushing PBS, separating proposers and builders and trying to have a separate solution on the builder side.

So when we look at this problem, it’s not a single problem anymore. It’s not one-to-one; it’s one-to-six, or even more.