Solana co-founder: Launching numerous L2 solutions is not feasible for blockchain scalability

Author: DARREN KLEINE, Blockworks; Translation: Song Xue, LianGuai

Polygon, Arbitrum, Optimism, Base… the list of Layer 2 solutions keeps growing.

Anatoly Yakovenko, co-founder of Solana, believes that most emerging Layer 2 solutions (primarily on Ethereum) will not be sustainable scaling solutions in the long run.

“They have fragmented the user base, so the user experience has become very, very complex.”

In the Lightspeed podcast (Spotify/Apple), Yakovenko gave an example from his time working at the Web2 company Dropbox. “We had a huge MySQL database for all the folders because once you fragmented them, it became very difficult to create links between different users in different folders.”

“Tracking consistency between two different databases is a headache. You have to synchronize everything through [layer-1].”

Yakovenko said that fragmentation at a large enough scale creates “massive composability” and user experience problems. “You have to resynchronize through [layer-1], which incurs the same costs. It’s very, very difficult to deal with,” he said.

Yakovenko used NFTs to illustrate the issue. “You can’t have an NFT on every rollup. It can only be bridged to one,” he said. “If I want a particular NFT, the marketplace where it is listed is the marketplace I have to buy it from, and it can only exist on one of those marketplaces.”

Yakovenko explained that by creating separate states, layer-2s break composability across NFT marketplaces. “This is the fundamental challenge [layer-2] faces.”
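
The one-location constraint Yakovenko describes can be sketched in a toy model (this is an illustration of the bridging concept, not any real bridge’s API): bridging moves the canonical copy of an NFT from one chain to another, it never duplicates it, so a marketplace can only settle trades for NFTs currently on its own chain.

```python
# Toy model of NFT bridging between rollups. All names here are
# hypothetical; real bridges use lock-and-mint contracts, but the
# invariant is the same: the NFT exists on exactly one chain at a time.

class ToyBridge:
    def __init__(self, chains):
        self.chains = set(chains)
        self.location = {}  # nft_id -> the one chain it currently lives on

    def mint(self, nft_id, chain):
        assert chain in self.chains and nft_id not in self.location
        self.location[nft_id] = chain

    def bridge(self, nft_id, dest):
        # "Lock" on the source, "mint" on the destination: the canonical
        # copy moves; it is never present on two chains at once.
        assert dest in self.chains
        self.location[nft_id] = dest

    def can_trade_on(self, nft_id, market_chain):
        # A marketplace can only settle trades for NFTs on its own chain.
        return self.location.get(nft_id) == market_chain

bridge = ToyBridge(["rollup_a", "rollup_b"])
bridge.mint("nft-1", "rollup_a")
print(bridge.can_trade_on("nft-1", "rollup_a"))  # True
print(bridge.can_trade_on("nft-1", "rollup_b"))  # False
bridge.bridge("nft-1", "rollup_b")
print(bridge.can_trade_on("nft-1", "rollup_a"))  # False: it moved
```

The single `location` mapping is the crux of the composability complaint: any market on a chain the NFT has not been bridged to simply cannot interact with it.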

What if there were only one Layer 2?

Yakovenko said that, in theory, a single Layer 2 (rather than the many that exist in the Ethereum ecosystem today) would simplify the composability problem.

“[Solana Virtual Machine] can run as many transactions in parallel as possible,” he explained. “It always scales up the hardware capacity to meet demand.”
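
The scheduling idea behind running “as many transactions in parallel as possible” can be sketched as follows. This is a simplified illustration, not Solana’s actual runtime code: transactions that declare disjoint sets of accounts can execute concurrently, while transactions touching the same account must be serialized.

```python
# Hedged sketch of account-based parallel scheduling: greedily group
# transactions into batches whose account sets are pairwise disjoint.
# Each batch could then execute in parallel on separate cores.

def schedule_batches(txs):
    """txs is a list of (tx_id, accounts) pairs, where accounts is the
    set of accounts the transaction reads or writes."""
    batches = []  # list of (tx_ids, accounts_used_by_batch)
    for tx_id, accounts in txs:
        placed = False
        for batch, used in batches:
            if used.isdisjoint(accounts):
                # No conflict with anything already in this batch.
                batch.append(tx_id)
                used |= set(accounts)
                placed = True
                break
        if not placed:
            # Conflicts with every existing batch: start a new one,
            # i.e. this transaction runs in a later parallel wave.
            batches.append(([tx_id], set(accounts)))
    return [batch for batch, _ in batches]

txs = [
    ("t1", {"alice", "bob"}),
    ("t2", {"carol", "dave"}),  # disjoint from t1: same batch
    ("t3", {"bob", "erin"}),    # conflicts with t1: next batch
]
print(schedule_batches(txs))  # [['t1', 't2'], ['t3']]
```

The point of the sketch is that throughput scales with hardware as long as most transactions touch different accounts, which is the “scales up the hardware capacity to meet demand” claim in the quote above.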

“Then, you dump all the data into danksharding,” he said. Ideally, this would be the “perfect implementation of the [data availability] layer,” one that tries to maximize its [data availability] system.

“So, you have a [Layer2], a bandwidth-optimized system. That’s basically the essence of Solana,” he said with a smile. “Solana itself can execute all programs asynchronously and fork them off into separate channels.”

“All the forks get picked up quickly,” he said, “and then these larger systems can execute the programs. Then, if you need to, you can do batch [zero-knowledge] verification. All these things can be achieved on Solana.”

Yakovenko pointed out that there are still “fundamental differences” between Solana and multi-layered chain designs. “Solana is not required to do danksharding, because it breaks the idea of global state synchronization.”

“I don’t want to compromise data availability,” he said. “Even though you may be able to improve bandwidth, there is still a need for trade-offs.”

Yakovenko argued for actively avoiding asynchronous designs. “When I submit [a transaction] here,” he said, “it is observed in Singapore, in Brazil, around the world, at the fastest speed physics allows. That actually creates value for the world.”

Yakovenko explained that the goal is to minimize “any information asymmetry between two participants.”

“It allows a fair market to exist,” he said. “That’s a core use case you can’t optimize for with all these other systems.”

“A single rollup can occupy all the bandwidth of the data availability layer, whether it’s Solana, Celestia, or Ethereum,” he said. “I think many engineers agree that this is a more efficient design.”