Full Text of Gavin Wood’s Speech: How Polkadot Shifts from “Chain-Centered” to “Application-Centered”
Summary: Polkadot is shifting from a "chain-centered" to an "application-centered" model to better serve developers and users. While many blockchain projects focus on improving the chain itself, Polkadot is giving developers and users standardized tools and infrastructure for building and using applications. The shift makes applications easier to build, gives users access to a range of applications, and lets chains communicate with each other.
Author: Gavin Wood, PolkaWorld
On June 28th, the annual flagship event of Polkadot, Polkadot Decoded conference, was held in Copenhagen, Denmark. Web3 enthusiasts, builders, and investors from around the world gathered to discuss the latest developments in the Polkadot ecosystem.
The most surprising part of this conference was the appearance of Polkadot founder Gavin Wood as a special guest, who brought some very important insights.
Gavin shared Polkadot's future direction and proposed a new way of looking at it: not limited to the existing parallel chains and relay chain, but focused on the more fundamental resource that blockchains need, computing cores, viewing Polkadot as a multi-core computer.
Gavin also suggested that Polkadot may retire the existing slot auction mechanism in favor of a more flexible, core-centered resource allocation, such as monthly "bulk purchases" and on-demand "instant purchases" of core time.
The following text is compiled from Gavin’s speech by PolkaWorld.
Polkadot 1.0
Polkadot can now be called Polkadot 1.0.
At this stage, Polkadot's functionality is complete: it has implemented everything described in the white paper seven years ago, and the Polkadot 1.0 codebase is about to be released.
So what is Polkadot 1.0? In the original white paper, I wrote that "Polkadot is a scalable heterogeneous multi-chain". That is to say, it is a blockchain, but one whose consensus can also provide security for other blockchains (parallel chains).
To put it artistically, it’s probably like this.
The middle one is the relay chain, which is responsible for Crowdloan, Auction, balance management, staking, governance, etc. It is a relay chain with many functions. The small dots on the side are parallel chains, and the relay chain also guarantees the security of the parallel chains. Moreover, these parallel chains can communicate with each other.
So what is the product form Polkadot provides? It is the slot: leased in six-month periods, obtainable for up to two years in advance, plus the Crowdloan mechanism. Beyond that, there is no other way to use Polkadot. In Polkadot 1.0, the only product is the parallel chain slot.
A New Perspective on Polkadot: Multi-Core Computer
There is a saying that speaks to a truth: if a person wants to truly understand the world, a change of perspective is crucial, even more important than travelling to a larger world.
Therefore, let us change our perspective and re-examine what Polkadot is.
Parallel chains, relay chains and other concepts are all good and were the way I and many others understood Polkadot early on; these were what we were striving to build.
But over time, we found that what we were doing was actually somewhat different from what we originally envisioned. Sometimes, if you’re lucky, or if your team is strong, you may end up building something even more awesome than what you initially set out to do.
In computer science, abstraction and generalization are important. Later, we discovered that the degree of abstraction and generalization we applied to Polkadot was much higher than we had imagined.
So what is the new perspective on Polkadot?
Polkadot is a multi-core computer
First, what we are doing is not about chains, but about space, about the underlying resources required by chains.
Second, Polkadot is a platform for builders to create applications and for users to use them. Fundamentally, it is not a platform for hosting blockchains. Chains happen to be one way of making Polkadot useful, but they may not be the only way.
Finally, it is very resilient. I think this is a more neutral term than “unstoppable”, meaning it can resist any attempt to make it do something it was not originally intended to do, that is, resist the distortion of its original intent.
So overall, Polkadot is a very resilient, general provider of continuous computation. Continuous computation means it is not a job you finish and are done with; what we want to support are long-running tasks that can continue even if temporarily paused. It is a bit like the "world computer" vision of 2015 and 2016.
So what is Polkadot from this perspective? It is a multi-core computer where multiple cores run simultaneously, doing different things. A blockchain running on one core is a parallel chain: it runs continuously on a core that has been reserved for it. That is how we understand parallel chains under this new paradigm.

Let's take a closer look at this "Polkadot computer". The "Polkadot supercomputer" is multi-core and more powerful than a regular computer. It has about 50 cores that run continuously and in parallel. According to our prediction model, after years of benchmarking and optimization, the number of cores can grow to 500-1000 in later stages.

Now let's look at the performance of each "core". These cores are similar to CPU cores: they have many describable features and attributes, and essentially they are things that do computation.

- Bandwidth, the total amount of data entering and leaving the core, is approximately 1 MB/s.
- Raw computing power, how much computation it can do, is approximately 380 on Geekbench 5.
- Latency, the time interval between two consecutive tasks, is approximately 6 seconds.

As time goes on and hardware improves, these figures will keep improving.

In the past, the only way for these cores to be useful was through parallel chains. However, there are other ways to use the cores that make them more accessible to everyone.

So what does this mean? Cores are actually very flexible. They are not locked into processing one fixed task forever; they can switch between tasks easily, like a CPU. And since the cores are flexible, core procurement should be flexible too.

The slot auction model is not flexible enough; it was designed for Polkadot's original paradigm, a single long-running chain. Parallel threads were later added as a supplement, but they were only a small step toward the right paradigm.
This model sets a high entry threshold for the Polkadot ecosystem. If you are like me, someone who likes to tinker with technologies, you may not want to deal with fundraising and marketing; you just want to deploy some code and see if it runs. Under the current model, I think we have missed many potential collaborators like that.

A possible future is a "flexible version of Polkadot". We can abandon the lease-and-slot model and instead view Polkadot as a set of "cores". The time on these cores, which we now call "core time" (previously called "block space"), can be sold regularly; people can buy and use core time.

My suggestion is this. The sale of native core time on Polkadot (the primary market) can happen in two ways: bulk purchase and instant purchase. Bulk purchases are conducted once a month, and one purchase covers 4 weeks of use. Instant purchases are a bit like the pay-as-you-go model of parallel threads: on-demand purchasing.

The cost of using Polkadot, more precisely the cost of using Polkadot's cores, will be determined by the market. There may be multiple cores available on the market, or there may be none; that is how markets work. For immediate use, core time will be on sale continuously. In other words, we maximize flexibility and leave the rest to the market.

Let's take a closer look at how bulk purchases work. This is not a final proposal, but a version put forward for discussion. The sale is conducted once every four weeks, and each sale offers four weeks of core time at a fixed price. Everyone pays the same price.
- The goal is to lease out 75% of available core time through bulk purchases.
- Prices will fluctuate based on that target ratio.
- Unrented cores will enter the instant market.
- Special consideration will be given to customers who have leased before.
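To make the price rule above concrete, here is a minimal sketch; the linear adjustment, the function name, and the `sensitivity` parameter are assumptions of mine for illustration, since the talk does not specify a formula.

```python
# Hypothetical sketch of the bulk-sale pricing rule: each 4-week sale posts
# one fixed price for everyone, then nudges the next sale's price up or
# down depending on how utilization compared to the 75% target.
# The linear adjustment below is an illustrative assumption, not the
# actual mechanism.

TARGET_UTILIZATION = 0.75  # goal: lease out 75% of core time in bulk

def next_bulk_price(current_price: float, cores_sold: int,
                    cores_offered: int, sensitivity: float = 0.5) -> float:
    """Return the fixed price for the next 4-week bulk sale."""
    utilization = cores_sold / cores_offered
    return current_price * (1.0 + sensitivity * (utilization - TARGET_UTILIZATION))
```

If every offered core sells, the next sale is dearer; if few sell, it is cheaper, and the unsold cores fall through to the instant market.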
Instant Purchasing
Let’s talk about instant purchasing. Essentially, it is purchasing cores when you need to use them.
- It uses an on-chain market-maker or broker model with predetermined prices, aiming for 100% utilization.
- You can take core time from the bulk market, divide it into small pieces, and sell it on the instant market.
- The total revenue from instant sales is distributed evenly among core-time providers (including Polkadot itself).
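The even revenue split described above can be sketched in a few lines; the function and provider names are hypothetical illustrations, not an actual interface.

```python
def distribute_instant_revenue(total_revenue: float,
                               providers: list[str]) -> dict[str, float]:
    """Split instant-market sales evenly among core-time providers.

    Providers include Polkadot itself plus anyone who resold bulk
    core time on the instant market.
    """
    share = total_revenue / len(providers)
    return {provider: share for provider in providers}
```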
The Essence of Instant Purchasing
- Purchases are made on-chain, by chains acting for people.
- It can be used to increase transaction throughput (when you have extra calls, an additional core can double your capacity).
- It can be used to reduce latency (a chain that produced one block every 12-18 seconds can, with an additional core, produce one every 6 seconds).
- It can support new forms such as "core contracts."
The Essence of Bulk Purchasing
- Bulk core time is a non-fungible asset. Cores themselves are fungible, but once core time is divided into distinct periods, those periods become non-fungible. They can in principle be represented over XCM: the broker can present core times to other chains, which may want to trade them.
- The broker chain (a system chain) can cut these time periods into many NFTs.
- These periods can be consumed via the broker parallel chain, allowing owners to assign computation to Polkadot cores.
How to Use Bulk Purchasing
So, after you get these times, how do you use them?
- You can assign it all to one parallel chain. That is the current situation, except allocated month by month, with each chain occupying a core.
- You can assign it to several parallel chains that share a core and take turns using it.
- You can put it on the instant market.
- You can divide it up and sell the pieces separately, possibly through a separate parallel chain using the NFT XCM method.
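Dividing a four-week allocation into separately usable (or sellable) pieces can be modelled roughly as below; the `Region` name and timeslice units are assumptions for illustration, not the actual interface.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Region:
    """A contiguous span of time on one core (illustrative model)."""
    core: int
    begin: int  # start, in relay-chain timeslices
    end: int    # exclusive end

def split(region: Region, at: int) -> tuple[Region, Region]:
    """Divide one region of core time into two back-to-back pieces.

    Each piece could then be assigned to a chain, resold, or listed
    on the instant market independently.
    """
    if not region.begin < at < region.end:
        raise ValueError("split point must fall strictly inside the region")
    return (Region(region.core, region.begin, at),
            Region(region.core, at, region.end))
```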
Rental control in bulk purchases
So what if you want to hold a core for the long term? Then, of course, you care where prices are heading.
I recommend a rule like this: when allocating a new month's bulk core time, the broker records the price paid and who it was allocated to. The next month, that buyer may purchase the same core time at a capped price (there is an upper limit on the increase).
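As a sketch, the renewal rule might look like this; the 20% figure is an invented placeholder, since the talk only says there will be some upper limit on the increase.

```python
RENEWAL_CAP = 0.20  # assumed: at most a 20% rise per 4-week renewal

def renewal_price(recorded_price: float, market_price: float) -> float:
    """Price offered to last period's tenant for the same core time.

    The broker recorded what the tenant paid; next month they pay the
    market price, but never more than the recorded price plus the cap.
    """
    return min(market_price, recorded_price * (1.0 + RENEWAL_CAP))
```

A renewing tenant is thus shielded from sudden price spikes while still paying the market rate when prices are flat or falling.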
What does this mean for existing parallel chains?
- Existing parallel chain leases remain unchanged. If you have won a two-year slot, it continues.
- Bulk purchase pricing is determined by governance.
- Personally, I think we should start from a relatively low price to lower the barrier to participation.
- Floor prices, rental control, and priority renewal rights together provide long-term price guarantees. Today we guarantee only two years of usage, but in theory renewal can continue indefinitely thereafter.
In addition, parallel chains will have more flexible block generation times.
Currently, the block generation time of parallel chains is fixed, approximately 12 seconds, and after further optimization it will be approximately 6 seconds. In the future, I think the block generation time of parallel chains will be more flexible.
A parallel chain will have a "base rate". For example, a parallel chain sharing a core with one or more other chains might produce a block every 12 or 18 seconds. If higher throughput is needed, it can go to the instant market, or purchase more core time OTC, as some enterprise chains might.
Core time can also be compressed (sacrificing bandwidth to reduce latency). By compressing multiple parallel chain blocks into one relay chain core, latency drops, but there is some extra bandwidth cost, because you pay for the overhead at the start and end of every block.
Core time can also be combined (adding extra cores to raise performance and reduce latency). You can hold two cores over the same period and get two full parallel chain blocks, cutting the block time from 12 seconds to 6, or even 3.
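The arithmetic behind these options is simple. As a sketch, assuming an idealized 6-second per-core interval and perfectly even scheduling (an illustrative model, not the scheduler's actual behavior):

```python
def block_interval(per_core_interval: float, cores: int,
                   share: float = 1.0) -> float:
    """Average seconds between a chain's blocks (idealized model).

    per_core_interval: seconds between blocks one full core gives
                       (e.g. 6.0 after optimization)
    cores:             number of cores assigned to the chain
    share:             fraction of each core's time the chain holds
                       (0.5 when two chains alternate on one core)
    """
    return per_core_interval / (cores * share)
```

Two full cores halve the interval to 3 seconds, while half of a shared core doubles it to 12, matching the figures above.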
All of the above is significant for the existing parallel chain:
- Get more transaction bandwidth when you need it
- Lower costs when you don't need it
- Can be a high-performance multi-core chain
- Can be a periodically running chain
- Can be a pure pay-per-use chain
- Can be a low-latency chain (e.g. one block per second)
- Can plan long-term capital expenditure
So how can cores be used? Core time can be split up and recombined.
Simple Use of Cores
This picture shows the current situation, the simple use of core time. From left to right, time moves forward; each row corresponds to one core on Polkadot. Currently, each of the 5 parallel chains occupies one core.
In fact, though, it does not matter which core a chain is assigned to. Parallel chains can run on any available core without affecting performance; the cores have no special affinity for any particular chain.
Flexible Use of Cores
Flexible core usage, also known as exotic scheduling.
Intervals can be divided
Intervals can be divided: the owner of an interval can split it and trade the pieces. A parallel chain can run for a period, then pause its own transaction processing and let another parallel chain run.
We see the parallel chain in light blue stop for a while and then continue. The green chain does the same.
Can cross intervals
Multiple chains can take turns running on a core to share costs. For example, you may occupy 2/3 of the time and another chain 1/3, such as the light blue and yellow chains in the figure.
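Such a rotation can be described as a repeating pattern over consecutive slots on one core. A toy sketch (chain names hypothetical, matching the colors in the figure):

```python
from itertools import cycle, islice

def interleave(pattern: list[str], slots: int) -> list[str]:
    """Assign consecutive slots on one core by repeating a share pattern.

    A pattern of ["blue", "blue", "yellow"] gives the blue chain 2/3 of
    the core's time and the yellow chain 1/3.
    """
    return list(islice(cycle(pattern), slots))
```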
Core compression is possible
The same core can process multiple blocks at once: validating several blocks on one core achieves a higher block rate and lower latency.
Cores can be combined
Use multiple cores to get more powerful computing power, which can be momentary or long-term.
The same ParaID, the same "task", can be assigned to multiple cores at the same time, using two cores to process two blocks over the same period. For example, the orange chain here has one fixed core plus another, intermittent one.
Possible future direction: multiple chains share a core
Two to three chains can share the same core at the same time to reduce costs without reducing latency. This is a more speculative usage.
Possible future direction: Mix and match the above usage
In theory, all of the above usages are combinable, and mixing and matching them will result in an extremely flexible and universal computing resource.
Chain-centric → Application-centric
Polkadot 1.0 is a chain-centric paradigm: Let isolated chains send messages to each other, which is essentially similar to a single chain plus cross-chain bridges, except that the parallel chains are all connected to the relay chain.
This leads to a fragmented user experience. Users may use a certain application on one chain, but they also want to use this application on another chain, that is, to use the application in a multi-chain way.
However, if we have a chain-centered paradigm, we will also have a chain-centered user experience. And if an application is not chain-centered, everything will become difficult.
In reality, if we want to fully utilize the potential of Polkadot, the application needs to be deployed cross-chain, and it needs to be seamless cross-chain, at least for users, and ideally also for developers.
This is an artistic schematic diagram of “what Polkadot looks like”:
In order to quickly launch Polkadot, we chose to put many of Polkadot’s application capabilities on the relay chain. But this is actually a trade-off.
The advantage is that we can deliver many functions in a short time before the technical foundation is fully completed, such as excellent pledging, governance, tokens, and identity systems.
But it also has a cost. Tying many things to one chain creates problems: the relay chain cannot always devote its resources to its own job, ensuring network security and message transmission. And it pushes everyone toward a chain-centered way of thinking.
In the past, we could only focus on one chain and put all of Polkadot’s functions on the relay chain when it was launched. This was our earliest goal. But unfortunately, the relevant tools have not kept up with the era when both applications and users are cross-chain.
Now, system-level functions are shifting towards the paradigm of cross-chain deployment. System chains are more common, and the things processed by the relay chain are getting less and less. Applications need to be able to cross these chains and cannot make the user experience difficult as a result.
This is a schematic diagram I just drew half an hour ago. This is what I think is a better perspective to understand “what is Polkadot”.
Polkadot is not actually a relay chain in the middle, with parallel chains surrounding it, at least not for people who come to the Polkadot ecosystem. In fact, Polkadot should be an integrated system, a computer that runs many applications.
Yes, there are boundaries between the business logic components of different chains (that is, parallel chains), but this may not be as important to users as we think. More importantly, users can do what they want to do, and they can do it easily, clearly, and quickly.
The dots in the diagram are applications, and the dashed lines dividing them are "paras". I don't want to call them parallel chains, because that would lead us into the mental trap of "each parallel chain corresponds to one core". That has been Polkadot's pattern so far, but it is not the only choice.
These dots should be able to communicate with each other normally, and almost as easily as they do with the space inside the dotted line.
XCM
How can this be achieved? This brings us to XCM.
XCM is a language, and the transport layer that really delivers the message is called XCMP. I admit that these two names are a bit confusing.
What is XCM for? Its function is to abstract common functions in the chain, creating a descriptive language to describe what you want to do or what you want to happen.
As long as the chain honestly interprets the message, everything is fine. Unfortunately, there is no guarantee that a chain will honestly interpret your XCM message. In a trustless environment, that is not ideal.
Let me give an example from trade. XCMP, the means of transport, gives us a secure trade route: we will not be robbed along the way, and whatever is sent is guaranteed to be received. But it does not give us a framework for creating binding terms between the trading parties.
A more intuitive example is the European Union. What is it? Essentially, an alliance you can join: a treaty framework under which different sovereign countries abide by specific treaties. It is not perfect, because although a common judiciary can interpret each country's laws and check compliance, it cannot prevent a country from changing its laws so that they no longer meet the EU's requirements.
In Polkadot, we face a similar problem. XCM is a language for expressing intent, and WebAssembly is how the laws a parallel chain must abide by are expressed in Polkadot. Polkadot can be thought of as the European Court of Justice, ensuring that each parallel chain complies with its own logic; but that does not mean the parallel chain cannot legitimately change that logic and so refuse what an XCM message asks of it.
XCM is a language for expressing intent, such as "I am ready to transfer assets" or "I am ready to vote". Between trusted system chains, this is not a problem. But between chains with different governance or legislative processes, it is. In the Polkadot ecosystem, we can do better.
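The gap can be illustrated with a toy model; these class and function names are illustrative stand-ins, not real XCM instructions. An XCM message carries an intent, but the receiving chain's own runtime decides what, if anything, that intent actually does.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TransferAsset:
    """An intent expressed in an XCM-like language: 'credit this asset'."""
    asset: str
    amount: int

def honest_runtime(balances: dict[str, int], msg: TransferAsset) -> dict[str, int]:
    """A chain that faithfully translates the intent into a state change."""
    updated = dict(balances)
    updated[msg.asset] = updated.get(msg.asset, 0) + msg.amount
    return updated

def changed_runtime(balances: dict[str, int], msg: TransferAsset) -> dict[str, int]:
    """A chain whose (legitimately upgraded) logic ignores the same intent.

    Nothing in XCM or XCMP prevents this; it is the gap the next
    section's proposal aims to close.
    """
    return dict(balances)
```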
Accord
Here, I propose a new term called Accord. An Accord is a voluntary covenant that spans multiple chains. It’s like saying “I voluntarily agree to follow this business logic, and anything I do will not change this.” The chain itself cannot disrupt the logic of the covenant.
Polkadot guarantees the faithful execution of this logic. The covenant will be specific to a particular function. Any chain that joins the covenant must comply with the rules, which will be specific to that particular function.
To keep the entry threshold low, proposing an Accord is permissionless. And because joining is voluntary, an Accord affects no one until it has been approved and registered.
This diagram is not the most accurate, but the general idea is this. The outer circle is Polkadot, with some small dots inside, and we place the diagram horizontally. So the Accord is a separate mechanism governing its locality.
Accords cannot exist in every system. To my knowledge, Polkadot is the only system that can support them, because it is the only one with a uniform-strength security layer that can provide a specific state transition function for each shard. These features enable a mode of cooperation that other architectures (such as cross-chain bridges) cannot achieve.
People familiar with Polkadot may have heard of “SPREE”, which is the technology that can implement Accord.
Some scenarios for using Accord
Let’s take a look at some possible use cases for Accord.
One of them is asset hub.
Currently, if two chains want to interact over an asset, they must go through a third chain, the asset hub chain. If one of the chains is the asset's native chain, things are slightly different; but in general, if two unrelated chains want to trade a third party's asset, an additional path must be established.
With Accord, you don’t have to do that. You can think of it as an embassy, which exists in the universal process space, is scheduled on the same core as the parachain at the same time, but is not part of the parachain business logic, but exists separately. This is a bit like an embassy has its own country’s laws, but its geographical location is in the local country. Similarly, Accord is like external business logic, but it is recognized by everyone and exists locally.
Another example is the XCM router for multicast. It can send a message that spans multiple chains and can be ordered in some way. For example, to perform one operation here and another there, but always with my permission. This is currently not possible.
Another example is the decentralized exchange, which can set up forward stations on multiple different chains so that exchanges can occur directly on the local chain without the need for bidirectional channels.
These are just a few examples I can think of for now, and I believe that the potential of this technology will be further unleashed in the future.
Project CAPI
Let me briefly talk about the user interface, Project CAPI. Its purpose is to give Polkadot applications that span multiple chains smooth, well-designed user interfaces, even when running on light clients.
Hermit Relay
This means moving all user-level functions out of the relay chain and onto system chains. For example:
- balances
- staking
- governance and identity
- core leasing
Ultimately, Polkadot’s functions will span multiple parallel chains, freeing up space in the relay chain.
Building a Resilient Application Platform
Finally, I want to reiterate what we are doing and why. It’s all about resilience.
The world is always changing, but it’s important to respect clear intentions if people have them. The systems we use today are not resilient enough and are based on very old-fashioned ideas.
When your system lacks cryptography and game theory, bad things happen. For example, a large-scale network attack mentioned in this news article led to the leakage of information from 6 million people, that is, one-thousandth of the world’s population. And these things happen frequently.
So how do we build a system that is not subject to these threats? First, of course, we need to build a decentralized, cryptography-based, and game-theory-proof system. But what specifically do we need to do?
Although we promote “decentralization” every day, if everything needs to be supplied through the same RPC provider, it cannot be considered truly decentralized.
Decentralization requires multiple factors to work together:
- Light clients: Smoldot and CAPI will enable high-performance UIs based on light clients.
- ZK primitives: building a feature-rich, high-performance ZK primitive library. The first library is almost complete and will provide privacy protection for on-chain collectives (including the Fellowship).
- Sassafras consensus: a new PoS consensus algorithm that improves security and randomness, with high-performance transaction routing. It improves parallel chain performance and user experience; encrypted transactions prevent front-running and may capture potential MEV revenue.
- Mix networks / onion routing: avoids leaking the IP address behind a transaction; a general messaging system between users, chains, and off-chain workers (OCWs).
- Human decentralization: bring many, diverse people into the system. Encourage participation through governance, treasuries, salaries, and grants, and absorb and maintain collective knowledge.
Remembering our original intention
Finally, I want to reiterate our original intention. Polkadot does not exist to create one specific application; it exists to provide a platform on which many applications can be deployed, applications that can draw on each other's functionality to improve users' welfare. We want to make sure this vision is realized as soon as possible: that is Polkadot's mission.
If Polkadot cannot stay resilient to changes in the world, building it would be meaningless. Those changes might be rival ways of achieving the same goals, or threats from organizations hostile to a decentralized world.