This article covers nine topics: the current state of DApp development, the three scaling ideas for public chains, Ethereum's progress toward Serenity, Gavin Wood's new venture Polkadot, Cosmos and its different route, a comparison of DApp development approaches, a comparison of network topologies, what "cross-chain" actually means, and technology selection for next-generation DApp development.
The main title of this talk is "An Analysis of the Polkadot Architecture," and the subtitle is "A Review of Next-Generation DApp Development Technology." The subtitle actually summarizes the talk better, because we will cover not only Polkadot but a fairly comprehensive survey of platform-style public chains, including Ethereum 2.0 and Cosmos. Polkadot is, of course, the focus.
My hope is to clarify the direction of DApp development technology. This is one of the central questions in the blockchain industry: it matters not only to developers but to every other participant as well. I will therefore keep things as plain as possible so that a non-technical audience can follow along.
1. Why are DApps important?
Let me start with DApps themselves. The background here is long; in the end I decided to cover it only briefly, since skipping it entirely would leave a logical gap.
A DApp is a Decentralized Application — a decentralized Internet application. Bitcoin, for example, is a DApp: a decentralized store-of-value cryptocurrency. The concept of decentralization itself is more subtle. Vitalik Buterin has an article explaining that decentralization has three dimensions — architectural, political, and logical — which is worth looking up.
From the user's perspective, decentralization can be understood simply as a trustworthy application property: the application cannot be controlled by any individual or small group. Blockchain is the most popular technical means of realizing DApps; put differently, blockchain is the infrastructure of DApps.
Unless otherwise specified, "blockchain" in this talk refers to public chains. What distinguishes a DApp from an ordinary Internet application is the D — decentralized. So why does decentralization matter? Why is it worth the effort of so many IT professionals? Or is it a pseudo-demand?
The clearest answer comes from Chris Dixon, a partner at a16z, who in February 2018 published an article titled "Why Decentralization Matters," explaining exactly that.
To understand his argument, we first need to know what network effects are. A network effect is the mechanism by which the utility of a product or service grows as its user base grows.
Take WeChat: the more people use it, the more powerful and indispensable it becomes. The core of an Internet application is to build and maintain network effects. Giants such as Google, Amazon, and BAT have established network effects so strong that latecomers find them nearly impossible to overcome.
Chris argues that to establish network effects, a Web platform must at first do everything possible to attract users, developers, and businesses. But once it passes the critical scale, the platform's pull grows ever stronger — and so does its control.
For example, an e-commerce business that does not rely on Tmall, JD.com, or WeChat can hardly succeed, because those platforms have formed enormous network effects that lock in both consumers and merchants. Web platforms are operated by companies, and the purpose of a company is to maximize profit.
Once users and merchants cannot leave the platform, the relationship between platform and participant changes. As the chart above shows, the platform initially works to attract users; after the network effect takes hold, it starts extracting as much money from them as it can.
The platform gradually shifts from cooperating with developers, content creators, and merchants to competing with them. Everyone knows, for instance, that Baidu's search results are not ranked by the authenticity and importance of the information — whoever pays more ranks first.
In the beginning, Baidu reached out to companies far and wide, asking everyone to submit their information to make user search easier. Now, unless you pay, your company's official website cannot be found on Baidu. To make money, Baidu even diverted patients to Putian-affiliated hospitals. Chinese users know all this, yet they still cannot do without Baidu, because Baidu holds the most data and understands users best. Think about how dreadful that is.
DApps can break the monopoly of Web platforms. Because a DApp is decentralized — an economic system maintained by an open and transparent consensus — the more a participant contributes to the network, the more rights they receive, yet no single party can control the whole.
Any participant who tries to harm the interests of others will either fail outright or cause a fork. A DApp can remain open and fair over the long term, so no one has to worry about the bridge being demolished after crossing the river. It is a bit like the social ideal of distribution according to contribution.
This is the truly open network — the original intention of the Web that should not be forgotten. That is why many Web veterans have a soft spot for DApps and for the blockchain technology that realizes them, and place high hopes on both.
2. The DApp development dilemma
Decentralized applications carry the grand ideal of reshaping the Web, but their current state is embarrassing. Everyone knows this, so I will only touch on it briefly.
First, there are very few users. For example, the prediction market Augur, a star project in the DApp field, raised tens of millions of dollars and took more than three years to develop; after launch it has only dozens of daily active users — and Augur is not an isolated case.
Look at the chart above, from DAppReview: the top five Ethereum DApps by daily active users peak at barely a thousand. Top Web applications reach hundreds of millions of daily active users — a gap of five orders of magnitude.
Why is the DApp landscape so dismal? The main reason is that blockchain infrastructure is weak, which makes DApps hard to use and the user experience poor. Ethereum today is like a village road with high tolls and constant congestion — of course no one wants to move in.
The figure below shows Ethereum's utilization rate. From the end of 2017 until now, Ethereum has been running at close to full capacity. In other words, DApps are slow and expensive even though the infrastructure is already going all out — there is no headroom left.
In such a dilemma, DApps cannot break through to critical scale, generate network effects, or compete with centralized Web applications. The blockchain infrastructure must therefore be upgraded.
3. Why slow and expensive: blockchain's extremely redundant architecture
The root cause of DApps being slow and expensive is the architectural limitation of the blockchain platform, which can be summarized in one sentence: blockchain is an extremely redundant computing architecture.
Redundancy means duplication — having multiple computers perform the same computations and store the same data. Redundancy is deliberate, not waste; appropriate redundancy is commonplace in enterprise computing and on the Internet.
The most common form is the master-standby structure: two identical machines, one primary and one backup, performing the same computations and storing the same data. If the primary fails, the standby takes over immediately. Two machines do the work of one, but the availability of the system improves.
So why is blockchain extremely redundant? Because blockchain pushes redundancy to the limit: every computer in the network — whether a few hundred or tens of thousands — performs the same computations and stores the same data. The degree of redundancy could not be higher. Extreme redundancy means extremely high cost. How high?
Vitalik has offered an estimate: executing a computation or storing data on Ethereum costs about a million times more than doing the same on an enterprise cloud platform. In other words, a computation that costs 100 dollars on an ordinary cloud service would cost 100 million dollars on Ethereum. So when considering which businesses can become DApps, cost must be taken into account.
Don't shove every random business onto the blockchain just to tell a story and raise money — that is a huge waste of resources. So what do we get for paying a million times the cost? High availability, of course, is no problem: in the Bitcoin or Ethereum network, computers can join or leave at any time with no effect on the service.
But high availability is clearly not enough, since moderate redundancy achieves it without going to extremes. The new property that extreme redundancy buys us is decentralization. Concretely, decentralization gives users three things: trustlessness, permissionlessness, and censorship resistance.
Permissionless is easy to understand: anyone who wants to use Bitcoin or Ethereum needs no one's approval. Censorship resistance is also clear: no one can stop you from using the blockchain. Take WikiLeaks — the most powerful country in the world hates it and would love to shut it down, yet WikiLeaks can still receive Bitcoin donations.
The vaguest of the three is trustlessness — in English you will see trustless, trust-free, or trust-minimized. I think trust-minimized is the most accurate term, because using a decentralized application still implies trusting the blockchain network as a whole.
For example, to use Bitcoin or Ethereum you must trust that they will not suffer a 51% attack; to use Cosmos or Polkadot you must assume that fewer than one third of the validators are malicious. So the precise meaning of trustlessness is: on the premise of trusting the blockchain network as a whole, you do not need to trust any specific miner or validator, nor your counterparty.
For a given application, if the benefits users gain from trustlessness, permissionlessness, and censorship resistance are worth a million times the cost, then putting that application on the blockchain is reasonable. Does such an application exist? In my view, the only need that clearly justifies this cost today is the store of value.
Jimmy Song, the well-known Bitcoin maximalist, has said that Bitcoin will succeed while fiat currencies and all altcoins fail. His reasoning: centralized money can never beat decentralized money, and decentralized products can never beat centralized products.
The underlying logic is that with a cost gap of a million times for the same Internet service, the decentralized version naturally cannot compete. His argument is reasonable, but too rigid — the million-fold cost gap is not inevitable; it can be narrowed.
Can the cost gap between DApps and centralized Web applications be narrowed from a million times to a hundred thousand, ten thousand, or even a thousand times, while still preserving the three benefits of trustlessness, permissionlessness, and censorship resistance? The answer: entirely possible. As long as the degree of redundancy is reduced, cost comes down. There are three families of methods — the three scaling ideas for blockchains: representative systems, layering, and sharding.
4. The first scaling idea: representative systems
The first scaling idea, the representative system, comes from humanity's ancient political wisdom: democracy is good, but direct democracy involving everyone is too inefficient. Brexit was decided by referendum, but obviously a referendum cannot be held on every issue.
In a representative system, the people elect representatives, who then deliberate on laws and major resolutions. Representation improves decision-making efficiency for two reasons: first, the number of people participating in consensus is greatly reduced; second, representatives are full-time politicians with more resources and expertise for deliberating national affairs.
Applied to blockchain scaling, the most familiar representative system is EOS with its DPoS consensus. EOS token holders elect supernodes, and 21 supernodes take turns producing blocks. Compared with Ethereum, the number of computers participating in consensus drops by three orders of magnitude.
Moreover, the computing power of Ethereum's nodes is uneven, so protocol parameters must accommodate low-end machines, whereas EOS supernodes all meet the same high requirements for hardware and network bandwidth. It is no surprise, then, that EOS reaches thousands of TPS, far higher than Ethereum.
EOS has been in the eye of the storm since its birth. Some in the crypto community criticize it harshly as centralized — some even argue it is not a blockchain at all. Supporters counter that EOS's degree of decentralization is sufficient, and that users still enjoy the benefits of trustlessness, permissionlessness, and censorship resistance.
So is EOS decentralized enough? My judgment: in some cases yes, in others no. It depends on the application and on who is using it.
Users differ enormously from one another. By nationality alone there are Americans, Chinese, Iranians, North Koreans, and so on; there are also differences in gender, age, race, region, occupation, religion, and more.
Even a single user has diverse needs — social, entertainment, finance, collaboration, and so on — and each major category splits into many subcategories. Within finance alone there are store-of-value needs, large-value transfer needs, and small-payment needs.
If most of my net worth is going into cryptocurrency for long-term store of value, I prefer Bitcoin. For small transactions — playing mahjong or rolling dice — EOS is perfectly fine. The blockchain world thus spans from the most decentralized, Bitcoin and Ethereum, to the least decentralized, EOS and TRON.
It can be seen as a decentralization spectrum. Every public chain — including Polkadot and Cosmos, discussed later — occupies a particular position on the spectrum and has a chance to suit particular needs. There is no possibility of one chain fitting all.
Architecture design is the art of trade-offs: to choose one thing is to give up another. The core claim of this talk is that the future blockchain world is heterogeneous, with many chains coexisting. That said, I do not believe hundreds or thousands of public chains are necessary, because there are not that many reasonable positions on the spectrum; where positioning overlaps, network effects will eliminate the weak.
5. The second scaling idea: layering
Layering is also called layer-2 scaling or off-chain scaling: move some transactions off the blockchain for execution while still guaranteeing their security. There are two main technologies, state channels and sidechains. (There is also a class of layer-2 technology that moves computation-intensive tasks off-chain; it is unrelated to our topic and will not be covered.)
State channels and sidechains are described with different metaphors, but their implementations are actually very similar. Since Cosmos has a deep internal connection with sidechains, I will spend some time here on how sidechains work.
To understand sidechains, you must first understand SPV proofs. SPV stands for Simplified Payment Verification; it emerged to let devices with limited computing and storage capacity use Bitcoin, in the form of light clients (light nodes).
A mobile wallet is a light client. It does not need to synchronize full blocks, only the block headers, which cuts the data transmitted and stored by a factor of about a thousand. The figure on the left shows the principle of an SPV proof, which uses a Merkle tree. Don't worry if you can't follow the details — just remember that the Merkle tree is the most important data structure in blockchain.
It lets you store very little data yet prove that a large number of facts occurred and belong in a specific place. For the blockchain, storing only the block headers is enough to later verify whether a given transaction exists in a particular block.
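To make the idea concrete, here is a minimal Python sketch of a Merkle tree and an SPV-style inclusion proof. It mimics Bitcoin's double-SHA-256 tree (duplicating the last node at odd levels); the function names are my own for illustration, not any real wallet API.

```python
import hashlib

def h(data: bytes) -> bytes:
    # Double SHA-256, as Bitcoin uses for its Merkle tree.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(leaves):
    """Compute the Merkle root of a list of leaf hashes."""
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2 == 1:          # Bitcoin duplicates the last node
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect the sibling hashes needed to prove leaves[index] is included."""
    proof, level, idx = [], leaves[:], index
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        sibling = idx + 1 if idx % 2 == 0 else idx - 1
        proof.append((level[sibling], idx % 2 == 0))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return proof

def verify(leaf, proof, root):
    """An SPV client re-hashes up the path and compares with the header's root."""
    acc = leaf
    for sibling, leaf_is_left in proof:
        acc = h(acc + sibling) if leaf_is_left else h(sibling + acc)
    return acc == root

txs = [h(f"tx{i}".encode()) for i in range(7)]
root = merkle_root(txs)                 # this is what the block header stores
proof = merkle_proof(txs, 3)            # only log2(n) hashes, not the whole block
assert verify(txs[3], proof, root)      # transaction 3 is in the block
assert not verify(h(b"forged"), proof, root)  # a forged transaction fails
```

Note the key property: the light client holds only the 32-byte root from the block header, yet a proof of logarithmic size convinces it that a specific transaction is in the block.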
The sidechain scheme locks tokens on the main chain and issues corresponding IOU tokens on the sidechain. Transactions are then executed on the sidechain, and whoever ends up holding the sidechain IOU can redeem the main-chain tokens. Let's look specifically at the Ethereum Plasma MVP sidechain design on the right.
First, the Plasma smart contract is deployed on the Ethereum main chain. Suppose there are two sidechain users, Alice and Bob. Alice initiates a main-chain transaction depositing tokens into the Plasma contract, which locks them.
When the sidechain's Operator sees that Alice has deposited tokens, it creates corresponding tokens on the sidechain — IOUs for the main-chain tokens. Note that the sidechain is itself a blockchain, with its own consensus process and its own block producers.
In the Plasma MVP scheme, the sidechain's consensus is PoA, proof of authority: a single Operator has the final say and is responsible for ordering transactions and producing blocks. PoA is of course not the only choice — Loom's Plasma sidechain uses DPoS consensus.
After depositing, Alice can use her tokens on the Plasma MVP chain for payments and transfers. For example, she can play games with Bob, winning and losing tokens, quickly playing many rounds and generating a large number of transactions. Sidechain transactions only require consensus among the sidechain's own nodes, and the sidechain is usually much smaller than the main chain, so execution is fast and cheap.
The Operator periodically submits the sidechain's block headers to the Plasma contract on the main chain. No matter how many transactions a sidechain block contains — 1,000 or 10,000 — only a single transaction recording the block header occurs on the main chain. The Plasma contract on the main chain thus acts as an SPV light node of the sidechain: it stores block headers so that it can later verify the existence of sidechain transactions.
For example, after Alice transfers tokens to Bob on the sidechain, Bob can send a request to the Plasma contract including an SPV proof of the sidechain transaction, showing that Alice has paid him these tokens.
The Plasma contract verifies that the transfer really exists on the sidechain and then honors Bob's withdrawal. This example shows how a layered scheme moves large numbers of transactions off-chain — to the layer-2 system — for execution.
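The deposit → side-chain transfer → exit flow can be sketched in a few dozen lines of Python. Everything here is an illustrative simplification, not the real Plasma MVP protocol: the class names are invented, a boolean stands in for a verified SPV proof, and challenge periods are omitted.

```python
class PlasmaContract:
    """Toy main-chain contract: locks deposits, stores side-chain headers."""
    def __init__(self):
        self.pool = 0          # total main-chain tokens locked
        self.headers = []      # side-chain block headers (the SPV light-node role)

    def deposit(self, amount):
        self.pool += amount

    def submit_header(self, header):
        self.headers.append(header)

    def exit(self, amount, spv_proof_valid):
        # The real contract checks an SPV proof against a stored header and
        # runs a challenge period; a boolean stands in for all of that here.
        if spv_proof_valid and self.pool >= amount:
            self.pool -= amount
            return amount      # tokens released back on the main chain
        return 0


class SideChain:
    """Toy side chain: the Operator credits IOUs and orders transfers."""
    def __init__(self):
        self.balances = {}

    def credit(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount

    def transfer(self, src, dst, amount):
        assert self.balances.get(src, 0) >= amount
        self.balances[src] -= amount
        self.balances[dst] = self.balances.get(dst, 0) + amount


contract = PlasmaContract()
side = SideChain()

contract.deposit(10)           # Alice locks 10 tokens in the main-chain contract
side.credit("alice", 10)       # Operator mints 10 side-chain IOUs for Alice

for _ in range(1000):          # many cheap side-chain rounds (e.g. a dice game)
    side.transfer("alice", "bob", 1)
    side.transfer("bob", "alice", 1)
side.transfer("alice", "bob", 4)

contract.submit_header("header of side-chain block 1")  # one main-chain tx total

released = contract.exit(4, spv_proof_valid=True)       # Bob withdraws 4 tokens
assert released == 4 and contract.pool == 6
assert side.balances == {"alice": 6, "bob": 4}  # a real chain also burns exited IOUs
```

The point of the sketch: thousands of side-chain transfers cost only two main-chain transactions (one deposit, one header submission) plus one exit.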
6. The third scaling idea: sharding
The third scaling idea is sharding. The principle is simple: don't make every node execute every transaction. Divide the nodes into many groups — shards — so that the shards process transactions in parallel and overall throughput improves.
Of course, a special chain is needed to manage all the shards, usually called the main chain. It has a lot of work to do, which I will describe in detail later. The rough intuition: without a main chain there is no connection between the shards — you simply have multiple completely independent blockchains, which has nothing to do with scaling.
The basic idea of sharding is simple, but in practice it faces many hard problems. To compare and analyze the public-chain architectures later, you first need a rough understanding of these problems. And since the chains in question use PoS consensus, we will discuss the sharding problems and their solutions in a PoS setting.
7. Sharding problem: validator selection
The first problem: after sharding, each shard needs its own set of validators. Look at this schematic.
On a single chain, malicious validators need more than half of the total to attack the system. After sharding, a majority within one shard is enough to attack that shard. So the more shards you split into, the lower the attack cost — that is, the lower the security.
The solution is to make shard validator assignment not fixed but randomly selected, with regrouping at regular intervals. Malicious validators then cannot know in advance which group they will land in, and rash attacks get punished, so the system's security does not degrade linearly as the number of shards grows.
The key to random, dynamic validator grouping is a reliable source of randomness. Random numbers have always been a subtle and interesting problem in computer science, and generating reliable random numbers in a decentralized, Byzantine-fault-tolerant way is very hard — it remains a hot research topic in blockchain.
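Assuming a shared random seed already exists (in practice it would come from something like a VRF or RANDAO, which is the hard part), the regrouping itself is straightforward. A sketch, with invented function names:

```python
import hashlib
import random

def assign_validators(validators, num_shards, seed):
    """Deterministically shuffle validators with a shared random seed, then
    deal them round-robin into shards. Every honest node that knows the seed
    computes the same assignment, but nobody can predict it before the seed
    is revealed."""
    rng = random.Random(hashlib.sha256(seed).digest())
    shuffled = validators[:]
    rng.shuffle(shuffled)
    shards = [[] for _ in range(num_shards)]
    for i, v in enumerate(shuffled):
        shards[i % num_shards].append(v)
    return shards

validators = [f"v{i}" for i in range(12)]
epoch1 = assign_validators(validators, 4, b"epoch-1-seed")
epoch2 = assign_validators(validators, 4, b"epoch-2-seed")
assert epoch1 == assign_validators(validators, 4, b"epoch-1-seed")  # deterministic
assert epoch1 != epoch2                     # regrouped each epoch
assert all(len(s) == 3 for s in epoch1)     # evenly dealt into shards
```

The security argument is statistical: with random assignment, an attacker holding, say, 20% of validators is very unlikely to land a majority in any one shard, so shard security does not fall linearly with the number of shards.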
8. Sharding problem: cross-shard transaction integrity
In a sharding scheme, one or more DApps run on each shard, and they must be interoperable whether or not they live on the same shard. First, what is cross-shard interoperability? Since shards are themselves blockchains, cross-shard is equivalent to cross-chain.
Everyone knows a blockchain can be seen as a state machine maintained by distributed consensus; the state machine advances through transaction execution. Cross-chain interoperation should trigger a state exchange between the two parties: both interoperating chains execute transactions, and the post-execution states must be consistent.
In other words, a cross-chain transaction changes the state of two (or even more) chains, and these changes must either all take effect or all fail, with no intermediate state. This is very similar to the concept of a distributed transaction in enterprise computing.
The only difference is that the participants in a traditional distributed transaction are databases, while the participants in a cross-chain transaction are blockchains. Non-technical readers may not be familiar with state machines and distributed transactions, and since the concept of cross-chain transactions is essential to this talk's conclusion, let me explain it in plain language.
Suppose you want to transfer 10,000 yuan from an ICBC account to a CCB account. The transfer is really a deduction of 10,000 yuan from the ICBC account plus an addition of 10,000 yuan to the CCB account. ICBC and China Construction Bank each have their own database storing account balances, so there must be a mechanism guaranteeing that the two operations — one minus, one plus — either both succeed or both fail, under all circumstances.
Without that guarantee: if the ICBC account is debited but the CCB account is not credited, you lose 10,000 yuan and will refuse to transact; if the ICBC account is not debited but the CCB account is credited, you gain 10,000 yuan out of thin air and the bank will certainly refuse.
This is called the integrity, or atomicity, of distributed transactions. Sounds simple? It is actually quite hard, because whatever happens to either bank's servers — power failure, network partition, software crash — the transaction must remain complete under all extreme conditions. On the blockchain, the bank transfer becomes a cross-chain token transfer.
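The classic way databases achieve this atomicity is two-phase commit: first every participant reserves the change and votes, then the coordinator commits everywhere only if every vote was yes. A minimal sketch (invented class names, no crash recovery):

```python
class Account:
    """One bank's database record, participating in two-phase commit."""
    def __init__(self, balance):
        self.balance = balance
        self.pending = 0               # change reserved by an in-flight transaction

    def prepare(self, delta):
        """Phase 1: vote yes only if the change is guaranteed to be applicable."""
        if self.balance + self.pending + delta >= 0:
            self.pending += delta
            return True
        return False

    def commit(self):                  # phase 2a: make the reserved change final
        self.balance += self.pending
        self.pending = 0

    def abort(self):                   # phase 2b: roll the reservation back
        self.pending = 0


def transfer(src, dst, amount):
    """Coordinator: commit everywhere only if every participant voted yes."""
    parts = [(src, -amount), (dst, +amount)]
    if all(acct.prepare(delta) for acct, delta in parts):
        for acct, _ in parts:
            acct.commit()
        return True
    for acct, _ in parts:
        acct.abort()                   # any "no" vote: everyone rolls back
    return False


icbc, ccb = Account(15000), Account(0)
assert transfer(icbc, ccb, 10000)                      # both changes take effect
assert (icbc.balance, ccb.balance) == (5000, 10000)
assert not transfer(icbc, ccb, 10000)                  # insufficient funds:
assert (icbc.balance, ccb.balance) == (5000, 10000)    # neither side changes
```

Either both balances change or neither does — there is no state in which the money has left one bank but not arrived at the other.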
Say a token is issued on chain A, and 10 tokens are moved to chain B through a cross-chain transfer. After the cross-chain transaction completes, 10 tokens on chain A are frozen and 10 new tokens appear on chain B. These two state changes must either both succeed or both fail, under all conditions.
Because blockchains can fork, cross-shard transactions are even trickier than traditional distributed transactions. Look at the picture: the part of the cross-shard transaction on shard 1 is packaged in block A, while its counterpart on shard 2 is packaged in block X′. Either shard may fork, and block A or X′ may end up an abandoned orphan block. That is, a cross-shard transaction may partially succeed and partially fail — integrity is destroyed.
How do we solve this? Let's analyze it. The root cause of the integrity problem is that the parts of the transaction are packaged into blocks, but a chain can reorganize and a block can become an orphan.
Put bluntly: a transaction gets into a block, yet that is unreliable — it may still be rolled back. The formal way of saying this is that finality is unclear. Finality means the block is certain to remain part of the blockchain.
On the Bitcoin blockchain, the more blocks pile on top of a block, the lower the probability that it is reversed or abandoned — but it is never 100% certain, which is why this is called probabilistic finality, or gradual consensus. The solution is a mechanism that gives blocks clear-cut finality rather than ambiguity.
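"Probabilistic" can be quantified. The Bitcoin whitepaper's simplified gambler's-ruin bound gives the chance that an attacker ever overtakes an honest chain that is z blocks ahead; the sketch below implements just that bound (the whitepaper's full analysis also models the attacker's progress during the wait).

```python
def catch_up_probability(q: float, z: int) -> float:
    """Gambler's-ruin bound from the Bitcoin whitepaper: the chance that an
    attacker with fraction q of the hash power ever overtakes an honest chain
    that is z blocks ahead is (q/p)**z, where p = 1 - q."""
    p = 1.0 - q
    return 1.0 if q >= p else (q / p) ** z

# With 10% attacker hash power, each extra confirmation shrinks the risk ~9x,
# but it never reaches exactly zero -- hence "probabilistic" finality.
assert catch_up_probability(0.1, 0) == 1.0
assert abs(catch_up_probability(0.1, 1) - 1 / 9) < 1e-12
assert catch_up_probability(0.1, 6) < 2e-6      # the usual "6 confirmations"
assert catch_up_probability(0.5, 100) == 1.0    # a majority attacker always wins
```

This is exactly why "wait for 6 confirmations" became folklore: the reversal probability decays geometrically with depth, but a definite finality gadget is needed to drive it to zero.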
9. Sharding problem: finality vs. liveness
To finalize is to make a block final — irreversible. The simplest approach is to finalize every block instantly; Cosmos's Tendermint consensus does exactly that. But this approach causes trouble in unusual situations.
Look at the picture: a Tendermint-consensus blockchain is producing blocks normally, when suddenly a submarine cable breaks and the network splits into two parts, each containing some of the validator nodes. Tendermint requires collecting signatures from more than 2/3 of the validators to produce a block.
After the split, each part of the network can collect at most about half of the validators' signatures, so block production stops — the blockchain loses its liveness. Some people consider this tolerable: it is an exceptional situation, so halt first and wait for the network to recover before continuing.
After all, when a submarine cable breaks, Web access, phone calls, and video conferences all suffer — why can't the blockchain pause too? Others find it unacceptable to stop producing blocks: the blockchain must always stay live. What then? The solution is to separate block production from finalization — so-called hybrid consensus.
In the network partition just described, nodes in both halves can keep producing blocks, but there are not enough validators participating to finalize them. Once the network recovers, the validators decide which blocks to finalize, achieving both liveness and finality.
Moreover, hybrid consensus allows individual nodes to take turns producing blocks quickly, while the finalization process runs more slowly with a large number of participating nodes — preserving decentralization, raising the difficulty of attack and collusion, and ensuring security. Hybrid consensus thus balances performance and security. Both Ethereum 2.0 and Polkadot use it.
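The partition scenario above can be modeled in a few lines: block production never stops, but a block is finalized only once more than 2/3 of the validators have signed it. This is a toy model of the separation, not any real protocol's rules.

```python
class Chain:
    """Toy hybrid consensus: production is continuous, finality needs > 2/3."""
    def __init__(self, num_validators):
        self.n = num_validators
        self.blocks = []            # one set of signer ids per block height
        self.finalized = -1         # highest finalized height (-1 = none)

    def produce(self):
        self.blocks.append(set())   # producing a block needs no quorum

    def sign(self, height, validator):
        self.blocks[height].add(validator)
        # finalize each next block once it has gathered > 2/3 signatures
        while (self.finalized + 1 < len(self.blocks)
               and len(self.blocks[self.finalized + 1]) * 3 > 2 * self.n):
            self.finalized += 1


chain = Chain(num_validators=9)
for _ in range(3):
    chain.produce()                 # partition: blocks keep being produced
for v in range(4):                  # only 4 of 9 validators reachable (< 2/3)
    for height in range(3):
        chain.sign(height, v)
assert chain.finalized == -1        # liveness preserved, finality pending

for v in range(4, 9):               # partition heals, the rest sign
    for height in range(3):
        chain.sign(height, v)
assert chain.finalized == 2         # all three blocks now final
```

In a pure instant-finality design, the first phase would have produced no blocks at all; here the chain stays live and finality simply catches up later.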
10. Sharding problem: transaction validity
Another sharding problem is transaction validity: preventing invalid transactions from entering a block and becoming part of the canonical truth the blockchain maintains.
Take Bitcoin. Suppose I am a super-miner controlling the majority of hash power, and I want to forge a transaction that moves bitcoins from someone else's address to mine. Can I? The answer is no.
Because the transaction lacks the private-key signature corresponding to that address, it is invalid, the block containing it is invalid, and other nodes will reject it. Even with majority hash power mining the longest chain, I have merely built a very long fork.
Bitcoin wallets and exchanges will not recognize my fork. So a 51% attack cannot steal anyone's bitcoin or create bitcoin out of thin air — at most it enables double-spending. In short, the Bitcoin network has no transaction-validity problem.
So how does a problem solved perfectly a decade ago arise again? The reason is that nodes of public chains like Bitcoin hold all the data, so each node can verify transaction validity completely independently. With sharding, a node stores only part of the data, so it can no longer independently verify a transaction's validity.
Look at the picture on the left. There are two shards. Shard 1 is controlled by malicious validators, who have packaged invalid transactions into block B — say, minting lots of tokens out of thin air to their own address. In the next block, C, the attacker initiates a cross-shard transaction sending the tokens to a DApp on shard 2, perhaps a decentralized exchange. From shard 2's viewpoint the transactions in block C look correct, and since shard 2 lacks the data before block C, it cannot verify their validity.
Below we introduce a solution to resolve the validity of dealings in a fragmented environment, called report rewards. In fact, you can find other plans, however they have nothing to do with the subject, therefore i will ignore them.
Considering the picture on the right, although Shard 1 is managed by a malicious validator, there is still a minumum of one honest validator. Shard 2 cannot verify the validity from the cross-chain deal, so choose to have confidence in Shard 1, which packages the cross-chain deal. At this time, the honest nodes in shard 1 can jump out and review, saying that stop B is unlawful, and I have evidence.
If the system accepts the report, it will punish the malicious validators in Shard 1, confiscate their staked tokens, and reward the reporter. This is also why, in some blockchains, validators have to wait several months to withdraw their staked tokens: the main reason is to allow plenty of time for reporting and for verifying the reports.
Above we have introduced four sharding problems and their corresponding solutions. In fact, sharding has more problems than these; limited by time, we will not list them all.
- The orthodox successor of Ethereum: Serenity
The layer-1 scaling approach of next-generation Ethereum is sharding. Regarding the next generation of Ethereum, the information is very complicated, and even the names are not uniform: there are Ethereum 2.0, Serenity, Shasper, Casper Ethereum, etc., which we will collectively call Serenity.
Everyone, look at the architecture diagram of Serenity, which was made by Ms. Wang Shao, a senior Ethereum researcher from Taiwan. Reading from top to bottom: at the top is the PoW main chain, which is the Ethereum currently running. Serenity will not replace the PoW chain, but will go online alongside it as a side chain.
In the long run, however, Serenity does not rely on the PoW chain. The three layers below PoW belong to Serenity, and they correspond to the three phases of Serenity's development.
The first is the Beacon Chain, whose main function is to manage validators. After the beacon chain is online, if you wish to become a validator of Serenity, you exchange ETH on the PoW chain into the beacon chain. While the side chain arrangement is in use, this is done through a smart contract deployed on the PoW main chain.
The transfer of ETH into the beacon chain is one-way: ETH on the beacon chain cannot be transferred back to the PoW chain. With ETH on the beacon chain, by staking it and running a node, you can become a validator. In order to achieve full decentralization, the threshold for becoming a Serenity validator is very low: only 32 ETH needs to be staked, so the set of validators will be large, on the order of thousands to tens of thousands.
The beacon chain is also in charge of generating random numbers for validator grouping and block-producer selection. The beacon chain implements the PoS consensus process, covering both its own consensus and the consensus of all shard chains, and rewards and punishes validators. It also serves as a transfer station for cross-shard transactions. The beacon chain is expected to go online by the end of this year or early next year.
Several teams are currently developing beacon chain node software, and several have deployed testnets. In the next phase, a public, long-running test network will be deployed, and the nodes developed by the different teams will be connected together for testing.
Below the beacon chain are multiple shard chains; 100 shards are drawn in the picture. The shard chains can be regarded as Serenity's data layer, responsible for storing transaction data and maintaining data consistency, availability, and liveness, that is, ensuring that blocks can always be produced and the chain will not get stuck. The launch time of the shard chains is uncertain.
Below the shard chains is the virtual machine, which is in charge of executing smart contracts and transfer transactions, changing the state, that is, reading and writing data on the shard chains. Serenity's key design decision is to decouple the data layer (the shard chains) from the logical execution engine (the virtual machine).
Decoupling brings many benefits; for example, the two can be developed, launched, and upgraded separately. The Serenity virtual machine will use WebAssembly (wasm), which can improve performance and supports multiple programming languages.
So how does Serenity solve the four sharding problems mentioned above? First, it manages the validator pool on the beacon chain and randomly assigns several validators to each shard chain. It uses hybrid consensus: validators take turns producing blocks, and Casper FFG is used to determine finality. Reporting rewards are used to ensure the validity of transactions.
- Gavin Wood's new journey: Polkadot
The sharing is halfway through, and finally it is the protagonist Polkadot's turn to appear. Gavin Wood is the soul of Polkadot. Most of you know him very well; if you don't, you can search the Internet, so I won't introduce him here.
Gavin Wood is the founder and current president of the Web3 Foundation, and Polkadot is the primary project of the Web3 Foundation, similar to the relationship between Ethereum and the Ethereum Foundation.
web3 itself needs an introduction. In project documents such as those of the Web3 Foundation and Polkadot, the textual expressions of the web3 vision differ, but they all carry two layers of meaning.
The first layer: web3 is a serverless, decentralized Internet. Serverless also implies decentralization, because in the network computing architecture envisioned by web3, nodes are equal, and there is no distinction between server and client. All nodes participate, to a greater or lesser degree, in forming and recording the network consensus. So what is the use of such a decentralized Web?
This is the second layer of meaning of web3: everyone can control their own identity, assets, and data.
Controlling your own identity means that no other person or organization needs to assign you an identity, and no other person or organization can fraudulently use or freeze that identity. Controlling your own assets means that you cannot be deprived of your assets and may freely dispose of them. Controlling your own data means that everyone can generate, store, conceal, and destroy personal data according to their own wishes; without authorization, no person or organization can use someone else's personal data.
The web3 vision is not unique to the Web3 Foundation or the Polkadot project. Many blockchain projects, including Bitcoin and Ethereum, have similar visions, under various names: the open network, the next-generation Web, and so on. The name is not important. What you should think about is the substance of the web3 vision: is that the Internet you want?
Gandhi said: be the change you want to see in the world. My loose translation: move toward the world you want. If web3 is a vision you agree with, then get involved and work toward it.
Polkadot is the backbone and infrastructure of web3. It is described by Gavin Wood and the Web3 Foundation as the path to the web3 vision.
Substrate is an open-source blockchain development framework that emerged from the development of the Polkadot project. It can be used to build the Polkadot ecosystem, or to build blockchains for other purposes.
To be continued; click to read "Polkadot Architecture Analysis (Part 2)".