The Merge - Danksharding

In this article we'll discuss one of the most important mechanics that will be implemented after the Merge, "Danksharding" (DS), and why it matters.

As we discussed in our last article, the new timeline for the Ethereum Merge has been released and looks to be proceeding full steam ahead. In this article we will discuss one of the most important mechanics that will be implemented after the Merge: "Danksharding" (DS).

Danksharding

The rundown

TL;DR

DS is a new mechanism expected to be implemented after the Ethereum Merge that gets its name from its creator, Ethereum researcher Dankrad Feist. If it works as intended, it will streamline validation on Ethereum L1, allowing for higher traffic and lower gas fees.

A preliminary implementation called "proto-danksharding" is expected to arrive at the Merge, with the full implementation following roughly 6-12 months later in the Shanghai upgrade.


Ethereum is working to accommodate L2s

Ethereum is going through changes which will hopefully put it in a position to serve more users in a cost-efficient and scalable way in the future. A lot of focus has been on separate L1 protocols that claim to be viable long-term replacements for Ethereum, offering lower transaction fees and higher speeds. The big problem, however, is that none of these networks has been able to deliver on a large scale. Many of them have seen fees climb as network traffic increases, and some have even experienced network outages because of a lack of scalability.

L2s, or "rollups," are built on top of L1s like Ethereum and seek to strike a balance between scalability and security. These protocols execute transactions on their own network and then validate them in bundles on an L1 like Ethereum. This is more gas-efficient than users submitting their transactions individually, and it does not compromise security because the transactions are still validated on the Ethereum network.
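The economics of bundling can be sketched with a toy calculation. All of the numbers below are hypothetical placeholders, not real Ethereum gas figures; they exist only to show how a fixed per-transaction L1 cost gets amortized across a bundle.

```python
# Illustrative sketch of why rollups save gas: a bundle pays the fixed
# L1 transaction overhead once, then a small per-transaction data cost.
# These constants are hypothetical, chosen only to show the shape of it.

L1_BASE_COST = 21_000          # assumed fixed overhead of one L1 transaction
DATA_COST_PER_ROLLUP_TX = 1_200  # assumed cost of posting one bundled tx's data

def individual_cost(n_txs: int) -> int:
    """Every user submits their own L1 transaction."""
    return n_txs * L1_BASE_COST

def rollup_cost(n_txs: int) -> int:
    """One L1 transaction carries a bundle of n rollup transactions."""
    return L1_BASE_COST + n_txs * DATA_COST_PER_ROLLUP_TX

for n in (1, 10, 100):
    per_user = rollup_cost(n) / n
    print(f"{n:>3} txs: individual={individual_cost(n):>9}  "
          f"bundled={rollup_cost(n):>7}  per-user={per_user:,.0f}")
```

Under these assumed numbers, the per-user cost of a bundle falls toward the per-transaction data cost as the bundle grows, which is the basic pitch of rollups.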


Ethereum is betting that the wider adoption of L2s is the best way to achieve scalability and reduce transaction costs. It is making changes to better accommodate this shift by streamlining the way transactions can be confirmed on Ethereum L1 as bundles, or "blobs," of data, while leaving the job of validating the individual transactions to the L2s. These changes will not happen immediately at the Merge but are expected to come in the near future.

Sharding and "Danksharding"

Originally the Merge was going to include a solution called sharding, which we can now call "Sharding 1.0." It started out with a model using 64 shard chains and a beacon chain to oversee them all. The plan was to use committees of validators to prove the data availability of every shard chain. The Codex released a previous issue about ETH 2.0 and the Gas Crunch if you'd like to read more on the subject.


Sharding 1.0 proved to be very tricky for several reasons. Validators would need to be kept from colluding to submit malicious blocks, and thus would need to be rotated between shard chains frequently, causing logistical problems. In addition, the need to constantly bridge between shards whenever protocols on different shard chains interact with each other would open fresh attack vectors for Ethereum. Now a new model for sharding has emerged, currently favored by the Ethereum Foundation, called Danksharding (DS).

DS is sharding in name only. It does not split the chain into separate shard chains but instead uses a specialized builder to create one large block (specialized builders will be required for this job because it will require very expensive hardware). This large block is then validated by taking samples to make sure that all the data is available to validators. This lines up with the roadmap outlined in Vitalik's well-known piece "Endgame," which called for centralized block production but decentralized validation.


Data sampling and extrapolation

DS uses a form of polynomial math built around KZG commitments, which allows entire rows or blocks of data to be reconstructed from a sample of at least 50% of that data. This saves network bandwidth because validators don't have to download and validate the entire block every time. Once this sampling is done many times by many validators, we can be satisfied that the block is valid (if you'd like to read more on the subject, this article covers it in great detail).
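The reconstruction trick can be sketched in a few lines. This is a heavily simplified illustration of the underlying polynomial idea, not real Danksharding: actual DS uses KZG commitments over the BLS12-381 curve, whereas this toy uses Lagrange interpolation over a tiny prime field. The point it shows is that if data is encoded as a degree-3 polynomial and published at 8 points (2x redundancy), any 50% of those points is enough to recover all of them.

```python
# Toy sketch of erasure-coded data recovery, the idea behind data
# availability sampling. NOT real danksharding: real DS uses KZG
# commitments over BLS12-381; we use a small prime field for clarity.

P = 65537  # small prime modulus (illustrative only)

def poly_eval(coeffs, x):
    """Evaluate a polynomial (lowest-degree coefficient first) at x, mod P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def lagrange_interpolate(points, x):
    """Recover the polynomial's value at x from known (xi, yi) points, mod P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = (num * (x - xj)) % P
                den = (den * (xi - xj)) % P
        # pow(den, P - 2, P) is the modular inverse of den (Fermat's little theorem)
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

# Treat 4 data chunks as coefficients of a degree-3 polynomial,
# then publish its evaluations at 8 points (2x redundancy).
data = [101, 202, 303, 404]
evaluations = [(x, poly_eval(data, x)) for x in range(8)]

# A validator who sampled only half the points (4 of 8)...
sample = evaluations[::2]  # points at x = 0, 2, 4, 6

# ...can reconstruct every evaluation, including the ones never downloaded.
recovered = [lagrange_interpolate(sample, x) for x in range(8)]
assert recovered == [y for _, y in evaluations]
```

Because a degree-3 polynomial is uniquely determined by any 4 points, half the published points pin down the whole data set; KZG commitments add a cryptographic proof that the published points really do lie on the committed polynomial.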

💡 Next we'll discuss how DS is expected to help protect users from high MEV costs through a mechanism called Proposer-Builder Separation.