Execution and well-conceived integration are the mandatory minimum for realizing the potential of new innovations. Grab a coffee (or hot chocolate if you're @greg), sit back, and relax - we're going on a deep dive.
The Apple ][ is frequently hailed as one of the major milestones in personal computing. Not only did it make home computing more accessible than ever, it also delivered many things we now take for granted, features that were incredible technological achievements for the time. Did you catch them in the picture above?
To demonstrate the difference, let's look at an example of what others in the industry were offering: the Commodore PET. See the difference now? The Apple ][ could do color - something no other personal computer was doing yet. The Apple ][ had floppy disks that were faster and had greater capacity than any other. How?
At the time, video signals for home computers used NTSC composite video, the kind that worked with television sets. These sets carried color on a chrominance subcarrier modulated into the composite signal - a technique that typically required expensive dedicated chips to support.
But a limited set of colors could be achieved cheaply, if you could get the timing right, by emitting the signals in a particular way. Wozniak was predictably nerdsniped by this, and pulled off the trick with the MOS 6502, its clock set to 1.023 MHz - 2/7 of the NTSC color subcarrier rate.
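As a quick sanity check of that ratio, here's the arithmetic in Python (the ~3.579545 MHz NTSC color subcarrier value is an assumption of mine, not stated above):

```python
# Sanity check: the Apple ]['s CPU clock as 2/7 of the NTSC color subcarrier.
ntsc_color_subcarrier_hz = 3_579_545          # standard NTSC chrominance subcarrier
apple_ii_clock_hz = ntsc_color_subcarrier_hz * 2 / 7
print(f"{apple_ii_clock_hz / 1e6:.4f} MHz")   # ~1.0227 MHz, commonly rounded to 1.023 MHz
```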
Integrating controlled timing from the deepest parts of the hardware up through the software unlocked color displays for home users, while the competing PET, using the exact same chip, could not do this. But the divide runs deeper. A sentiment was growing that floppy disks were needed for non-hobbyist PCs.
Woz rose to the challenge again, this time taking the raw hardware physically controlling the disk and augmenting it with a cheaper, software- and timing-driven approach. This reduced the overall chips in use (and the cost, saving Apple >$300 per drive) while simultaneously making the drive faster and able to store more data.
Commodore's PET disk drive design ultimately required two processors on the scale of the Apple ]['s main central processing unit just to make it function, and for all that expense, the PET still couldn't display color.
As I am wont to do, I see a heavy parallel here to the crypto industry. When we look at crypto today, the innovations that are happening frequently sit far downstream of ossified architectural designs, which results in huge efficiency losses.
The original blockchain was designed from a minimalist point of view: to succinctly encapsulate transactions of a single coin, encoded as simple Forth-like scripts, with each transaction's inclusion provable via Merkle proofs.
At the time, Merkle trees were one of the most efficient means to collect contiguous segments of data and verify inclusion. The compactness of the proofs, however, leaves a little to be desired, especially when scaling out to the whole of human commerce, or, even more loftily, trying to be a world computer.
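To make the proof mechanics concrete, here's a minimal sketch of a binary Merkle tree with an inclusion proof (illustrative only - real chains differ in hash choices and edge-case handling):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root_and_proof(leaves: list[bytes], index: int):
    """Return (root, sibling proof) for one leaf; assumes a power-of-two leaf count."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        proof.append(level[index ^ 1])                 # sibling at this level
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return level[0], proof

def verify(root: bytes, leaf: bytes, index: int, proof: list[bytes]) -> bool:
    acc = h(leaf)
    for sibling in proof:
        acc = h(acc + sibling) if index % 2 == 0 else h(sibling + acc)
        index //= 2
    return acc == root

leaves = [f"tx{i}".encode() for i in range(8)]
root, proof = merkle_root_and_proof(leaves, 5)
print(verify(root, leaves[5], 5, proof))               # True
print(len(proof) * 32, "bytes of sibling hashes")      # grows as log2(number of leaves)
```

The proof is just the sibling hashes along the path from leaf to root, so its size grows with log2 of the number of leaves - which is exactly the scaling concern below.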
For starters, to independently verify a given transaction in the original design, you'd have to hold the entire history. Many chains moved to including a Merkle root of the overall ledger state in the block header to avoid this dilemma, but still mandated that full nodes synchronize the full state.
Additionally, some have used alternatives to plain Merkle trees, such as Ethereum's choice of PATRICIA-Merkle trees, where a proper proof requires more sibling hashes for every step up the tree - currently about 3,584 bytes. Even opting to re-encode it as a binary tree, a proof still requires nearly a kilobyte.
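Here's a rough back-of-the-envelope for where those figures come from (a sketch that ignores node-encoding overhead and borrows the ~263.80M account count cited later in this post):

```python
import math

ACCOUNTS = 263_800_000

# Hexary PATRICIA-Merkle tree: roughly one full branch node of 16 * 32-byte
# hashes per level of the proof.
hex_depth = math.ceil(math.log(ACCOUNTS, 16))
print(hex_depth, hex_depth * 16 * 32)          # 7 levels -> 3,584 bytes

# Binary re-encoding: a single 32-byte sibling hash per level instead.
bin_depth = math.ceil(math.log2(ACCOUNTS))
print(bin_depth, bin_depth * 32)               # 28 levels -> 896 bytes, nearly a kilobyte
```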
Thankfully, bandwidth has improved, but so too have our cryptographic techniques: KZG commitments/proofs offer a very succinct scheme for proving inclusion in a set (more accurately, a vector) - the proof is constant size: the item you wish to prove was included, its position in the vector, and a single elliptic curve point. Nice!
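To see what constant size buys you, compare the sibling-hash proof against a single opening proof as the set grows (assuming the 48-byte curve points used later in this post):

```python
import math

KZG_PROOF_BYTES = 48                                # one curve point, regardless of set size
for n in (2**12, 2**20, 2**28):
    merkle_bytes = 32 * math.ceil(math.log2(n))     # sibling hashes keep growing
    print(f"n=2^{int(math.log2(n))}: merkle={merkle_bytes}B, kzg={KZG_PROOF_BYTES}B")
```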
What's not so nice: the time to compute larger sets. Complexity-wise, naively constructing the commitment along with opening proofs for every position is O(n^2), not to mention you also have to have constructed a secure reference string whose size (more accurately, degree) matches the maximum size of the set.
For Ethereum, at 263.80M accounts, we're talking about a secure reference string of 2^28 elements, and that's just for today! If you thought contributing to the Ethereum KZG ceremony took long for 2^15 elements, you'd be in a much worse world of pain (by the most recent report of such a setup, a single contribution took two days).
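The 2^28 figure follows directly from the account count:

```python
import math

ACCOUNTS = 263_800_000
power = math.ceil(math.log2(ACCOUNTS))
print(power, 2**power)         # 28 -> 268,435,456 elements, just above today's account count
print(2**power // 2**15)       # 8,192x larger than the ceremony's biggest setup
```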
What about a compromise? Keep the tree structure, lose the sibling proofs, and make each traversal step of the tree a KZG commitment/proof. At log16(263.80M) ≈ 7, this reduces the proof size to 7 points (48 bytes each), or 336 bytes. Not bad! If you followed this so far, congrats, you now know what a Verkle tree is!
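Same back-of-the-envelope as before, now with a single point per level instead of a batch of siblings:

```python
import math

ACCOUNTS = 263_800_000
POINT_BYTES = 48                              # one KZG proof point per level

depth = math.ceil(math.log(ACCOUNTS, 16))     # log16(263.80M) ≈ 7
print(depth, depth * POINT_BYTES)             # 7 levels -> 336-byte proof
```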
So what is Ethereum doing with verkle trees? Nothing yet! Instead, Ethereum is using the simple vector commitments for blobs of up to 4096 elements (32 bytes each), allowing up to six of these blobs to be posted alongside transactions per block, with a dynamic fee market of its own. How's that going for them?
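That works out to the following per-blob and per-block data volumes (using the six-blob cap referenced above):

```python
FIELD_ELEMENTS_PER_BLOB = 4096
BYTES_PER_ELEMENT = 32
MAX_BLOBS_PER_BLOCK = 6

blob_bytes = FIELD_ELEMENTS_PER_BLOB * BYTES_PER_ELEMENT
print(blob_bytes)                              # 131,072 bytes = 128 KiB per blob
print(blob_bytes * MAX_BLOBS_PER_BLOCK)        # 786,432 bytes = 768 KiB max per block
```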
Inscriptions showed up (as I predicted), more L2s showed up than there are blob slots per block, and this is only beginning to ramp up. The competitive fee market, separate from regular transactions, has eroded the originally significant 100x+ price reduction to something currently closer to a modest 3x.
If pressure continues, this will soon trend towards blobs becoming _more expensive_ to use than calldata, on top of the accounts producing the blobs still needing to exist in the world state PATRICIA-Merkle tree. But hey, no worries, verkle trees will make that more efficiently verifiable, right? Right?
According to certain active voices in the space, they're going to be bringing "the next 1 billion users onchain". What does that look like? Ethereum has accumulated some cruft over time, so despite a healthy average over the last year of about 500k daily active addresses, we're sitting at 263.80M accounts.
This is a rough estimate (before maxis surely @ me for this), but historic data gives us an idea of how much cruft we can reasonably assume will accumulate. At a ratio of ~528 addresses per active user, bringing 1 billion users onchain means adding roughly 528 billion new addresses.
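Spelling out that ratio and projection with the figures above:

```python
TOTAL_ACCOUNTS = 263_800_000
DAILY_ACTIVE_ADDRESSES = 500_000
TARGET_USERS = 1_000_000_000

ratio = round(TOTAL_ACCOUNTS / DAILY_ACTIVE_ADDRESSES)
print(ratio)                                   # ~528 addresses per active user
print(f"{TARGET_USERS * ratio:,}")             # 528,000,000,000 projected addresses
```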
Even with succinct proofs (10 points for a tree that large, or 480 bytes), the sheer scale of data that must be held is staggering. Not to mention each full node must hold it! So what about sharding? Ah, about that...
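Where the 10-point figure comes from - the same verkle arithmetic as before, at the projected scale:

```python
import math

PROJECTED_ADDRESSES = 528_000_000_000
POINT_BYTES = 48

depth = math.ceil(math.log(PROJECTED_ADDRESSES, 16))    # 10 levels
print(depth, depth * POINT_BYTES)                       # 10 points -> 480-byte proof
```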
Given L2s have been put into a turf war for very limited blob space at high prices, some concessions will need to be made here, and ultimately many of these billions of addresses would then never be able to live on the L1 proper. But wait – these L2s are mostly just variations of Ethereum itself.
How will they manage the scale? Many of them rely on being centralized, making them little more than glorified databases with proofs. But even this has its limits – namely, gas limits. Gas limits exist on Ethereum to ensure that many different computers can successfully verify a block quickly.
While many of these centralized L2s make roadmap promises to decentralize, the likelihood of this is contingent on never raising the gas limit too far – as one can very easily see with high-throughput chains like Solana, large blocks have a centralizing effect, limiting participation to very high-end hardware.
But the billions of people out there just waiting to be on-chain are not in one country. They do not share the same laws, or the same proximity (and latency) to the centralized sequencers, and invariably the beauty of decentralization - unfettered access to a new economy - becomes walled off, limited by geography and law.
Even still, billions will not fit on a single sequencer, so many L2s would have to work in tandem. The crux of the issue: they still have to reach consensus at the L1 before they can reconcile state, leading to minutes-long latencies between chains even in the best trustless case.
All the while, all this magnificent cryptographic novelty is being used at great expense – to build a slower, more expensive floppy disk and monochrome screen. Let's consider what Quilibrium's finalized architecture looks like. Quilibrium also utilizes KZG commitments, and has a partitioned, layered proof structure.
But where it differs is where the magic happens. At the highest level, replicated across the network, are a mere 256 points (in our case 74 bytes each, or ~19KB). These 256 points are aligned in a bipartite graph with 65536 points, forming the collective core shard commitments.
Traversing down from this leads to the core shards themselves, each responsible for managing up to 1GB of addressable data, split into 256 sectors, each sector containing 65536 sub-sectors, and each sub-sector containing 64 bytes.
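Those figures check out exactly, and the ~19KB replicated set from the previous paragraph falls out of the same arithmetic:

```python
REPLICATED_POINTS = 256
POINT_BYTES = 74                                  # point size used throughout this post
print(REPLICATED_POINTS * POINT_BYTES)            # 18,944 bytes, i.e. ~19KB replicated globally

SECTORS = 256
SUB_SECTORS_PER_SECTOR = 65_536
BYTES_PER_SUB_SECTOR = 64
print(SECTORS * SUB_SECTORS_PER_SECTOR * BYTES_PER_SUB_SECTOR)   # 1,073,741,824 bytes = 1GB per core shard
```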
Structurally, we produce a traversal length of only 4 actual points, but to prove to any node, rather than just cluster peers, we need to incorporate additional commitment data, leading to 8 points total, or 592B, to prove any of the bits on the network - with a greater capacity of bits than there are atoms in the universe.
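In byte terms, using the 74-byte points above (the 4 + 4 split follows the description in the text):

```python
POINT_BYTES = 74
TRAVERSAL_POINTS = 4        # points along the structural traversal
EXTRA_POINTS = 4            # additional commitment data to convince any node, not just cluster peers

print((TRAVERSAL_POINTS + EXTRA_POINTS) * POINT_BYTES)   # 8 points -> 592 bytes
```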
What this comes down to is a series of architectural decisions, no different than the ][ vs. the PET. You can get so much more mileage if you use a single small blob to contain shard commitments instead of a fee market anyone can post garbage to, and ditch the verkle tree altogether for a faster commitment scheme.