In which I run with the GHOSTs of my hometown*
In his youth he reasoned: Since gold is more valuable, it ranks as money; whilst silver, which is of lesser value, is regarded as commodity… But at a later age he reasoned: Since silver [coin] is current, it ranks as money; whilst gold, which is not current, is accounted as commodity.
– Babylonian Talmud, Tractate Baba Mezi’a, pg 44
Mistakes happen
In 2013 I entered a master’s degree program in computer science at the Hebrew University. The head of school recommended that I work with Aviv Zohar; Aviv suggested we work on Bitcoin, but, coming from undergraduate studies in mathematics, I found Bitcoin awfully boring and practical. Half a year later, during a Shabbat dinner, a friend told me about Bitcoin, and since I hadn’t done anything useful in the time that had passed, I decided to return to Aviv (I had spent the preceding semester trying to solve the Goldbach conjecture in order to win a $1M prize. In retrospect, I could have gotten the money by investing a few bucks into Bitcoin. Mistakes happen.1)
My lab project turned into a Financial Crypto paper analyzing the security-speed tradeoff of Nakamoto Consensus and proposing the GHOST protocol. GHOST is an alternative to Bitcoin’s longest chain rule: it uses the proof of work embedded in off-chain blocks by traversing the tree structure (which results from forks under high block rates) and selecting the main chain differently. The idea was to count the number of blocks that extend a certain block rather than the length of the chain above it. This was supposed to alleviate the need to suppress the block rate, and to allow for very speedy blocks without suffering from orphans and deteriorating security. Some time after the protocol’s publication, Vitalik Buterin incorporated GHOST into the Ethereum white paper and roadmap. Ethereum didn’t end up implementing GHOST. However, it did end up implementing a variant of our Inclusive Blockchain Protocols, also published in Financial Crypto that year (coauthored with Yoad Lewenberg). That paper was the first to propose the directed acyclic graph structure for blocks, the “blockDAG,” but its focus was on increasing throughput (for the same security) and linearizing the block rewards across miners.
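To make the difference from the longest chain rule concrete, here is a minimal sketch of GHOST-style fork choice over a block tree. It illustrates the counting idea only; the `Block` type, the tie-breaking rule, and the example tree are my own simplifications, not the paper’s algorithm or any production code.

```go
package main

import "fmt"

// Block is a toy block-tree node with only the fields needed for fork choice.
type Block struct {
	ID       string
	Children []*Block
}

// subtreeSize counts a block plus every block that (transitively) extends it.
func subtreeSize(b *Block) int {
	size := 1
	for _, c := range b.Children {
		size += subtreeSize(c)
	}
	return size
}

// ghostChain walks down from genesis; at each fork it follows the child whose
// subtree contains the most blocks, i.e. the child referenced by the most
// proof of work, rather than the child carrying the longest chain.
// Ties go to the first child encountered (a simplification).
func ghostChain(genesis *Block) []*Block {
	chain := []*Block{genesis}
	cur := genesis
	for len(cur.Children) > 0 {
		best := cur.Children[0]
		for _, c := range cur.Children[1:] {
			if subtreeSize(c) > subtreeSize(best) {
				best = c
			}
		}
		chain = append(chain, best)
		cur = best
	}
	return chain
}

func main() {
	// A fork below "a": a linear chain b -> c -> x (3 blocks) versus a bushier
	// subtree rooted at d with three children (4 blocks in total).
	// The longest-chain rule follows b; GHOST follows d.
	x := &Block{ID: "x"}
	c := &Block{ID: "c", Children: []*Block{x}}
	b := &Block{ID: "b", Children: []*Block{c}}
	d := &Block{ID: "d", Children: []*Block{{ID: "e"}, {ID: "f"}, {ID: "g"}}}
	a := &Block{ID: "a", Children: []*Block{b, d}}

	for _, blk := range ghostChain(a) {
		fmt.Print(blk.ID, " ")
	}
	fmt.Println() // prints: a d e
}
```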
Around the same time we developed a more thorough approach towards PoW consensus, and as part of it we realized a weakness in GHOST that does not allow it to scale regardless of network parameters—meaning, GHOST’s block times and throughput, like Bitcoin’s, are still limited by the network’s latency. We then directed our efforts towards devising a PoW protocol, or family of protocols, that would alleviate the speed-security tradeoff, and would allow for arbitrarily short block times (provided the network’s capacity limit is not exceeded). These protocols eventually became my PhD thesis, as well as my personal obsession (there are others, but I find this particular one less useless).
Longest chain is largest 0-cluster
The proposition underlying my PhD research was that PoW can be made into an internet-speed service, if done correctly. Beyond achieving speedy confirmation times, high block rates reduce the variance of block rewards, mitigate the need to join mining pools, and thereby contribute to the decentralization of mining.
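To put rough numbers on the variance claim (the hashrate share and block rates below are arbitrary illustrative figures, not measurements), consider how long a small solo miner waits between rewards at different network block rates:

```go
package main

import "fmt"

func main() {
	// Illustrative figure only: a small solo miner holding 0.01% of total hashrate.
	const share = 0.0001

	// Block finding is (roughly) a Poisson process, so the miner's expected time
	// between rewards is 1 / (share * network block rate). Compare a Bitcoin-like
	// rate (~144 blocks/day) with a one-block-per-second rate (86400 blocks/day).
	for _, blocksPerDay := range []float64{144, 86400} {
		waitDays := 1.0 / (share * blocksPerDay)
		fmt.Printf("%6.0f blocks/day -> one reward every %5.1f days on average\n",
			blocksPerDay, waitDays)
	}
	// At ~10-minute blocks this miner waits ~69 days per reward; at 1-second
	// blocks it is a few hours, so income is far less lumpy without a pool.
}
```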
This vision is reinforced by the observation that Nakamoto Consensus is a special case of a larger family of protocols, PHANTOM (in fact, Nakamoto Consensus is the slowest member of this family), which can support a high block rate—subsecond block times, say. These protocols are based on blockDAGs. The core idea behind using a PoW-based DAG is to replace the mining paradigm of Nakamoto Consensus, in which miners propagate and extend the winning chain only, with a more informative paradigm, in which miners propagate and extend the entire history of blocks—each new block points at all recent blocks in the history, rather than at the winning one alone. This history—the blockDAG ledger—may contain conflicts, so the protocol needs to provide a preference ordering over all blocks in order to resolve inconsistencies. The properties of the resulting blockDAG system, and specifically its resilience to 49% attackers and its speed of convergence, are determined by this ordering protocol.
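The paradigm shift can be pictured with a toy blockDAG in which each new block references every tip its (simulated) miner currently sees. The types and method names below are hypothetical simplifications for illustration, not the project’s actual data structures.

```go
package main

import "fmt"

// Hash stands in for a block hash; a string keeps the sketch readable.
type Hash string

// DAGBlock is a hypothetical header: unlike a chain block it may commit to
// several parents, one per tip its miner observed, so no recent block is orphaned.
type DAGBlock struct {
	Hash    Hash
	Parents []Hash
}

// BlockDAG stores blocks by hash and tracks the current tips, i.e. blocks with
// no known children, which honest miners reference from their next block.
type BlockDAG struct {
	blocks map[Hash]*DAGBlock
	isTip  map[Hash]bool
}

func NewBlockDAG(genesis Hash) *BlockDAG {
	return &BlockDAG{
		blocks: map[Hash]*DAGBlock{genesis: {Hash: genesis}},
		isTip:  map[Hash]bool{genesis: true},
	}
}

// Tips lists the blocks a newly mined block would point at.
func (dag *BlockDAG) Tips() []Hash {
	tips := []Hash{}
	for h := range dag.isTip {
		tips = append(tips, h)
	}
	return tips
}

// AddBlock inserts a block whose parents are the tips its (simulated) miner saw.
func (dag *BlockDAG) AddBlock(h Hash, parents []Hash) *DAGBlock {
	b := &DAGBlock{Hash: h, Parents: parents}
	dag.blocks[h] = b
	for _, p := range parents {
		delete(dag.isTip, p)
	}
	dag.isTip[h] = true
	return b
}

// Past returns every block reachable from b via parent links: the history that
// b's miner vouches for. Ordering rules such as GHOSTDAG operate on this set.
func (dag *BlockDAG) Past(b *DAGBlock) map[Hash]bool {
	past := map[Hash]bool{}
	stack := append([]Hash{}, b.Parents...)
	for len(stack) > 0 {
		h := stack[len(stack)-1]
		stack = stack[:len(stack)-1]
		if !past[h] {
			past[h] = true
			stack = append(stack, dag.blocks[h].Parents...)
		}
	}
	return past
}

func main() {
	dag := NewBlockDAG("genesis")
	// Two miners find b1 and b2 concurrently, each having seen only genesis...
	dag.AddBlock("b1", []Hash{"genesis"})
	dag.AddBlock("b2", []Hash{"genesis"})
	// ...and the next block references both tips instead of orphaning one of them.
	b3 := dag.AddBlock("b3", dag.Tips())
	fmt.Println(len(dag.Past(b3))) // 3 blocks in b3's past: genesis, b1, b2
}
```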
The latest protocol that we devised is called GHOSTDAG, and it is a member of a family of protocols called PHANTOM. In PHANTOM, we select (i.e., give precedence to in the ordering) the largest k-cluster in the DAG, where a k-cluster is a sufficiently connected set of blocks—“sufficiently” as defined by the k parameter. This pre-encoded parameter represents an upper bound on the expected width of the DAG, informally the expected degree of asynchrony. When you choose k=0, PHANTOM coincides with the longest chain rule, and so it is a family of protocols of which Nakamoto Consensus is a special case. GHOSTDAG is a practical, efficient variant of PHANTOM that greedily selects a large k-cluster rather than searching for the largest one.
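For concreteness, here is a sketch of the k-cluster condition as described above, under simplified assumptions (naive reachability, a toy-sized DAG, illustrative block names): a set of blocks is a k-cluster if each of its blocks has at most k of the set’s other blocks in its anticone, i.e., blocks that are neither in its past nor in its future. With k=0 no two blocks in the set may have been mined in parallel, so the admissible sets are exactly chains, which is why the largest 0-cluster is the longest chain.

```go
package main

import "fmt"

// parents maps each block to its parent blocks; this toy DAG has a fork where
// x and y were mined in parallel (they are in each other's anticone).
var parents = map[string][]string{
	"genesis": {},
	"x":       {"genesis"},
	"y":       {"genesis"},
	"z":       {"x", "y"},
}

// inPast reports whether a is an ancestor of b (naive DFS; fine for a sketch).
func inPast(a, b string) bool {
	for _, p := range parents[b] {
		if p == a || inPast(a, p) {
			return true
		}
	}
	return false
}

// inAnticone reports whether a and b are "parallel": neither is in the other's past.
func inAnticone(a, b string) bool {
	return a != b && !inPast(a, b) && !inPast(b, a)
}

// isKCluster checks the connectivity condition: every block in the set has at
// most k of the set's other blocks in its anticone.
func isKCluster(set []string, k int) bool {
	for _, a := range set {
		anticone := 0
		for _, b := range set {
			if inAnticone(a, b) {
				anticone++
			}
		}
		if anticone > k {
			return false
		}
	}
	return true
}

func main() {
	full := []string{"genesis", "x", "y", "z"}
	chain := []string{"genesis", "x", "z"}
	fmt.Println(isKCluster(full, 0))  // false: x and y are parallel, so k=0 rejects the full set
	fmt.Println(isKCluster(chain, 0)) // true:  a 0-cluster is just a chain
	fmt.Println(isKCluster(full, 1))  // true:  k=1 tolerates one parallel block per block
}
```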
Block liveness matters
Some people argue that “faster block times” is meaningless, and that confirmation times do not depend on block times. I agree that in a highly liquid, GPU-based mining platform, a transaction’s time to (probabilistic) settlement does not shorten just because blocks are faster. Having conceded this, I will elaborate in a different post (WIP) on the importance of a high block rate for faster confirmations, especially in an ASIC-powered system.
The path to preminelessness
A few years into my PhD, there was interest from the community in building these protocols into a working platform. Guy Corem, and later Sizhao Yang, helped me raise a few million dollars from crypto VCs—a small amount in crypto 2018 terms—and we bootstrapped a small team of developers and researchers called DAGlabs. The money was given with the understanding that there would be no presale of tokens; premine or founders’ rewards models are quite antithetical to a decentralized cryptocurrency project, as they usually (read: always) imply the control of a central organization, be it a for-profit company or a “non-profit” foundation. Instead, DAGlabs used part of the money to purchase mining equipment, and it will participate in mining the new token with the plainly stated advantage of being first to its market. More on this, and on mechanisms for community members to participate in mining, in a separate post. The major part of the funding went into R&D. DAGlabs developed code based on a fork of the BTCD full-node codebase. A primary objective was keeping the design of the system as close to Bitcoin’s architecture and assumptions as possible: PoW, blocks, UTXO, transaction fees, etc. The codebase underwent significant refactoring by the core devs, who then adapted it to a blockDAG governed by the GHOSTDAG protocol.
Kaspa it is
The vision behind this project is to build a Nakamoto-like service that operates at internet speed. We wanted to build a system that surpasses the limits of Satoshi’s v1 protocol (aka Nakamoto Consensus) yet adheres to the same principles embedded in Bitcoin. Contrary to Satoshi’s vision, Bitcoin did not become a peer-to-peer electronic cash system. Instead, it is solidifying as the ultimate store of value, or e-gold, and that’s pretty much it from the Bitcoin side.2 This is no small achievement by any measure—it’s one of the most important financial revolutions in human history. Yet it leaves lots of room for improvement (of L1) and/or for choosing different tradeoffs (for L1).
For a thorough inspection of Satoshi’s vision, I highly recommend Examining a Conspiracy Theory about Satoshi’s Intent by Elliot Old.
We look to silver, which presented a different tradeoff vs. gold. As demonstrated in the prefacing quote about gold vs. silver (in the original Aramaic text: Dahava vs. Kaspa), silver was historically treated as less precious than gold but more circulative, less valuable yet more acceptable as payment. With this in mind, I suggested the name “Kaspa” for the project; “kaspa” is the Aramaic word for “silver” and “money.”
There are no solutions, only tradeoffs
Aside from optimizing for speed and accelerating the block rate, there are several axes on which I suggest Kaspa trades off differently than Bitcoin:
- Increased throughput — gain: lower transaction fees; loss: increased CPU and bandwidth consumption for full nodes. I suggest that Kaspa, unlike Bitcoin, not be optimized for home users’ ability to run full nodes (even though they most probably would be able to do so with 2020 commodity hardware) but rather for flexibility and user convenience (manifested in speed of confirmations and low transaction fees). Specifically, Kaspa aims at supporting a few thousand transactions per second, similar to credit card companies’ transaction volume (a rough back-of-envelope sketch follows this list).
An alternative is to keep the throughput of the base layer low in terms of transaction count, and specifically the number of cryptographic primitives per block, yet allow for large payload data, with the intention of supporting smart contracts in L2. This would keep the CPU requirement for full nodes low but increase the bandwidth requirement; one can imagine demand for a new type of node, a partial node, which validates all base-layer transactions similarly to a full node yet skips the data availability checks with respect to transactions’ payloads. I will continue this discussion in a separate post (WIP).
- Pruning historical data by default — gain: faster sync for new nodes; loss: new nodes sync in SPV mode by default or, if they wish to validate the entire history, through resource-heavy (hence more centralized) archival nodes. More on this in a different post (WIP), in which I argue that validating historical data, and thereby protecting against historical corruption, is not meaningful to user sovereignty.
- Support for L2 expressiveness — gain: a plethora of use cases; loss: a slight increase in the attack surface of the base layer due to the addition of scripts. The base consensus of Kaspa should be limited, Bitcoin-style, with scripts that unlock specific users’ funds and that have no read/write permissions over the global state. At the same time, innovation around Ethereum L2, specifically Rollups, allows the base layer to enforce consensus without being aware of [anything other than a hash commitment to the] state. Kaspa should incorporate this, and should participate in the amusing innovation of DeFi going on over Ethereum. I will propose a new design that optimizes for speed of confirmation of L2 transactions in a different post (WIP).
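As a rough back-of-envelope for the throughput item above (the average transaction size and per-core signature-verification rate below are placeholder assumptions of mine, not Kaspa parameters), a few thousand transactions per second translates to roughly the following load on a full node:

```go
package main

import "fmt"

func main() {
	// Placeholder assumptions, not protocol parameters; adjust to taste.
	const (
		txPerSec        = 3000.0  // "a few thousand" transactions per second
		bytesPerTx      = 250.0   // rough average size of a simple UTXO transaction
		sigVerPerSecCPU = 20000.0 // ballpark signature verifications per second on one core
	)

	bandwidthMBps := txPerSec * bytesPerTx / 1e6
	coresForSigs := txPerSec / sigVerPerSecCPU
	dailyGrowthGB := txPerSec * bytesPerTx * 86400 / 1e9

	fmt.Printf("~%.2f MB/s of transaction data\n", bandwidthMBps)               // ~0.75 MB/s
	fmt.Printf("~%.2f CPU cores for signature checks\n", coresForSigs)          // ~0.15 cores
	fmt.Printf("~%.0f GB/day of ledger growth before pruning\n", dailyGrowthGB) // ~65 GB/day
}
```

Under these assumptions, bandwidth and CPU remain modest for commodity hardware; the figure that stands out is ledger growth, which is what connects this item to the pruning item above.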
On a zero-sum game vs Litecoin
Admittedly, as Nick Szabo likes to remind us, any new cryptocurrency starts out with a negative score on social scalability. The same can be said of any new social network, not only money-powered ones. The sentiment against the proliferation of coins is shared by almost everyone—we are all maximalists with respect to most cryptocurrencies created by other teams; non-coiners just go one coin further.3
In a sense, Kaspa does not aim to become yet another PoW coin but to replace an existing one—Litecoin. Litecoin was supposed to be the silver to Bitcoin’s gold, or so we were told. However, it turned out to be quite useless, providing no meaningful functionality over Bitcoin—it is essentially a basket of minor parameter changes to Bitcoin. From the perspective of a 2020 end user, there’s no major difference between waiting 15 minutes and waiting an hour; both are equivalent forms of “forever.” I admit to not fully understanding the “lite” aspect of Litecoin. Despite the meme, nothing in Litecoin’s design makes it lighter; it bears the same block size limit as Bitcoin and a theoretically higher throughput capacity. The only reason it is currently “lite” is that it is empty… But a silver should be used more—not less—frequently than gold to justify itself, and a crypto silver’s ledger should in fact be heavier than Bitcoin’s.
Perhaps it is widely understood that Litecoin is simply a more light-minded version of Bitcoin, its community being more agile and less principled than Satoshi’s adamant followers. The founder’s handle, @satoshilite, speaks to this better than any further discussion would. I suppose this light-mindedness was a positive thing at the time, as a testbed for SegWit was very much needed. (Just saying: if serving as a testbed for one patch to Bitcoin’s protocol provided Litecoin with an intrinsic value of 10^10 USD, imagine the potential market cap of Kaspa serving as a testbed for ten Bitcoin protocol patches!)
My hope is for Kaspa to become a home for the principled community of devs, researchers, and PoW fans; to be a vibrant testbed for new ideas, a place where innovation is welcome (outside the money base!) and where the path to integrating tested and justified features is in sight. This agility is one of the privileges of being the self-proclaimed “little brother,” and we should take full advantage of it.
The burden of PoW is on us.
Notes
1. I did solve the conjecture though, which was really fun and satisfying. It started by first breaking RSA using a hypothetical machine that transmitted waves whose crest points were supposed to coincide with the solutions of the relevant parabolic Diophantine equation. It turns out that, for this to work in practice, the size of the machine would have to exceed the size of the observable universe, as our math professor explained to me. He claimed that my solution was equivalent to using an infinite register, hence trivial. Although he was empathetic and all, I decided to leave the production-ready version for another time. ↩
2. Notably, there are efforts to make Bitcoin usable for payments at large scale: the Lightning project. It is an interesting albeit unproven path to scaling, one which is more complex conceptually and UX-wise, which requires more trust (watchtowers, etc.), and whose economics and decentralization dynamics are as yet unknown. ↩
3. By the way, Richard Dawkins’ original statement assumes a very specific ordering of arguments. But if you first polled the question of whether Nature was created or rather created itself into being, most of us would probably take the former stance, despite disagreements about the pseudoidentity of its Creator. If you then ran the GHOST protocol on the nodes of statements, you would arrive at a different conclusion than the one Dawkins hoped for. So ordering matters. ↩