KEEPING THE CYBERVEIN DAG IN CHECK WITH PROOF OF CONTRIBUTION
DAGS ARE AS BEAUTIFUL AS THEY ARE EFFICIENT. HOWEVER, THE MATHEMATICAL STRUCTURE ARISING FROM THEIR DESIGN CREATES CHALLENGES THAT NEED TO BE ADDRESSED BEFORE DAGS CAN BE CONSIDERED AS FAIL-PROOF AS TRADITIONAL BLOCKCHAINS.
MOST DAG PROJECTS DO THIS BY INTRODUCING CENTRALIZED ELEMENTS. THIS POST EXPLAINS HOW CYBERVEIN SOLVES THIS WHILE REMAINING A 100% DECENTRALIZED NETWORK.
DAGs are amazing structures of mesmerizing beauty and mathematical elegance. They present an excellent example of how robust complexity can be achieved by the virtue of relatively simple rules.
The simplicity of the concept behind DAGs is captivating: each broadcast transaction entails the validation of two previous ones, creating an ever-increasing throughput rate as the network grows. However, the mathematical structure arising from this design creates challenges that need to be addressed before DAGs can be considered as fail-proof as traditional blockchains.
To better understand these challenges, and how CyberVein addresses them, let’s start with an example:
In this example, transaction #5 performs the validation work for #1 and #3, and will itself be considered valid once #9 returns the favor. Note how, in this simplified example, transactions are not necessarily validated in the same order in which they are broadcast: #2 remains unverified while #3 is already considered valid. Similarly, #5 will see three transactions pass before it enters the official record.
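The validation pattern above can be sketched in a few lines of Python. The transaction numbers and approval pairs follow the example; the data structure itself is a hypothetical illustration, not CyberVein's actual implementation or tip-selection rule:

```python
class Tx:
    def __init__(self, tx_id, approves):
        self.tx_id = tx_id
        self.approves = approves   # the two predecessors this tx validates
        self.approved_by = []      # successors that validate this tx in turn

dag = {}

def broadcast(tx_id, approves):
    """Each new transaction performs the validation work for two earlier ones."""
    tx = Tx(tx_id, approves)
    dag[tx_id] = tx
    for parent_id in approves:
        dag[parent_id].approved_by.append(tx_id)
    return tx

# Early transactions with nothing to approve yet (simplification)
broadcast(1, [])
broadcast(2, [])
broadcast(3, [])
broadcast(5, [1, 3])   # #5 validates #1 and #3 ...
broadcast(9, [5, 2])   # ... and is itself validated once #9 returns the favor

# Validation order need not match broadcast order:
print([t for t in dag if dag[t].approved_by])  # → [1, 2, 3, 5]
```

Note that #9, the newest transaction, has no approvers yet and thus remains unverified, exactly the "low Confidence" state discussed next.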
This temporary uncertainty (or “low Confidence”) is mitigated as the ledger marches on, while more and more transactions validate their respective trees of predecessors which eventually overlap to create a coherent network state.
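One simplified way to model this growing Confidence, assuming (purely for illustration; the actual metric is not specified here) that confidence is just the count of direct and indirect approvers:

```python
# approvals maps each transaction to the set of transactions that
# directly validate it, mirroring the example above.
approvals = {
    1: {5}, 2: {9}, 3: {5}, 5: {9}, 9: set(),
}

def confidence(tx_id):
    """Count all transactions whose approval trees include tx_id."""
    seen, stack = set(), list(approvals[tx_id])
    while stack:
        succ = stack.pop()
        if succ not in seen:
            seen.add(succ)
            stack.extend(approvals[succ])
    return len(seen)

print(confidence(1))  # → 2: both #5 and #9 sit on top of #1
print(confidence(9))  # → 0: the newest transaction is still unverified
```

As more transactions build on top of a given one, its count (and hence its Confidence) only grows, which is why the uncertainty is temporary.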
If the network is sufficiently large, this happens quickly enough not to pose a real problem. Until then, however, and in several other corner cases, this attribute of DAGs opens up various attack vectors.
An attacker could potentially exploit this situation to mount a "double spending" attack, i.e. spending the same dollar twice, or even spending money they do not own to begin with. This would be done by spamming the network with trash transactions while Confidence is still low, eventually throwing the network out of sync.
HOW THIS IS SOLVED
There are several ways to approach this. Before we dive into CyberVein’s unique proposal, let’s summarize the methods most DAG-developing projects utilize:
TEMPORARY CENTRALIZED CONTROL
Most DAG projects introduce Master Nodes, or “Coordinators”, which are essentially appointed nodes that are trusted by the network’s developers. These nodes are privileged in the sense that all their validations have an immediate 100% “Confidence”, without the network needing to “grow” on top of them. If an attacker tries to spam the network with trash transactions, they’ll be out of sync with the Coordinator’s verdict and hence ignored.
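The Coordinator idea can be sketched as a simple confidence override. The node IDs and the growth formula below are hypothetical placeholders, not any project's actual parameters:

```python
COORDINATORS = {"coord-1"}   # appointed nodes trusted by the developers

def effective_confidence(validator_id, accumulated_approvals):
    """Coordinator validations are final at once; ordinary validations
    must accumulate approvals before reaching full confidence."""
    if validator_id in COORDINATORS:
        return 1.0
    # hypothetical growth rule: confidence saturates after 10 approvals
    return min(1.0, accumulated_approvals / 10)

print(effective_confidence("coord-1", 0))   # → 1.0 (immediate, privileged)
print(effective_confidence("node-7", 3))    # → 0.3 (must wait for the DAG to grow)
```

An attacker's spam transactions would simply never be covered by the Coordinator's validations and would therefore stay at low confidence forever.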
This method is cheap and fast but does entail trust in the network’s developers. A truly decentralized network cannot and should not rely on centralized control, otherwise it pretty much misses the point of its own existence.
ARTIFICIAL VALIDATION COSTS
Another way to deal with spam transactions is by attaching an artificial added cost to the validation process. This is essentially what Bitcoin introduced with its Proof-of-Work algorithm some nine years ago. By posing a requirement to perform costly cryptographic work to validate previous transactions, attacks are rendered infeasible, both practically as well as economically. An attacker would need to outperform the entire network in order to corrupt it, which is technically impossible in most cases, and not worth the while in all of them.
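The PoW principle in miniature (a toy difficulty for illustration, nowhere near Bitcoin's real parameters): producing a valid nonce is expensive, while verifying it is cheap.

```python
import hashlib

def proof_of_work(payload: bytes, difficulty: int = 4) -> int:
    """Search for a nonce so the hash starts with `difficulty` hex zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = proof_of_work(b"validate tx#5")
# Verification takes one hash; finding the nonce took thousands of attempts.
assert hashlib.sha256(b"validate tx#5" + nonce.to_bytes(8, "big")) \
    .hexdigest().startswith("0000")
```

Every extra zero in the target multiplies the expected search cost by 16, which is exactly the kind of artificial expense the next section argues is wasteful.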
This method has already been proven secure, but it is very costly and inefficient. The waste of electricity and computational resources entailed in this process is neither tolerable nor sustainable in the long run. Furthermore, it makes transactions disproportionately expensive and relies on an easily monopolized resource (hashing power, in this case).
PROOF OF CONTRIBUTION
Proof of Contribution (PoC), just like Bitcoin’s PoW, makes attacks costly and technically impossible. In contrast to PoW, however, Proof of Contribution demands “work” that is actually useful to the system. Instead of having nodes solve otherwise useless cryptographic puzzles, PoC measures the much-needed storage capacity a node donates to the network.
To understand how this works, we’ll briefly examine how DAGs store their transaction ledger:
Unlike blockchain nodes, nodes in a DAG network are not required to store the entirety of the network’s transaction history. Instead, each node stores only the transaction and validation history relevant to its own operations. This is called “sharding”, and it is one of the DAG’s most attractive features.
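A toy illustration of this kind of sharding (the account names and ledger contents are hypothetical): each node keeps only the slice of history that touches it.

```python
ledger = [
    {"from": "alice", "to": "bob",   "amount": 5},
    {"from": "carol", "to": "dave",  "amount": 2},
    {"from": "bob",   "to": "carol", "amount": 1},
]

def local_shard(node_account, full_ledger):
    """A node stores only the transactions relevant to its own operations."""
    return [tx for tx in full_ledger
            if node_account in (tx["from"], tx["to"])]

print(len(local_shard("bob", ledger)))  # → 2: bob stores 2 of the 3 transactions
```

The storage saving per node is what makes DAGs light, but it is also why extra redundancy, discussed next, has to be incentivized separately.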
However, the more nodes volunteer to store more than they are required to, the higher the redundancy within the network, and consequently its safety and reliability. PoC incentivizes nodes to do so by compensating them monetarily. In addition, nodes that store the entire ledger are identified by PoC as “Full Nodes”, which perform the same function as the Master or Coordinator nodes mentioned above.
Since storage space is a scarce resource, it acts as a “spam fee” that prevents attackers from flooding the network with the fictitious Full Nodes an attack would require. This method is fully decentralized, much cheaper, and more resource-conserving than Proof-of-Work and similar approaches.
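The two PoC roles described above can be combined in a short sketch. The ledger size, the classification threshold, and the proportional reward split are all hypothetical stand-ins, not CyberVein's published parameters:

```python
LEDGER_SIZE = 10_000_000_000   # total ledger size in bytes (made-up figure)

def classify(stored_bytes: int) -> str:
    """A node holding the entire ledger acts as a 'Full Node' (coordinator-like)."""
    return "full" if stored_bytes >= LEDGER_SIZE else "partial"

def poc_reward(stored_bytes: int, total_donated: int, epoch_reward: float) -> float:
    """Each node's payout is its share of all storage donated this epoch."""
    return epoch_reward * stored_bytes / total_donated

print(classify(LEDGER_SIZE))                      # → 'full'
print(poc_reward(2_000_000, 10_000_000, 100.0))   # → 20.0
```

Because an attacker would have to donate real storage to field each fictitious Full Node, the scarce resource itself serves as the spam fee.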
Consequently, implementing PoC results in a securely decentralized network from day one, with considerably lower transaction costs than traditional blockchains. Additionally, by using storage as the barrier to spam, the network scales much better than it would using computational resources, which are easier to consolidate and monopolize.