
Dark Archeology - Digging through Stellar XDR Archives

Everybody knows that Stellar Network is a blockchain-powered platform, right? However, its architecture is completely different from, say, Bitcoin or Ethereum. There is no mining, no PoW or PoS algorithms; it resembles a database with a simplified API rather than Bitcoin's sequence of blocks. So why is it listed on CoinMarketCap alongside other "classic" blockchains? Because Stellar is an immutable distributed ledger.

Immutability is one of the key blockchain concepts. All blocks in the ledger are sequential, and each new block is bound to its predecessor. Nobody can modify a block somewhere in the middle of the chain and enjoy an extra million on their account balance, because any such change invalidates the whole chain. So we can safely assume that data on the blockchain is legitimate, having been verified by multiple network validators. Let's look closer at Stellar History Archives, the source of truth and ultimate authority on the Stellar Network.

Each synced Stellar Core node maintains its own database with an up-to-date ledger state of the whole network. That's right: all accounts and their balances, all trustlines, current SCP quorums – strictly speaking, a snapshot of everything the node needs to know about the network. Therefore, when the node receives a transaction, it can validate and apply it, altering the current ledger state. And here we come to ledger immutability. When a node in the quorum set closes a ledger, the cryptographic hash of the previous ledger is included in the freshly forged one. That way, we can be sure that none of the preceding blocks can be modified without compromising the whole ledger state.

A validating Stellar Core node periodically creates so-called "checkpoints" – consistent history archives used by other nodes to catch up with the active quorum. Checkpoints are made once every 64 ledgers (approximately once every 5 minutes). Each checkpoint contains a set of XDR-encoded and gzipped files that describe the current ledger state and the changes applied since the previous checkpoint.

So is it possible to replay the complete Stellar history without Stellar Core? The answer is yes. Whether you want to aggregate historical statistics, run a fine-grained audit, or write your own Horizon implementation, you just need to fetch the data from a history archive, then deserialize and process the XDR-encoded data structures. Archives contain the complete history, so there is no need to write an app that interacts with any stellar-core peers in order to replay/audit the whole blockchain. OK, it sounds interesting – let's delve deeper.

Each checkpoint contains the following history files:

- One History Archive State (HAS) with checkpoint metadata in JSON format.

- A ledger-headers file containing only metadata headers for all included ledgers.

- A transactions file, the concatenation of all the transactions applied in all the ledgers of a given checkpoint.

- A results file which stores the results of applying each transaction.

- Zero or more bucket files that can be used to reconstruct the network state at the checkpoint time without replaying transactions.

- An SCP file with a sequence of nomination and ballot protocol messages exchanged during consensus for each ledger.

For our needs, we will analyze only three types of files: ledger headers, transactions, and results. Ledger headers contain handy system metadata like the ledger timestamp, protocol version, network-wide constants (max transaction set size, base fee, base account reserve), fee pool size, total lumens, ledger hash, and SCP state. All ledger state changes that occurred when validators applied transactions are recorded in the results files.

How do we download all those files? At the time of writing, the last known ledger on the Stellar public network was 19643699. Given that checkpoints are made every 64 ledgers, we have to download (19643699 / 64) * 3 ≈ 920,798 files. Of course, we can fetch them sequentially, but that would take a few days even on a broadband connection. The only way to speed up the process is to perform multiple downloads in parallel. In practice, you can safely run 20-40 concurrent downloads depending on your internet connection speed.

Tip for developers: if you are using a remote server as an archive source, implement a local disk cache first and write all fetched files into some temp directory – you will have a local archive copy that will save you hours each time you re-ingest the whole ledger from scratch.
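To sketch the whole idea, here is a minimal example in Node.js (assuming Node 18+ with the built-in fetch; the archive URL points to one of the SDF public history archives, and the concurrency limit is just a starting point):

```js
const {mkdir, writeFile, access} = require('fs/promises')
const {dirname, join} = require('path')

//one of the SDF public history archives – any other archive mirror works the same way
const archiveRoot = 'https://history.stellar.org/prd/core-live/core_live_001'
const cacheDir = './archive-cache'

async function fetchArchiveFile(relativePath) {
    const localPath = join(cacheDir, relativePath)
    try { //serve the file from the local disk cache if it was fetched earlier
        await access(localPath)
        return localPath
    } catch (e) { /* not cached yet – download it */ }
    const res = await fetch(`${archiveRoot}/${relativePath}`)
    if (!res.ok) throw new Error(`Failed to fetch ${relativePath}: ${res.status}`)
    const data = Buffer.from(await res.arrayBuffer())
    await mkdir(dirname(localPath), {recursive: true})
    await writeFile(localPath, data)
    return localPath
}

async function fetchAll(relativePaths, concurrency = 30) {
    const queue = [...relativePaths]
    //spawn a fixed number of workers that pull paths from the shared queue
    const workers = Array.from({length: concurrency}, async () => {
        while (queue.length > 0) {
            await fetchArchiveFile(queue.shift())
        }
    })
    await Promise.all(workers)
}
```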

The XDR file naming scheme looks a bit odd when you see it for the first time:

category/ww/xx/yy/category-wwxxyyzz.xdr.gz

where

- category describes the content of the file (ledger, transactions, results).
- wwxxyyzz is the 32-bit ledger sequence number of the checkpoint at which the file was written, expressed as an 8-hex-digit, lowercase ASCII string.

So if we want to download archive entries for, say, ledger 19643647, first we need to convert the sequence number to its hex representation (012bbcff) and then generate the relative file path, like transactions/01/2b/bc/transactions-012bbcff.xdr.gz.

Note that unlike all other checkpoints, the first one contains only 63 ledgers. Therefore, we can enumerate checkpoints with a simple algorithm where the next checkpoint sequence number equals the current sequence plus 64 (63, 127, 191, 255, and so on).
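The conversion itself boils down to a few lines – here is a simplified sketch (the full helpers are linked below):

```js
function checkpointPath(category, ledgerSeq) {
    //checkpoint sequences always end on a 64-ledger boundary minus one (…3f, …7f, …bf, …ff)
    const checkpoint = Math.ceil((ledgerSeq + 1) / 64) * 64 - 1
    const hex = checkpoint.toString(16).padStart(8, '0')
    return `${category}/${hex.slice(0, 2)}/${hex.slice(2, 4)}/${hex.slice(4, 6)}/${category}-${hex}.xdr.gz`
}

function* checkpointSequences(fromLedger, toLedger) {
    //enumerate checkpoint ledger sequences: 63, 127, 191, 255, …
    for (let seq = Math.ceil((fromLedger + 1) / 64) * 64 - 1; seq <= toLedger; seq += 64) {
        yield seq
    }
}

checkpointPath('transactions', 19643647) //transactions/01/2b/bc/transactions-012bbcff.xdr.gz
```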


Here are helper functions in plain JS that I wrote to deal with archive file paths:

Code: https://gist.github.com/kylemccollom/6e32e4b11ecf5659b82f34bb69ef342c.js

Once the archive is downloaded, it can be unzipped and parsed using the corresponding XDR data contract.
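For example, a transactions file can be read like this. The sketch assumes a recent js-stellar-base build that ships the history XDR types; archive files are streams of XDR records framed with RFC 5531 record marks (a 4-byte big-endian length with the high bit set before each record):

```js
const {gunzipSync} = require('zlib')
const {readFileSync} = require('fs')
const {xdr} = require('stellar-base')

//generic reader for XDR stream files: each record is prefixed with a 4-byte
//big-endian length whose highest bit is set (RFC 5531 record marking)
function* readXdrStream(buffer, xdrType) {
    let position = 0
    while (position < buffer.length) {
        const length = buffer.readUInt32BE(position) & 0x7fffffff
        position += 4
        yield xdrType.fromXDR(buffer.subarray(position, position + length), 'raw')
        position += length
    }
}

const raw = gunzipSync(readFileSync('./archive-cache/transactions/01/2b/bc/transactions-012bbcff.xdr.gz'))
for (const entry of readXdrStream(raw, xdr.TransactionHistoryEntry)) {
    console.log(entry.ledgerSeq()) //each entry contains the tx set of one ledger
}
```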

Suppose we have fetched all archives successfully; now we need to analyze and apply them consecutively to replay all ledger state changes. And here we enter the territory where many treacherous pitfalls and perplexing riddles await us.

The previous phase gave us deserialized ledger headers, transactions, and application results. So we just have to prepare graph-like ledger structures by attaching transactions to ledgers and results to the corresponding transactions, right? Not exactly.

First of all, result entries reference the parent transaction by its hash, yet the transaction XDR does not contain a hash field. So we have to parse the XDR and calculate the transaction hash ourselves using the SHA-256 algorithm. Note that the network identifier is also mixed into the hashing base, so you will get two completely different hashes for the public and test networks from the same transaction XDR.
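With js-stellar-base, one simple option (a sketch, not the only way to do it) is to rebuild the transaction from its envelope and let the SDK hash the signature payload:

```js
const {TransactionBuilder, Networks} = require('stellar-base')

function transactionHash(txEnvelope, networkPassphrase = Networks.PUBLIC) {
    //the network passphrase is mixed into the hashing base, so the same
    //envelope hashed with Networks.TESTNET yields a completely different value
    return TransactionBuilder.fromXDR(txEnvelope, networkPassphrase).hash().toString('hex')
}
```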

OK, we have the transactions for a ledger and a set of corresponding results. Now we can save everything to the database and analyze the data! Not so fast... The order of transactions in the XDR tx sets does not always match the transaction application order. If you replay them as-is, at some point you might end up with negative account balances or payments without trustlines. To make everything right, we'll have to stick to the application order of transactions recorded in the results file.
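Since the results file stores entries in application order, one way to restore it (a sketch relying on the transactionHash() helper above; accessor names assume js-stellar-base) is to index the tx set by hash and iterate the result pairs:

```js
function orderByApplication(txSetEnvelopes, resultPairs, networkPassphrase) {
    //index the tx set envelopes by their computed hash
    const byHash = new Map(txSetEnvelopes.map(envelope =>
        [transactionHash(envelope, networkPassphrase), envelope]))
    //TransactionResultPair entries from the results file are already in application order
    return resultPairs.map(pair => ({
        hash: pair.transactionHash().toString('hex'),
        envelope: byHash.get(pair.transactionHash().toString('hex')),
        result: pair.result()
    }))
}
```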

The next curious thing is that the transaction set archive file may contain some failed transactions in addition to successful ones, and there is no way to tell them apart without looking into the corresponding results file. Let's see which failed transactions are included and why. If a transaction submitted to Stellar Core passes basic validation (all required fields like source account and sequence number are present, and it has the correct signature), the Stellar Core node tries to apply it. Some operations may still fail – for example, because the destination account does not have a trustline to the transferred asset – and the whole transaction is discarded. Hence the operations from that transaction should be ignored. Still, we cannot just omit such transactions, because they result in source account balance changes: the transaction fee is charged anyway. If we are going to do, say, an account balance audit, failed transaction fees should also be counted. Currently, the Horizon server does not allow fetching such transactions, so it cannot be used to consistently reconstruct account balances.
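In code this boils down to something like the following sketch (accessor names assume js-stellar-base; the fee is deducted from the source account regardless of the outcome):

```js
function applyTransactionResult(sourceAccount, txResult, balances) {
    //balances is a Map of accountId -> balance in stroops
    //the fee was charged even if the transaction itself failed
    const feeCharged = Number(txResult.feeCharged().toString())
    balances.set(sourceAccount, (balances.get(sourceAccount) || 0) - feeCharged)
    if (txResult.result().switch().name !== 'txSuccess') {
        return //failed transaction – ignore its operations, only the fee applies
    }
    //...apply operation results and their effects here
}
```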

In some cases, even the reconstructed transaction+results combination is not sufficient to determine the actual operation effects. For instance, it's impossible to tell whether a trustline was created or changed by the CHANGE_TRUST operation – the operation results XDR contains CHANGE_TRUST_SUCCESS in both cases. Horizon generates two different effects in such a situation, either TRUSTLINE_CREATED or TRUSTLINE_UPDATED. To achieve the same output, you'll have to maintain an in-memory map (we don't want to run thousands of much slower database queries instead of using an in-memory cache, correct?) with all existing trustlines and look up matches on each CHANGE_TRUST operation occurrence.
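A minimal sketch of such a cache (the key format is arbitrary and the effect names mirror Horizon's; this is an illustration, not Horizon's actual implementation):

```js
const knownTrustlines = new Set() //keys like 'accountId:assetCode:assetIssuer'

function resolveChangeTrustEffect(accountId, assetCode, assetIssuer, newLimit) {
    const key = `${accountId}:${assetCode}:${assetIssuer}`
    if (newLimit === '0') { //setting the limit to zero removes the trustline
        knownTrustlines.delete(key)
        return 'TRUSTLINE_REMOVED'
    }
    if (knownTrustlines.has(key)) return 'TRUSTLINE_UPDATED'
    knownTrustlines.add(key)
    return 'TRUSTLINE_CREATED'
}
```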

Meaningless operations serve as a cherry on the cake. PATH_PAYMENT is the champion here: some apps/wallets use it in all cases, including regular payments (where the source and destination assets are the same) and as an alternative to trades initiated by the MANAGE_OFFER operation. Did you know that you can send yourself any amount of any custom asset via the PAYMENT operation without creating a trustline to the issuing account? The operation yields no effects and the balance is not changed, but it is considered valid by Stellar Core.

As you can see, this quest is only for those with a strong spirit. What's waiting for us at the end of the road? Of course, piles of treasure for the brave!

I implemented my own custom archive ingestion engine to overcome the growing problems of the previous version of the StellarExpert backend, which was built on top of the Stellar Horizon database, and it gave me all the benefits I expected. The database size is only 16 GB (MongoDB with zlib compression) compared to 250 GB of fully-synced Horizon+Core databases with auxiliary aggregation tables. Full ledger ingestion takes only 10-11 hours compared to 4-6 days of Stellar Core catchup. All statistics are calculated on the fly, and I have much more information to work with. So it was worth the effort without any doubt. Do not hesitate to explore history archives in case you need to work on the low level – you'll get a truly hardcore and fascinating experience.

Originally published here.