
Blockchain Scalability Trade-offs Banner

Blockchain Scalability Trade-offs: How to Choose a Platform

Blockchain scalability is one of the most frequently cited reasons that blockchain adoption isn’t happening faster.

The truth is, most blockchain networks are slow when compared to traditional centralized technologies and networks like AWS or the Visa network. Ethereum, for example, can process on the order of 15 transactions per second, and Bitcoin is even slower.

On the other hand, blockchain networks have unique properties, such as native digital scarcity and unstoppability, that cannot be achieved in the same way with centralized approaches.

As developers continue to experiment and iterate on novel uses of these properties in decentralized software applications, popular platforms are hitting up against scalability and transaction limits. In this context, blockchain scalability is widely seen as a limiting factor in further blockchain adoption among software developers and end users.

In response to these blockchain scalability issues, the community has poured a lot of energy into developing scalability solutions at Layer 2 (e.g. Lightning, Plasma) and migrating existing platforms to faster consensus mechanisms (e.g. Ethereum 2.0). These solutions are all interesting in their own right, but I would argue that they are less important for scalability than design decisions in the protocol itself, and the use cases which are being targeted by those designs.

Developers who are investigating which decentralized platform to build on should carefully weigh their requirements against the design goals of the platforms they are considering. Two important trade-offs that determine the scalability you can achieve for your application are the level of decentralization and the level of programmability you require. Not every application needs maximum decentralization or programmability.

The Top Two Trade-offs to Blockchain Scalability

Level of Decentralization

The Blockchain Trilemma

For those who are unfamiliar with the famous blockchain trilemma, it states that when designing a blockchain protocol, you can only have two of these three properties: security, decentralization, and scalability. The insight is that it is hard to achieve all three simultaneously.

From my point of view, the most interesting use cases for blockchains are related to the storage and movement of value, so any significant sacrifices to security seem like a non-starter.

The key point behind the trilemma is the relationship between decentralization and scalability. It is easy to achieve scalability if you sacrifice decentralization. For example, traditional deployments using AWS infrastructure achieve very high scalability in a centralized model. But in this deployment model, the key properties that make blockchains interesting — namely, native digital scarcity combined with unstoppability — disappear.

Some projects use this relationship to their advantage. Take the example of EOS. In the EOS network, there are 21 block-producing nodes. This is far fewer than Ethereum or Bitcoin. By becoming more centralized, EOS achieves far higher transaction throughput than either Ethereum or Bitcoin. Twenty-one nodes isn’t totally centralized, but it is much more centralized than many other blockchain networks.

EOS’s goal is to be decentralized enough to keep the most interesting blockchain properties intact, but centralized enough to achieve significantly higher performance than competing blockchain networks. The question to ask yourself as a DApp developer is, what level of decentralization does your use case require? How worried are you about censorship of your application? Many high-value-oriented use cases may require higher levels of decentralization; for others, it may not matter.

Level of Programmability

The level of programmability that is supported by a blockchain protocol is at least as important a factor for blockchain scalability as decentralization. The key question is, what use cases are your applications going after, and what level of on-chain logic do you need to achieve your goals?

I can think of a range of applications with varying needs for platform programmability, running from money and asset transfer use cases on one end of the spectrum to Web3/decentralized applications on the other. A money transfer application, a securitized tokenization platform, or a marketplace for digital collectibles may be perfectly well served by a platform that supports a rich set of asset transfer use cases. A DAO or decentralized exchange that needs extensive onchain logic, on the other hand, would require a full Turing-complete programming environment.

All other things being equal, platform scalability degrades as you move toward the decentralized-application end of this spectrum. Bitcoin may seem like an exception, since it sits firmly on the “money” end of the spectrum and still isn’t scalable, but the point stands: Bitcoin would be even less scalable if it had full Turing-complete smart contracting functionality.

Platform scalability also decreases as you add Turing-complete smart contract functionality. With Turing-complete smart contracts, you need a gas concept to meter contract execution and produce deterministic behavior, which adds fees and operational costs to your application. You also need to allow smart contracts to store arbitrary state or data on the chain, which leads to more fees and increased storage requirements for the underlying blockchain nodes. Most smart contract platforms have a single one-size-fits-all virtual machine for all contracts on the platform, which can also become a scalability bottleneck. All of these factors reduce the scalability and throughput of Turing-complete smart contract platforms.

Opposite Ends of the Blockchain Scalability Spectrum: Algorand and Ethereum

Let’s take a couple of concrete platform examples to illustrate the point.

Algorand Prioritizes Performance Over Turing-Complete Programmability

Algorand is a high-performance, next-generation blockchain that focuses on the money and asset end of the design spectrum, and is also highly decentralized (unlike EOS). It achieves very high performance and transaction throughput by focusing on money, asset, and asset transfer use cases and doing them well.

Algorand’s new scripting language, TEAL, is intentionally not Turing-complete, avoiding the gas fees, arbitrary storage, and infinite loops that come along with Turing-complete smart contract platforms. These are specific choices that allow it to achieve high performance for money, asset, and asset transfer scenarios. This is why applications supporting these use cases with high throughput requirements, such as Tether and Securitize, are porting their applications to run on Algorand.

However, if your use case requires Turing-complete onchain logic, such as DAOs, onchain DEXes, or onchain decentralized protocols sitting above the base protocol layer like Compound or 0x, then the level of programmability in Algorand isn’t as good a fit.


Ethereum Prioritizes Turing-Complete Programmability Over Performance

By contrast, Ethereum is the most prevalent full smart contract platform. Ethereum puts programmability first, at the expense of throughput and scalability.

Scalability issues on Ethereum result from the arbitrarily complex logic execution and arbitrarily large storage that come along with its smart contracts. Logic and storage are metered by gas fees, and these fees have increased over time as the number of smart contracts has increased. The underlying blockchain on an Ethereum full node (leaving out archival nodes for now) takes over 100 GB of storage and is still growing. Ethereum has a small fraction of the scalability of Algorand from a transactions-per-second and node storage perspective.

However, what you get in return is the ability to express arbitrary smart contract logic in the Ethereum virtual machine. And since all the programs run in the same virtual machine, you get composability between different contracts, which is driving a lot of advances in the DeFi space right now.

As a DApp developer, you should carefully consider which parts of your application need to be decentralized / onchain, and whether you really need a full Turing-complete smart contract language to build your application. Turing-complete smart contract platforms allow for arbitrary expressions of onchain logic, but often at a cost in terms of scalability and throughput.

Choosing the Right Platform for Your Application

The scalability of the underlying platform is a critical part of choosing where to build. Even if your application itself doesn’t need high levels of scalability, those scalability challenges inevitably lead to high transaction fees, which will increase your operating costs as an application project, and may make certain use cases unviable from an economic perspective.

You should carefully consider the level of decentralization and programmability you need to support your use cases when choosing a platform. For a use case that requires only an ERC-20 contract on Ethereum, but does not require interoperability with other Ethereum smart contracts, it may make sense to look at platforms that are optimized for this scenario, such as Algorand. On Ethereum, you could be paying a lot more unnecessarily without realizing the benefit of the increased programmability.

 

Algorand Node API Acceleration

Accelerating Performance of the Algorand Node API

It’s been a few months since we launched our Algorand API as a Service. In that time, we have gotten a lot of great feedback on what is working well and what still needs more work.

One thing we have heard and observed ourselves is that certain operations time out and don’t return any data. Digging into this, we observed that one of the queries that we provide has highly variable performance associated with it. Our API service is fronting pools of Algorand nodes, and calls are serviced by the Algorand Node Rest API on these nodes. The performance issues we are seeing are issues with a specific query to the Node Rest API itself.

The /v1/account/{address}/transactions Endpoint

The REST endpoint that has the performance issues is /v1/account/{address}/transactions. This query is only available if you run a full archival indexer node.

The purpose of this endpoint is to be able to query the transaction history for a given account. It is a useful query from a developer point of view as you could use it to, for example, populate a transaction history when looking at an account in a wallet or other application.
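
For reference, a call to this endpoint from Python might look like the sketch below. The base URL, header name, and parameter names here are illustrative placeholders rather than exact values from our service, so check the PureStake portal documentation before using them.

import requests

# Placeholder values -- substitute the endpoint and key from your PureStake portal account.
BASE_URL = "https://testnet-algorand.api.purestake.io/ps1"  # assumed base URL
API_KEY = "your-api-key"                                    # assumed key value

def get_account_transactions(address, from_date=None, to_date=None):
    """Fetch recent transactions for an account via the node REST API."""
    headers = {"X-API-Key": API_KEY}  # assumed header name; see the portal docs
    params = {}
    if from_date is not None:
        params["fromDate"] = from_date
    if to_date is not None:
        params["toDate"] = to_date
    resp = requests.get(
        f"{BASE_URL}/v1/account/{address}/transactions",
        headers=headers,
        params=params,
        timeout=30,  # the service cuts long-running queries off at around 30 seconds
    )
    resp.raise_for_status()
    return resp.json()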

Sometimes this query returns quickly, and sometimes it times out. By default, the query has a maximum result set of 100. The current behavior is that, given an account, the indexer node will start walking backwards from the head of the chain, looking for transactions that meet the query constraints to fill its result bucket.

This becomes problematic for accounts with relatively low transaction volume. If there has been a lot of transaction activity on the account, the query will reach the result set limit quickly and return. If there has not been a lot of transaction activity, it will keep walking backwards — all the way to genesis — looking for transactions that meet the query criteria. In this second, low-transaction activity scenario, the query generally times out and also has the side effect of making the node unresponsive to other queries.

This second scenario is problematic, since users are actively hitting this endpoint, which starts taking nodes in our node pools offline for periods of time. While we have plenty of capacity in our node pools, if enough of these queries were received in rapid succession, we could suffer an outage due to a kind of denial of service situation.

Even for accounts that have a large transaction volume, when parameters restricting the scope of the search are used, for instance fromDate and toDate, the query becomes non-performant and the service suffers.

Improving the Endpoint with a Backend Datastore

First, it is important to recognize that the node can’t be optimized for every situation. It already performs a variety of different roles including supporting consensus, relaying, etc. However, given the current behavior of this query, we must provide an improved path to our customers so they can reliably retrieve transaction data.

We have decided to replace the backend handler for this query with a datastore that is optimized to return results much more quickly and efficiently.

This backend datastore is based on AWS Aurora and includes a set of AWS lambda data management routines to keep this datastore reliably in sync with the Algorand TestNet, MainNet, and BetaNet. Queries coming into this particular endpoint will be serviced by this new datastore. All other queries will remain serviced by our node pools.


Philosophical Questions

The direct node API represents the truth in terms of the state of the Algorand blockchain. The downside to servicing this query with an alternate third-party data store is that there is a chance that the third-party will return the wrong data due to a bug or other problem with the infrastructure.

At PureStake, we spent some time debating this situation. We could have just turned off the endpoint, but that didn’t seem like a good solution, and certainly not helpful to developers trying to build on Algorand. After all, this is a useful query to have available for a variety of use cases.

Additionally, we had to consider that the way the query works against the node isn’t necessarily what most developers want. Even for the scenarios where the query returns — accounts with high transaction activity — you only get the last 100 transactions, not all of them.

In the spirit of providing a solution to the intent behind this endpoint, we decided to move ahead with offering a more performant version of this query, while at the same time keeping it compatible with the available SDKs. However, we will clearly distinguish between queries that are serviced by the node and ones that are serviced by other infrastructure we are running. We will mark this particular query in the response headers and in our portal as being serviced in a different way than the other node API queries. We feel this approach strikes a good compromise and helps developers building on Algorand.

Comparative Performance of the New Endpoint

To take a concrete example, the following query when run against a TestNet indexer node will time out (on a reasonably spec’d machine):

GET /v1/account/EWZYOHWLR2C44MDIPNZMOGZMAAY66BWI2ALGJB2NE22TWO7YGNCS7NTFVQ/transactions

The state of account EWZYOHWLR2C44MDIPNZMOGZMAAY66BWI2ALGJB2NE22TWO7YGNCS7NTFVQ is that it has 2 transactions in its history, fewer than the 100-result query limit, so the node will start walking back towards genesis looking for transactions. The query times out after 30 seconds for API users, but will continue running on the node until complete; in a test for this article, the query was manually stopped after 25 minutes. The I/O and IOPS on the node go very high while the query is running, consuming the allotted burst capacity and eventually locking the machine up.

Backed by the new datastore, queries to this endpoint generally return in under 1 second, or within about 3 seconds from a cold Lambda start.

Next Steps and Future Direction

It isn’t possible for the node to achieve high performance for all queries.

PureStake’s new datastore will be the basis of a new query-optimized set of endpoints that will be offered alongside the node-based APIs. These APIs would be well-suited to certain types of applications, such as explorers, wallets, etc. However, users will have the choice to opt for the node-backed APIs or the query-optimized APIs, depending on their preferences.

PureStake will likely continue to create different kinds of optimized data stores over time in order to support different types of queries and use cases. A denormalized data warehouse is another obvious optimization for aggregate and over-time data queries that aren’t possible with the current node API.

We may remove our query-optimized endpoint for this specific node API in the future, if the performance and behavior of the underlying node API changes.

Are there other APIs or features you would like to see added to the PureStake API services application? Reach out to us and let us know.

 

Banner for Demystifying Algorand Rewards

Demystifying Algorand Rewards Distribution: A Look at How & When Algorand Token Rewards Are Calculated

The Algorand Proof of Stake network issues rewards to its token-holders in order to stimulate and grow the network — but it’s not always clear how these returns are calculated. In this article, I’ll outline the basic framework of the Algorand rewards distribution model, and walk through some potential impacts it could have on the rewards calculations for current token-holders.

What are Network Rewards?

Traditional Proof of Work (PoW) based blockchains (BTC, ETH) rely on miners to validate transactions and record them in consecutive blocks on the ledger. In this effort, the miners compete against each other by solving complex mathematical problems. The miner with the winning block receives a certain amount of tokens as a reward for their work.

This basic economic principle provides the incentive to support the required infrastructure to run the network. One of the major flaws of PoW networks is the unproductive effort put into competing for mining blocks, resulting in relatively slow performance and huge amounts of energy wasted.

The Algorand blockchain, like many newer blockchain networks, is solving the inherent issues of PoW with the concept of Proof of Stake. Unlike PoW miners, PoS networks rely on validators to verify transactions written to each block on the chain.

Algorand chooses its validators using a process called sortition, with selection weighted by the amount of tokens each validator holds, referred to as stake. Just like miners, the validators receive rewards for their work, but unlike PoW, no resources are wasted on unnecessary, competing work.

The rewards are typically a percentage yield of return on the amount of stake a validator is holding. That way, the economic incentives for validators are directly linked to the amount of stake they hold: the higher your stake, the more interest you have in contributing to the health of the network, and the more rewards you earn.

How are Rewards Distributed on Algorand?

Algorand currently deploys a more liberal rewards scheme in order to benefit every token holder whether or not their tokens are staked and participating in the consensus protocol. The idea behind this is to stimulate the adoption and growth of the network by rewarding all token holders equally. To support the operation of the network during this initial period, Algorand has issued token grants to early backers for running network nodes to help bootstrap a scalable and reliable initial infrastructure backbone.

As the network grows over time, I anticipate that the rewards distribution will shift favoring active stake-holders and validators.

Algorand Rewards Distribution Schedule

The current rewards distribution is determined and funded by the Algorand Foundation. You can read a detailed explanation of the overall token dynamics here.

The first 6M blocks on the Algorand blockchain have been divided into 12 reward periods of 500,000 blocks each. Each period is funded by an increasing amount of reward tokens to offset the increasing total supply of tokens. The token supply periodically increases due to early backer grant vesting and token auctions facilitated by the foundation.

 

| Period | Start Date (estimated) | Starting Block | Ending Block | Rewards Pool (Algo) | Block Reward (Algo) |
| 1  | 6/10/2019  | 1         | 500,000   | 10,000,000 | 20 |
| 2  | 7/7/2019   | 500,001   | 1,000,000 | 13,000,000 | 26 |
| 3  | 7/31/2019  | 1,000,001 | 1,500,000 | 16,000,000 | 32 |
| 4  | 8/25/2019  | 1,500,001 | 2,000,000 | 19,000,000 | 38 |
| 5  | 9/20/2019  | 2,000,001 | 2,500,000 | 22,000,000 | 44 |
| 6  | 10/15/2019 | 2,500,001 | 3,000,000 | 25,000,000 | 50 |
| 7  | 11/10/2019 | 3,000,001 | 3,500,000 | 28,000,000 | 56 |
| 8  | 12/5/2019  | 3,500,001 | 4,000,000 | 31,000,000 | 62 |
| 9  | 12/31/2019 | 4,000,001 | 4,500,000 | 34,000,000 | 68 |
| 10 | 1/25/2020  | 4,500,001 | 5,000,000 | 36,000,000 | 72 |
| 11 | 2/20/2020  | 5,000,001 | 5,500,000 | 38,000,000 | 76 |
| 12 | 3/16/2020  | 5,500,001 | 6,000,000 | 38,000,000 | 76 |
Algorand Network Rewards Schedule

Distribution Mechanics

At the time of writing, we have just entered the sixth period of the current rewards distribution schedule (see period 6 in the table above). According to the schedule, 50 Algo are distributed as network rewards in each block, approximately every 4.4 seconds.

Algorand rewards are calculated each block based on the account balance of every to-or-from address recorded on the blockchain. The minimum account balance that is eligible for receiving rewards is currently 1 Algo.

If you monitor any eligible account, you will notice that the account balance doesn’t get the rewards applied at every block, but after a certain number of blocks instead. This is a function of the smallest unit of Algo that can be disbursed by the system, which is 1 microAlgo (10⁻⁶ Algo).

Since the minimum balance is currently 1 Algo and the smallest reward is 1 microAlgo, you can calculate the number of rounds it will take for rewards to be credited using this formula:

rounds = 10⁻⁶ / (block reward / total supply) = (10⁻⁶ × total supply) / block reward

With a supply of 2,103,868,588.605500 Algo at block 2550212 and a block reward of 50 Algo, a reward of 1 microAlgo is disbursed for each Algo held every 42 rounds, or roughly every 3 minutes. This will obviously change as the total supply grows.
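
As a quick sanity check, this short Python snippet plugs the numbers from the example above into that formula:

# Values taken from the example in this post
total_supply = 2_103_868_588.6055   # Algo in circulation at block 2550212
block_reward = 50                   # Algo distributed per block in period 6
block_time_sec = 4.4                # approximate block time

# Rounds for 1 Algo of holdings to accrue 1 microAlgo of rewards
rounds = 1e-6 / (block_reward / total_supply)
print(round(rounds))                                    # ~42 rounds
print(round(rounds) * block_time_sec / 60, "minutes")   # roughly 3 minutes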

Rewards APR vs APY

When you check the account balance of any address, the node API will return the balance that was last recorded on-chain plus any accrued rewards. The combined balance will not be written to the blockchain until the address appears in an actual transaction. In other words, rewards are only committed to the account balance on the blockchain when the address appears in a to-or-from address as part of a transaction. This is expected behavior, as ongoing, immediate on-chain updates of the balances of all eligible accounts on every round would pose a serious performance challenge.

There is a small but important consequence of this behavior. As pointed out earlier, rewards are calculated based on the recorded on-chain balance for each account. That means that none of the accrued rewards are included in this ongoing calculation. In essence, it is not compounding the accrued rewards interest. For accounts with large account balances, the difference in reward returns can be significant.

The current daily rewards percentage return based on the assumption of 1 microAlgo every 3 minutes is:

daily rewards percentage = (24 × 60 minutes / 3 minutes) × 10⁻⁶ × 100% = 0.048%

At the time of this post, an account holding 1M Algo would generate about 480 Algo per day, or roughly 175,000 Algo per year in rewards, with an effective APR of 17.52%. If the rewards were compounded on a daily basis, the effective APY would increase to about 19.15%. These numbers will change over time as the rewards schedule changes, the total supply grows, and transaction fees potentially increase.
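
These figures are easy to reproduce with a couple of lines of Python; this is a back-of-the-envelope check rather than an exact on-chain calculation:

daily_rate = (24 * 60 / 3) * 1e-6      # 480 microAlgo per Algo per day = 0.048%
apr = daily_rate * 365                 # ~0.1752, i.e. 17.52% simple APR
apy = (1 + daily_rate) ** 365 - 1      # ~0.191, i.e. roughly 19.1% with daily compounding

balance = 1_000_000                    # Algo held
print(balance * daily_rate)            # ~480 Algo per day
print(balance * apr)                   # ~175,200 Algo per year, uncompounded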

Rewards Compounding

Compounding rewards is simple. Since rewards are calculated from the last recorded balance on the blockchain, the easiest way to force rewards compounding is to send a zero Algo payment transaction to the target address on a frequent, recurring basis. This transaction will trigger the commit of all accrued rewards and record them to the on-chain balance of the account.
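
Here is a minimal sketch of that compounding transaction using py-algorand-sdk. It assumes a recent version of the SDK, placeholder algod connection details, and a mnemonic read from an environment variable purely for illustration; treat it as the shape of the approach rather than production-grade key handling.

import os
from algosdk import account, mnemonic, transaction
from algosdk.v2client import algod

# Placeholder connection details -- point these at your own node or API service.
ALGOD_ADDRESS = "http://localhost:8080"
ALGOD_TOKEN = "your-algod-token"
client = algod.AlgodClient(ALGOD_TOKEN, ALGOD_ADDRESS)

# The account whose accrued rewards we want to commit to its on-chain balance.
private_key = mnemonic.to_private_key(os.environ["COMPOUND_MNEMONIC"])
address = account.address_from_private_key(private_key)

# A zero-Algo self-payment: it moves nothing, but touching the account in a
# transaction forces the accrued rewards to be written to the on-chain balance.
params = client.suggested_params()
txn = transaction.PaymentTxn(sender=address, sp=params, receiver=address, amt=0)
signed_txn = txn.sign(private_key)
txid = client.send_transaction(signed_txn)
print("Sent compounding transaction:", txid)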

So what is the ideal compounding frequency if you want to maximize your rewards?

For one, it doesn’t make sense to send compounding transactions more frequently than the number of rounds it takes for rewards to be disbursed, currently >3 minutes. Secondly, every transaction has an associated transaction fee, currently 1,000 microAlgo. This means the cost of these transactions shouldn’t outweigh the gains achieved by compounding.

Given that:

  • Rewards APR: 17.52%
  • Transaction fee: 0.001 Algo

 

Net Compounding Rewards Interest at Specific Account Balances
(Annual simple rewards interest: 8,760,000 Algo on the 50,000,000 Algo balance; 876,000 Algo on 5,000,000; 8,760 Algo on 50,000)

| Trx Frequency | Trx/Year | Trx Charge (Algo) | Net Comp Rewards Int (50M) | Rewards APY (50M) | Net Comp Rewards Int (5M) | Rewards APY (5M) | Net Comp Rewards Int (50K) | Rewards APY (50K) |
| 10 min  | 52,560 | 52.560 | 9,574,154.528546 | 19.15% | 957,368.148855 | 19.15% | 9,521.647089 | 19.04% |
| Hour    | 8,760  | 8.760  | 9,574,111.351504 | 19.15% | 957,403.251150 | 19.15% | 9,565.360112 | 19.13% |
| 2 x Day | 730    | 0.730  | 9,572,971.479139 | 19.15% | 957,296.490914 | 19.15% | 9,572.242209 | 19.14% |
| Day     | 365    | 0.365  | 9,571,719.996054 | 19.14% | 957,171.671105 | 19.14% | 9,571.355361 | 19.14% |
| Week    | 52     | 0.052  | 9,556,683.398054 | 19.11% | 955,668.293005 | 19.11% | 9,556.631450 | 19.11% |
| Month   | 12     | 0.012  | 9,498,812.777423 | 19.00% | 949,881.266942 | 19.00% | 9,498.800789 | 19.00% |

The table above shows the maximum return for each account balance under the current conditions: roughly 10-minute compounding for the 50M Algo balance, hourly for 5M Algo, and twice daily for 50K Algo. It also illustrates that there are diminishing returns at higher frequencies and that, for most cases, a daily compounding transaction will suffice to capture the core benefit of compounding rewards.
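
The table can be closely approximated with a short script using the simple formula balance * ((1 + APR/n)^n - 1) - n * fee; the exact on-chain mechanics differ slightly, so treat the output as an estimate:

APR = 0.1752        # current simple rewards rate
FEE = 0.001         # Algo per compounding transaction

frequencies = {     # compounding transactions per year
    "10 min": 52_560,
    "hour": 8_760,
    "2 x day": 730,
    "day": 365,
    "week": 52,
    "month": 12,
}

def net_compounded_rewards(balance, n):
    """Approximate net annual rewards when compounding n times per year."""
    gross = balance * ((1 + APR / n) ** n - 1)
    return gross - n * FEE

for balance in (50_000_000, 5_000_000, 50_000):
    best = max(frequencies, key=lambda f: net_compounded_rewards(balance, frequencies[f]))
    print(f"{balance:,} Algo -> best compounding frequency: {best}")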

Conclusion

Clearly some frequency of rewards compounding always makes sense for long-held account balances (e.g. staked balances). Since rewards are currently equally distributed across all accounts whether they are staked or not, the returns represent more of an inflationary rate across the entire supply rather than an interest return for node operators or staked balances.

It is also important to note that the compounding gains are significantly exaggerated at this time due to the already high rewards interest and the low transaction fees. You will need to evaluate what amount of compounding makes sense for you, based on changing parameters and held account balances.

PureStake automatically compounds all customer accounts that are staking with us. Please reach out to us if you would like to learn more about this and our services.

Vanity Algorand Address Generation Banner

Struggling to Organize Your Algorand Addresses? Use This Utility to Generate Vanity Addresses

Everybody has had the experience of second-guessing themselves after sending a transaction. Did I send to the right address? Did I mistype a character? Why isn’t the transaction showing up in the explorer?

Algorand addresses consist of a random collection of 58 capital letters and numbers. Working with these addresses can be challenging, as they don’t have any meaning. Getting the address wrong by even one character often means that funds are lost forever.

Some wallets allow you to name addresses, but these names are only available in the context of the wallet. There are projects such as Ethereum Name Service which aim to provide a DNS-like mapping from human-readable names to Ethereum addresses, but this service is specific to Ethereum and cannot be used for other chains like Algorand.

Vanity Algorand Addresses Make It Easier

Think about phone numbers from the pre-mobile era.

For most people, phone numbers aren’t meaningful by themselves. Back then, you committed phone numbers to memory in their entirety for family and friends. To introduce more meaning, businesses would often purchase vanity phone numbers that were easy for their customers to remember.

But when it comes to crypto accounts, raw addresses are far too long to remember in their entirety, so users — at best — remember the first few and last few characters.

Much like vanity phone numbers, vanity crypto addresses can be a way to make addresses more meaningful and easier to remember without the need for additional services. We might generate an address that starts with the string “PURE” to show that the account belongs to PureStake, or we might create a 5-letter customer code to help us remember which customer the account is for.

How to Use the Algorand Vanity Address Utility

I’ve created a small utility for generating Algorand vanity addresses. It can be used to generate addresses that start with a specified string. The time taken to generate a given address is dependent on the underlying computer hardware and, most importantly, on the number of characters you specify. 5 or 6 characters is the practical limit for a string that will finish in a reasonable amount of time.

The vanity generator can be downloaded here:
https://github.com/PureStake/api-examples/blob/master/python-examples/algo_vanity.py

It is written in Python and tested on Ubuntu 18.04. It requires python3, pip3, and py-algorand-sdk. It does not require a full Algorand node installation.

If you don’t have pip3 you can install it with:

sudo apt install python3-pip

With pip3 installed you can install py-algorand-sdk:

sudo pip3 install py-algorand-sdk

At this point, you should be able to run the program. Let’s generate an account that starts with “PURE” that we can use as an internal company account. You pass the string you want to look for as an argument to the program.

derek@puredev:~/py$ ./algo_vanity.py PURE
Detected 2 cpu(s)
Found a match for PURE after 2431449 tries in 123.83 seconds
Address: PUREXXP2S2IIOUP7ZSBIVBHOA54ZLYNND5G3YIENSAHJZ5D7AAYSCM7K5E
Private key: noble city arrest oyster pluck tennis toast flip same business drum below flame must lonely gorilla you local turtle desk suspect anger basic abandon upper

Or, in the case of customer accounts, let’s create a couple of numbered accounts of the form CXXXX, where XXXX is a customer id. Note that there is no 0 or 1 in Algorand addresses:

derek@puredev:~/py$ ./algo_vanity.py C2222
Detected 2 cpu(s)
Found a match for C2222 after 9893404 tries in 499.07 seconds
Address: C2222YNLSZHZ5PBO7L3UDOK7I5EZISQGSORDRGVSVNXGJNQCSKZDLK6LRE
Private key: swarm famous entry paper pause always magic hire burden aisle attack spring sport custom lend treat client burst decrease dad access pumpkin bulb ability blood
derek@puredev:~/py$ ./algo_vanity.py C2223
Detected 2 cpu(s)
Found a match for C2223 after 17209546 tries in 892.68 seconds
Address: C2223JBUAC43767TEDJCOD3T3N6A6L5ELVGGPSZWENAD7XUYFTMINWK734
Private key: dignity napkin faculty air mean mother crisp party spoon resource hub exhaust stand above logic siren book lock shallow shallow index copper hundred absent bring

The program works by simply brute-force generating address/key pairs over and over until it finds a match. There is no shortcut here, as any shortcut would represent a security problem for someone trying to guess the key for an existing account. The activity is similar to a Proof of Work mining algorithm.
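
At its heart, the utility is a loop like the following simplified, single-process sketch; the real algo_vanity.py linked above adds multiprocessing to use every available core:

import time
from algosdk import account, mnemonic

def find_vanity_address(prefix):
    """Generate accounts until one's address starts with the given prefix."""
    prefix = prefix.upper()
    tries = 0
    start = time.time()
    while True:
        private_key, address = account.generate_account()
        tries += 1
        if address.startswith(prefix):
            elapsed = time.time() - start
            print(f"Found a match for {prefix} after {tries} tries in {elapsed:.2f} seconds")
            print("Address:", address)
            print("Private key:", mnemonic.from_private_key(private_key))
            return address, private_key

find_vanity_address("PURE")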

If you use algo_vanity.py to generate vanity addresses make sure you do it on a secure machine, and that you take appropriate security precautions with the generated private keys. The script will take advantage of all the cores it finds on the machine.

Feel free to use algo_vanity.py to generate more meaningful addresses for your business accounts, or maybe just for fun.

How to Set Up a Ledger Nano S with an Algorand Account

Security Check-up: How to Use Ledger Nano S to Secure Algorand Accounts

Key management can be very stressful for cryptocurrency investors and users who control a large amount of crypto funds. Despite your best efforts, it can be very easy for attackers to compromise a key that has had any exposure to the internet, or to hijack a user’s phone, in order to steal digital assets. So it’s no surprise that hardware wallets have taken over as a popular (and effective) counter-measure against hackers seeking to gain access to your private keys.

Hardware wallets do have some limitations, though. In this article, I’ll briefly review some important points to consider before employing a hardware wallet, and then I’ll provide a step-by-step walkthrough on how to set up a Ledger Nano S in order to better secure a cryptocurrency account (in this case, an Algorand account).

Why You Should (or Shouldn’t) Use a Ledger

Hardware wallets such as the Ledger Nano S offer significant advantages over software-based wallets.

First, there’s the physical security element. By storing private keys for an account in a secure element within the hardware device, it becomes very difficult for an attacker to steal the private key for your account without having physical access to the device and knowledge of the PIN for the device.

By keeping the private key on a hardware wallet, you also reduce the risk of malware and other online attacks from compromising your account spending keys, since the keys never leave the physical device.

HOWEVER, hardware wallets can be finicky and difficult to use.

For example, in Algorand, only the command line tools support the Ledger device. That means you cannot use a Ledger with the Algorand mobile wallet, at least as of right now. Only the Ledger Nano S is supported; the Nano X is not yet supported. Using a Ledger on Algorand means you are limited to apps that specifically have Ledger support.

The Ledger also only supports a single key, so multisig configurations will require multiple Ledger devices, which adds complexity. Ledger support is planned for the Algorand mobile wallet application at some point, but it is not clear when this will happen. Using a Ledger with your Algorand account today requires comfort at the command line.

So, to recap:

Pros of Using a Hardware Wallet to Secure an Algorand Account

  • Difficult to steal the key without physical access to the device
  • Less likely to fall victim to malware & other online attacks

Cons of Using a Hardware Wallet to Secure an Algorand Account

  • Applications must explicitly support wallet hardware; currently, only the command line tools support the Nano S in Algorand
  • Limited multisig support, requiring a complex multi-device, multi-step process

 

How to Set Up the Ledger Nano S for Use With Algorand

To use a Nano S to secure an Algorand account, you first have to go through the basic setup of the Nano S. For this article, I’m going to assume that you are starting with a fresh Ledger Nano S that will only be used to store ALGOs securely.

To start, you will download the Ledger Live application to your computer. Ledger Live is what you use to manage the applications on your Ledger device. You can download Ledger Live here.

Once you install it and plug your Nano S into your computer, click the “Get Started” button. You should see this screen:

 

Get Started with a Ledger Nano S on Ledger Live Screenshot

 

When initializing a new device, the first step is to choose a PIN code:

 

Choose Your PIN Code in Ledger Live

 

You should follow the steps in the Ledger Live application and on your Ledger Nano S device. There are 2 buttons at the top of the device. Hitting both buttons simultaneously acts as the “Enter” option.

Again, since I’m assuming this is a new device, you will want to elect to configure it as new. This will wipe out any previous configurations on the device.

 

How to Set Up a Ledger Nano S as a New Device

 

The next step is to choose a PIN code. This is critical to preventing someone who steals the device from being able to use it to access your funds.

Follow the prompts on the device and in the Ledger Live app closely, and use the left and right buttons on the device to select a PIN code. It should be at least 6 digits long. Hitting both buttons advances you to the next position, and selecting the check mark indicates that you are done.

 

Choose a PIN Code for Your Ledger Nano S

 

The next step is to write down the recovery phrase for the device. This is critical: if you ever lose the device or it malfunctions, the recovery phrase will let you restore the account to another device. The Ledger Live app walks you through this:

 

Write Down the Recovery Phrase in Ledger Live So You Can Recover Your Ledger Nano S Later

 

You will need to write down all 24 words of the recovery phrase, and you will be tested to make sure you have written down all the words.

 

Confirm the Recovery Phrase of a Ledger Nano S

 

Once you have verified the recovery phrase, the base setup of the Ledger is complete.

Next, you need to use the Ledger Live application to install the Algorand application onto the device. Go to the manager section of the Ledger Live app, search for “algorand” and click to install the Algorand application.

 

Install the Algorand Application in the Manager Section of Ledger Live

 

Once the application is installed on the ledger, you should see the Algorand app as shown below.

 

Install Algorand on Ledger Live to Use with Your Ledger Nano S

 

The installation of the Algorand app on the ledger created an Algorand account and stored the private key of the account on the secure element of the Ledger device. The private key never leaves the device, but you can see the account address if you go into the Algorand app on the device under “Address”:

 

View a Public Account Address on a Ledger Nano S

 

Using the Ledger From the Algorand Command Line

Now that we have the ledger configured with the Algorand app, it is ready to use with an Algorand node installation.

In my examples below, I have an installation of Algod installed from the Debian package running under Ubuntu 18.04 with a synced blockchain. I have plugged in the Ledger device to the computer with Algod on it.

The first thing we can do is look to see that the Ledger device has been recognized. We can do this with the “goal wallet list” command:

 

ubuntu@ubuntu:/var/lib/algorand$ sudo goal wallet list -d /var/lib/algorand
##################################################
Wallet:    Ledger Nano S (serial 0001) (default)
ID:    0001:000a:00
##################################################
ubuntu@ubuntu:/var/lib/algorand$ sudo goal account list -d /var/lib/algorand
[offline]    Y2I3YF5AHFBMRUNKKXY6VPOT6QITCQMXSB5RDM2LG2IE74HGLDIROANCNE    Y2I3YF5AHFBMRUNKKXY6VPOT6QITCQMXSB5RDM2LG2IE74HGLDIROANCNE    0 microAlgos
ubuntu@ubuntu:/var/lib/algorand$

 

Note that the Ledger shows up as a wallet on this computer once it is plugged in. I did not create this wallet with the “goal wallet new” command; it was created for me when I plugged in the Ledger device. Issuing the “goal account list” command shows the single account on the device and the balance of that account, which is 0. I also did not create this account with the “goal account” command; it simply came along with the wallet that was automatically created.

When you list the accounts, if you get the error message “Error processing command: Exchange: unexpected status 680”, it means that you need to unlock the Ledger with your PIN. It should work after that.

In this example, the Algod node is on the TestNet. In order to try out a transaction, let’s use the TestNet dispenser to give our Ledger account some testAlgo:

 

Issue Algo Using the Algorand Dispenser

 

Using the dispenser, we issue 100 testAlgo to our account. After dispensing the Algo we can verify the balances using the “goal account list” command again:

 

ubuntu@ubuntu:~$ sudo goal account list -d /var/lib/algorand
[offline] Y2I3YF5AHFBMRUNKKXY6VPOT6QITCQMXSB5RDM2LG2IE74HGLDIROANCNE Y2I3YF5AHFBMRUNKKXY6VPOT6QITCQMXSB5RDM2LG2IE74HGLDIROANCNE 100000000 microAlgos
ubuntu@ubuntu:~$

 

Note that the account now has 100,000,000 microAlgos or 100 Algo in it. Now that we have a balance, let’s try sending a transaction from this account. To do this, we will use the “goal clerk send” command to send 1 Algo to another account:

 

ubuntu@ubuntu:~$ sudo goal clerk send -a 1000000 -f Y2I3YF5AHFBMRUNKKXY6VPOT6QITCQMXSB5RDM2LG2IE74HGLDIROANCNE -t OBONCJ4D4WEUYFWRDLZEJOMAN22HWZGZPAEWSPK7S6VOIHDCAFR3ACUSTA --note "" -d /var/lib/algorand

 

Note that the --note option with the empty string is needed, as the Ledger does not support values for the note field and will complain if you don’t explicitly specify the note field to be blank.

Once you issue this command, you will be prompted on the ledger to sign the transaction. Recall that the private signing key for this account never leaves the secure element of the ledger, so the signing action happens on the ledger device:

 

How to Initiate a Transaction on a Ledger Nano S

 

There are a bunch of details about the transaction that you are shown on the ledger device including sender, firstvalid round, lastvalid round, genesis id, genesis hash, receiver, and amount. You will ultimately be asked if you want to sign the transaction:

 

Sign a Transaction on a Ledger Nano S

 

If you click yes, you will see progress on the Algod command line:

 

Sent 1000000 MicroAlgos from account Y2I3YF5AHFBMRUNKKXY6VPOT6QITCQMXSB5RDM2LG2IE74HGLDIROANCNE to address OBONCJ4D4WEUYFWRDLZEJOMAN22HWZGZPAEWSPK7S6VOIHDCAFR3ACUSTA, transaction ID: TSTO3YZJAJJFL433VTMWGPEA6FKAEW34JYP32RAM7DCZV7ITIP6Q. Fee set to 1000
Transaction TSTO3YZJAJJFL433VTMWGPEA6FKAEW34JYP32RAM7DCZV7ITIP6Q still pending as of round 2060606
Transaction TSTO3YZJAJJFL433VTMWGPEA6FKAEW34JYP32RAM7DCZV7ITIP6Q committed in round 2060608
ubuntu@ubuntu:~$

 

Note that if you take too long, the operation can time out on the Algod side, requiring you to start over.

Once we have completed the transaction we can view the balances for the account once again:

 

ubuntu@ubuntu:~$ sudo goal account list -d /var/lib/algorand
[offline] Y2I3YF5AHFBMRUNKKXY6VPOT6QITCQMXSB5RDM2LG2IE74HGLDIROANCNE Y2I3YF5AHFBMRUNKKXY6VPOT6QITCQMXSB5RDM2LG2IE74HGLDIROANCNE 98999000 microAlgos
ubuntu@ubuntu:~$

 

You can see that our account that had 100 Algo in it now has 98.999000 Algo in it. 1 Algo was sent to OBONCJ4D4WEUYFWRDLZEJOMAN22HWZGZPAEWSPK7S6VOIHDCAFR3ACUSTA, and there was a 1000 microAlgo transaction fee on top of that getting us to the resulting balance.

 

A multisig transaction example using Algo and an Algorand wallet

A Multisig Transaction Example: 5 Steps to Sending Algo Securely with an Algorand Multisig Account

Multisig transactions are a great deal more secure than single-key transactions, and for good reason: you’re removing a single point of failure, and distributing signing responsibility to a greater number of keys. However, this can add a great deal of effort to the transaction process when it comes time to send your funds to another account.

In this article, I’ll walk you through an offline multisig transaction example using an existing Algorand account (follow the steps in this article if you have not already created an account). While this example shows you how to spend funds, the same steps will apply to registering a participation key, bidding on auctions, and (in the future) voting.

To begin, let’s review all the things we have so far:

  • An online computer with a working Algorand node installation.
  • An “Ubuntu” bootable USB drive, which can be used for an offline computer.
  • A “Keys” USB drive with algokey and a file with the keys for our multisig account
  • A “Transfer” USB drive for transferring files between the online and offline computers

We will use these components to securely send a transaction from our multisig account while keeping our spending keys totally offline. The process will be:

  1. Create an unsigned transaction on the online computer
  2. Move this transaction file to the offline computer
  3. Sign the transaction on the offline computer
  4. Move the signed transaction back to the online computer
  5. Send it to the network

1. Prep Spend Transaction and Save Out to tx File (Online)

The process starts on the online computer. We will prepare an unsigned transaction file that describes the transaction we want to execute. Our transaction will be to send 1 Algo from the multisig account we created in this post to a destination account with address: 5DJNGUEXNRUKAQODHGO3KS2HXOHN4YMSLSZQGEAH5L3WMMDFKMZEQMURUY.

Open a terminal on the online computer and issue the goal node status command:

purestake@algo-node:~$ goal node status
Last committed block: 1630913
Time since last block: 2.7s
Sync Time: 0.0s
Last consensus protocol: https://github.com/algorandfoundation/specs/tree/5615adc36bad610c7f165fa2967f4ecfa75125f0
Next consensus protocol: https://github.com/algorandfoundation/specs/tree/5615adc36bad610c7f165fa2967f4ecfa75125f0
Round for next consensus protocol: 1630914
Next consensus protocol supported: true
Genesis ID: mainnet-v1.0
Genesis hash: wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=
purestake@algo-node:~$

The goal node status command returns information about the node and its view of the blockchain. Make a note of the “Last committed block” value, which we will need when we construct our transaction file. The reason is that transaction files are only valid for up to 1000 rounds or blocks, so we need to specify a validity range with the last committed block as the starting value for the range. The goal clerk send command can be used to create the transaction file:

purestake@algo-node:~$ goal clerk send -a 1000000 -f FHYPIFKSTJTUT4QCR4MJWZUY2Y4URVFHF2HIXGZ4OHEZSIEQ5Q64IIGEMY -t 5DJNGUEXNRUKAQODHGO3KS2HXOHN4YMSLSZQGEAH5L3WMMDFKMZEQMURUY --firstvalid 1630913 --lastvalid 1631912 -o transaction.tx
Please enter the password for wallet 'MyWallet':
purestake@algo-node:~$

The goal clerk command above creates a file called transaction.tx in the working directory with an unsigned transaction that will send 1 Algo from FHYPIFKSTJTUT4QCR4MJWZUY2Y4URVFHF2HIXGZ4OHEZSIEQ5Q64IIGEMY to 5DJNGUEXNRUKAQODHGO3KS2HXOHN4YMSLSZQGEAH5L3WMMDFKMZEQMURUY with a validity range of block 1630913 to 1631912.

Note that the amount is specified in microAlgo as 1000000. The Algorand command line tools generally take Algo amounts in microAlgo, or millionths of an Algo. Be very careful when specifying amounts as arguments to these commands: without commas to create visual separation, it is very easy to make a mistake with an extra or missing zero.
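
If you script transfers with py-algorand-sdk rather than typing raw microAlgo values, the SDK’s conversion helpers can reduce the chance of a misplaced zero; a tiny example:

from algosdk import util

# 1 Algo expressed in microAlgo, as expected by goal and the SDK transaction fields
print(util.algos_to_microalgos(1))           # 1000000

# Converting a raw microAlgo amount back to Algo for display
print(util.microalgos_to_algos(1_000_000))   # 1 Algo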

I used the previously-recorded block height value of 1630913 as the firstvalid argument. To come up with the lastvalid argument value I added 999 to the firstvalid value. Generated transactions can have a maximum validity of 1000 blocks. Blocks are being finalized every ~4.5 sec currently, so this means that the transaction file will be valid for roughly 75 min. This can make timing tricky, depending on the coordination needed to actually sign the transaction. However, you can specify validity ranges out into the future if you need more time to perform the signing action.

Inspect tx file (online)

It is always a good practice to check the transaction file for correctness before proceeding to subsequent steps. The file is a binary file so opening it in a text editor is not useful. But we can use the “goal clerk inspect” command to look at its contents. To inspect the file contents, run the following command:

purestake@algo-node:~$ goal clerk inspect transaction.tx
transaction.tx[0]
{
"msig": {
"subsig": [
{
"pk": "OBONCJ4D4WEUYFWRDLZEJOMAN22HWZGZPAEWSPK7S6VOIHDCAFR3ACUSTA"
},
{
"pk": "P7ZEFUIWTABXLMC77P3DAE5ZMU7BDY3HZ4KF7ZXSPTCYKZ4AOCKGRZTCUE"
},
{
"pk": "JPPERBQVBGKHMKTVZUOQKSZHVDYMC3AYYD6NHT355HEZHZXW5CLNUIMJT4"
},
{
"pk": "GW5J5C2X7L7F2NIWISELS5EQI74Y5W6VDZ2W45NLIYY256EUYLKORY7AJE"
},
{
"pk": "ANQADWSXUDMOHYYOVAKII3COO3KIBBXXLFF2RPSCFIVXQJZOZ76DKR5YPU"
}
],
"thr": 3,
"v": 1
},
"txn": {
"amt": 1000000,
"fee": 1000,
"fv": 1630913,
"gen": "mainnet-v1.0",
"gh": "wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=",
"lv": 1631912,
"note": "y0+1BZ82wxY=",
"rcv": "5DJNGUEXNRUKAQODHGO3KS2HXOHN4YMSLSZQGEAH5L3WMMDFKMZEQMURUY",
"snd": "FHYPIFKSTJTUT4QCR4MJWZUY2Y4URVFHF2HIXGZ4OHEZSIEQ5Q64IIGEMY",
"type": "pay"
}
}

purestake@algo-node:~$

The first section shows that this is a transaction from a multisig account, and the 5 public keys for the multisig account are present. In the bottom section, you can find details about the transaction that we specified on the command line, such as the amount, firstvalid, lastvalid, the destination address, etc. The fee value is the fee for sending the transaction, which currently defaults to 1000 microalgo.

2. Copy tx File to the Air-gapped Machine

The transaction file transaction.tx is ready to be signed. But recall that we don’t have the spending keys on this online computer. The spending keys are on a USB device and will be used for signing on the offline computer. The unsigned transaction file cannot be used to send funds without being signed with 3 of the 5 spending keys associated with the multisig account, so it is reasonably safe to copy this file.

Copy transaction.tx to the “Transfer” USB drive.

As a next step, reboot the computer using the Ubuntu USB drive into an offline state and plug in the Transfer and Keys USB drives.

3. Sign tx File on the Air-gapped Machine (Offline)

Once we are booted to the offline Ubuntu desktop, we will perform the signing action for the transactions. We will create a folder on the Ubuntu desktop called tx and copy into it the algokey, the text file containing the keys, and transaction.tx. To sign transaction.tx, open a terminal to the tx folder on the desktop and issue the algokey multisig command to sign the transaction file:

ubuntu@ubuntu:~/Desktop/tx$ ./algokey multisig -t transaction.tx -o transaction1.tx.signed -m "expire wear husband fancy now until laundry token strong dignity arrow valley post raven pudding farm twin chalk cloud tenant cart off shop abandon trophy"
ubuntu@ubuntu:~/Desktop/tx$ ./algokey multisig -t transaction.tx -o transaction2.tx.signed -m "lucky dust hub crew barely leave gas crew canvas exhibit margin mixed impose air wasp chat athlete sketch ozone humble parent rail remind abandon host"
ubuntu@ubuntu:~/Desktop/tx$ ./algokey multisig -t transaction.tx -o transaction3.tx.signed -m "draft mule stamp run absent congress leopard notice minute hungry fresh physical flee favorite cram green salad promote remember route assume gentle early absorb during"

These 3 algokey multisig commands each perform a private key signing action on the provided transaction.tx that we created in a previous step, as outlined in this blog post. The private key is supplied on the command line as a mnemonic, and each invocation creates a different signed transaction output file, transaction1.tx.signed, transaction2.tx.signed, and transaction3.tx.signed.

4. Move tx Files Back From the Air-Gapped Machine

With the signed transaction files in hand, copy transaction1.tx.signed, transaction2.tx.signed, and transaction3.tx.signed to the Transfer USB, remove the Ubuntu bootable USB and the Keys USB, and reboot the computer back to its regular online mode. Once it is booted, log in and copy the 3 signed transaction files from the Transfer USB to a directory on the computer. In my case, I just put the files in my user’s home directory.

Merge the Signatures Back to a Single tx File

We can inspect one of the signed transaction files using the same goal clerk inspect command that we used before to inspect the unsigned transaction.tx file. Issue the following command:

purestake@algo-node:~$ goal clerk inspect transaction1.tx.signed
transaction1.tx.signed[0]
{
"msig": {
"subsig": [
{
"pk": "OBONCJ4D4WEUYFWRDLZEJOMAN22HWZGZPAEWSPK7S6VOIHDCAFR3ACUSTA",
"s": "M7dVRrm9zmcE0dLkZTMX7JTjk/tsZdIgLn0qQuL9sGDDCnPZfiKRE9kpBYpSyfZ9uWvtCijJzJIInIbtNijRBg=="
},
{
"pk": "P7ZEFUIWTABXLMC77P3DAE5ZMU7BDY3HZ4KF7ZXSPTCYKZ4AOCKGRZTCUE"
},
{
"pk": "JPPERBQVBGKHMKTVZUOQKSZHVDYMC3AYYD6NHT355HEZHZXW5CLNUIMJT4"
},
{
"pk": "GW5J5C2X7L7F2NIWISELS5EQI74Y5W6VDZ2W45NLIYY256EUYLKORY7AJE"
},
{
"pk": "ANQADWSXUDMOHYYOVAKII3COO3KIBBXXLFF2RPSCFIVXQJZOZ76DKR5YPU"
}
],
"thr": 3,
"v": 1
},
"txn": {
"amt": 1000000,
"fee": 1000,
"fv": 1630913,
"gen": "mainnet-v1.0",
"gh": "wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=",
"lv": 1631912,
"note": "y0+1BZ82wxY=",
"rcv": "5DJNGUEXNRUKAQODHGO3KS2HXOHN4YMSLSZQGEAH5L3WMMDFKMZEQMURUY",
"snd": "FHYPIFKSTJTUT4QCR4MJWZUY2Y4URVFHF2HIXGZ4OHEZSIEQ5Q64IIGEMY",
"type": "pay"
}
}

This looks very similar to the unsigned transaction.tx that we inspected before, but note that the first public key (pk) in the top section now has an “s” value. This “s” value is the signature that was created using the private key for that address. It is not the private key itself, but it demonstrates that we had knowledge of the private key. The other 2 files look similar, but have an “s” value for public keys 2 and 3. What we need to do is merge all of these signatures into the same transaction file which we will call transaction.tx.signed. We can do this using the goal clerk multisig merge command like this:

purestake@algo-node:~$ goal clerk multisig merge -o transaction.tx.signed transaction1.tx.signed transaction2.tx.signed transaction3.tx.signed
purestake@algo-node:~$

We now have a merged signed transaction file called transaction.tx.signed in the working directory.

Inspect tx File Before Sending (Online)

Now let’s inspect the resulting merged transaction.tx.signed file:

purestake@algo-node:~$ goal clerk inspect transaction.tx.signed
transaction.tx.signed[0]
{
"msig": {
"subsig": [
{
"pk": "OBONCJ4D4WEUYFWRDLZEJOMAN22HWZGZPAEWSPK7S6VOIHDCAFR3ACUSTA",
"s": "M7dVRrm9zmcE0dLkZTMX7JTjk/tsZdIgLn0qQuL9sGDDCnPZfiKRE9kpBYpSyfZ9uWvtCijJzJIInIbtNijRBg=="
},
{
"pk": "P7ZEFUIWTABXLMC77P3DAE5ZMU7BDY3HZ4KF7ZXSPTCYKZ4AOCKGRZTCUE",
"s": "rjITXvqzQwFWZ5shfXjhkxpcAkPSJquv9s2gLACLljHKnaoYefTGUXjfKZHtGZixFIAGPWr22DMrk/rcdnf8CA=="
},
{
"pk": "JPPERBQVBGKHMKTVZUOQKSZHVDYMC3AYYD6NHT355HEZHZXW5CLNUIMJT4",
"s": "w08AQ3gJr9W8qVmV1HN4o7okFjU/ozWIHGs3kn4cWjRkx/j1xO3wv+bL5X7fFjt208zaFuacE0y6jKIIc2p3DQ=="
},
{
"pk": "GW5J5C2X7L7F2NIWISELS5EQI74Y5W6VDZ2W45NLIYY256EUYLKORY7AJE"
},
{
"pk": "ANQADWSXUDMOHYYOVAKII3COO3KIBBXXLFF2RPSCFIVXQJZOZ76DKR5YPU"
}
],
"thr": 3,
"v": 1
},
"txn": {
"amt": 1000000,
"fee": 1000,
"fv": 1630913,
"gen": "mainnet-v1.0",
"gh": "wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=",
"lv": 1631912,
"note": "y0+1BZ82wxY=",
"rcv": "5DJNGUEXNRUKAQODHGO3KS2HXOHN4YMSLSZQGEAH5L3WMMDFKMZEQMURUY",
"snd": "FHYPIFKSTJTUT4QCR4MJWZUY2Y4URVFHF2HIXGZ4OHEZSIEQ5Q64IIGEMY",
"type": "pay"
}
}

You can see that the first three public keys all have “s” value signatures. We only need signatures for the first three public keys because this is a 3-of-5 multisig and three valid signatures meets the threshold for the account. If we had signed with a fourth or fifth key, this wouldn’t cause any problems, but it isn’t necessary. This transaction is ready to be broadcast to the network.

5. Broadcast tx to the Network (Online)

To broadcast the signed transaction to the network we can use the goal clerk rawsend command:

purestake@algo-node:~$ goal clerk rawsend -f transaction.tx.signed
Raw transaction ID AAG34CUUNSMZJNNRKYV22UNOPFBQB57XPWNEF6D4CGSOT7ZRPSEB issued
Transaction AAG34CUUNSMZJNNRKYV22UNOPFBQB57XPWNEF6D4CGSOT7ZRPSEB still pending as of round 1631216
Transaction AAG34CUUNSMZJNNRKYV22UNOPFBQB57XPWNEF6D4CGSOT7ZRPSEB committed in round 1631218
purestake@algo-node:~$

The transaction id for your transaction will be unique and different than what you see above. Algorand finalizes blocks in under 5 seconds, so you shouldn’t have to wait long for the transaction to broadcast to the network. Once committed, you can check account balances to make sure you see the balance change that is expected. Once used, the transaction file cannot be used again. Sending it again will result in an error.

Conclusion: More Complex, But More Secure, Too

This multisig transaction example shows that the setup of a multisig account and the execution of a transaction are a lot more complicated than using a single spending key account directly on an online computer with an Algorand node installation. But by using a multisig account, we can substantially improve the security of the setup and greatly reduce the risk of the private keys, and thus the funds in the account, being compromised.

The Algorand multisig features can be used to create multiple keys, splitting the secrets needed to spend funds across different people and locations. The exact number, locations, storage, and people will vary according to the environment and situation, but it opens the door to a much more secure setup than a single-key account.

Keeping the spending keys totally offline is another substantial improvement to Algorand account security for high-value accounts. Most of the attack vectors for compromising keys involve online scenarios, malware, or other network exploits. By never having the secrets on an online computer, the risk of key compromise is greatly reduced. Another way to improve the security of an Algorand account is to use a Ledger hardware wallet, which will be the subject of a future blog post.

How to Use Multisig Accounts in Algorand

How to Use Multisig and Offline Keys with Algorand

Multisig accounts and offline keys provide a great deal of added security, but are not always simple to set up. To help you get started, I’ve outlined the steps you will need to take to create a multisig account with Algorand and store keys offline on an air-gapped device.

For this tutorial, you will need at least 3 USB drives:

  1. “Ubuntu” to serve as a bootable Ubuntu USB device
  2. “Key” to hold the algokey binary and private keys
  3. “Transfer” to move transaction files to and from the offline computer

If you are going to store significant funds in the account being created, make sure that the USB drives are new, so there is no chance of any unwanted data or malware on the drives. It may seem excessive to use so many USB drives, but in the case of the private key drive, it is important that it is never plugged into a computer that is on the internet.

While the ideal approach to using multisig keys offline is to have a separate, dedicated laptop or computer, that will not be necessary to complete this tutorial. I will demonstrate how to use a bootable USB device in place of a dedicated offline machine.

Setting Up Your USB Drives

1. Download and Install the Algorand Node Software

On your online computer, download and install the Algorand node software.  This software will be used to interact with the Algorand network. Installation instructions for Algorand node software on different platforms can be found here: https://developer.algorand.org/docs/introduction-installing-node

For my examples, the online computer will be running Ubuntu 18.04 with Algorand installed from the official repositories using the Debian package, following these instructions: https://developer.algorand.org/docs/installing-ubuntu

2. Create an Ubuntu Bootable USB Device

In order to create an Ubuntu bootable USB device, you can follow the instructions below depending on your OS:

Windows: https://tutorials.ubuntu.com/tutorial/tutorial-create-a-usb-stick-on-windows#0

Mac: https://tutorials.ubuntu.com/tutorial/tutorial-create-a-usb-stick-on-macos#0

3. Label Your USBs

When we need to perform sensitive signing actions on our offline computer, we will boot our computer using the bootable USB device you have just created.  Label it “Ubuntu” or something similar to identify it as the bootable USB device.

The second USB device will hold the algokey binary and the private keys that we will use to sign transactions on the offline computer.  Just label this USB drive “Keys” or similar for now. Do not plug this drive into the online computer.  This USB drive should never be plugged into any computer other than the offline computer.  We will put the algokey binary on it in a later step.

The third USB drive will be used to transfer files to and from the online and offline computers.  Label the drive “Transfer” or similar.

4. Copy the Algokey Binary to the Transfer USB

The algokey binary is part of the Algorand node installation, but it is a standalone executable that can be copied to a different computer running the same architecture / operating system without needing to perform a full node installation.

From our online computer that has a node installation based on the Debian package, the algokey binary can be found at /usr/bin/algokey.  Copy the algokey binary to the Transfer USB drive. We will later move algokey to the offline computer and ultimately to the Keys drive, but we will do that once we have the offline computer set up.
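
On a typical Ubuntu desktop the Transfer drive will auto-mount somewhere under /media; here is a minimal sketch of the copy, assuming a mount point of /media/$USER/TRANSFER (substitute whatever path your drive is actually mounted at):

# the mount point below is an assumption -- use the actual path of your Transfer drive
cp /usr/bin/algokey /media/$USER/TRANSFER/
sync   # flush the write before unplugging the drive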

Creating an Algorand Multisig Account (Offline)

Let’s start by creating a new 3-of-5 multisig account that we will use to store Algo securely.

Before we start issuing commands, we need to use the Ubuntu USB drive to boot into our offline computer.  Insert the Ubuntu USB drive into your computer and reboot the machine. You may need to enter the bios of your computer to tell the computer to boot from the USB device.

Once you are booted from the USB, choose “Try Ubuntu” (not “Install Ubuntu”).

After the computer is booted, you will be logged into an ephemeral Ubuntu desktop without networking.  This will be our offline environment for signing transactions.  Create a folder on the desktop that we will be working in. In the examples below, I called mine “tx.” Insert the Transfer USB device and copy the algokey binary to the tx directory.  Open a terminal window in the tx directory, change the permissions of algokey to make it executable, and test running it to make sure everything is working:

ubuntu@ubuntu:~$ cd Desktop/tx
ubuntu@ubuntu:~/Desktop/tx$ ll
total 22472
drwxr-xr-x 2 ubuntu ubuntu       60 Sep 2 10:48 .
drwxr-xr-x 4 ubuntu ubuntu      100 Sep 2 10:48 ..
-rw-r--r-- 1 ubuntu ubuntu 23011080 Sep  2 10:48 algokey
ubuntu@ubuntu:~/Desktop/tx$ chmod 755 algokey 
ubuntu@ubuntu:~/Desktop/tx$ ./algokey -h
CLI for managing Algorand keys

Usage:
  algokey [flags]
  algokey [command]
 
Available Commands:
  export      Export key file to mnemonic and public key
  generate    Generate key
  help        Help about any command
  import      Import key file from mnemonic
  multisig    Add a multisig signature to transactions from a file using a private key
  sign        Sign transactions from a file using a private key

Flags:
  -h, --help   help for algokey

Use "algokey [command] --help" for more information about a command.
ubuntu@ubuntu:~/Desktop/tx$ 

We are going to use the algokey utility to create 5 accounts with 5 associated private keys.  These accounts will later be combined to form one 3-of-5 multisig account. We perform this account creation step on the offline machine so that we can record the 5 secrets securely, and so that these secrets are never online.  We will do this by running the “algokey generate” command 5 times.

ubuntu@ubuntu:~/Desktop$ ./algokey generate
Private key mnemonic: expire wear husband fancy now until laundry token strong dignity arrow valley post raven pudding farm twin chalk cloud tenant cart off shop abandon trophy
Public key: OBONCJ4D4WEUYFWRDLZEJOMAN22HWZGZPAEWSPK7S6VOIHDCAFR3ACUSTA
ubuntu@ubuntu:~/Desktop$ ./algokey generate
Private key mnemonic: lucky dust hub crew barely leave gas crew canvas exhibit margin mixed impose air wasp chat athlete sketch ozone humble parent rail remind abandon host
Public key: P7ZEFUIWTABXLMC77P3DAE5ZMU7BDY3HZ4KF7ZXSPTCYKZ4AOCKGRZTCUE
ubuntu@ubuntu:~/Desktop$ ./algokey generate
Private key mnemonic: draft mule stamp run absent congress leopard notice minute hungry fresh physical flee favorite cram green salad promote remember route assume gentle early absorb during
Public key: JPPERBQVBGKHMKTVZUOQKSZHVDYMC3AYYD6NHT355HEZHZXW5CLNUIMJT4
ubuntu@ubuntu:~/Desktop$ ./algokey generate
Private key mnemonic: primary tone inquiry video bicycle satisfy combine pony capable stamp design cable hub defy soup return calm correct cram buyer perfect swim tone able math
Public key: GW5J5C2X7L7F2NIWISELS5EQI74Y5W6VDZ2W45NLIYY256EUYLKORY7AJE
ubuntu@ubuntu:~/Desktop$ ./algokey generate
Private key mnemonic: seminar screen join potato illegal vacuum predict measure cable reject crazy document edit erosion decline giggle neutral theory orient keen slow walnut reject absorb rain
Public key: ANQADWSXUDMOHYYOVAKII3COO3KIBBXXLFF2RPSCFIVXQJZOZ76DKR5YPU
ubuntu@ubuntu:~/Desktop$ 

A few important things to note.  First, you will get different accounts and secrets when you run “algokey generate.”  DO NOT USE THE ACCOUNTS LISTED ABOVE; they are example accounts created for this tutorial, and most importantly, the spending keys are right here on this webpage.  Anyone reading this post can spend the funds in these accounts or in any multisig account based on them.

Second, note that every time you run “algokey generate,” you get a valid single key account with a public key and a private key.  In Algorand, you will often hear the public key referred to as the address, and the private key as the spending key or mnemonic.

Third, observe that the private key mnemonic has 25 words, which is quite unusual.  In other crypto systems, you will typically see word lists that encode a seed phrase or secret using 12 or 24 words.  Algorand uses 25 words, so make sure you get all 25 words. If you plan to use something like Cryptosteel to store the seed phrase, the 25th word will overflow a single plate, which is designed to hold only 24 words.

The public and private keys need to be securely recorded for your accounts.  One way to do this is to write them down on 5 separate pieces of paper, store them on 5 separate USB drives, etc.  Having a paper backup is a good idea in case the USB drives fail. To store them more securely on a USB drive, they can be saved in a file which is then encrypted using PGP or similar, with the encryption passphrase stored securely and separately from the drive.
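
As an illustration of that encryption step, here is a minimal sketch using GnuPG with symmetric encryption; the filename keys.txt is just a placeholder for wherever you saved the keys:

gpg --symmetric --cipher-algo AES256 keys.txt   # prompts for a passphrase and writes keys.txt.gpg
shred -u keys.txt                               # remove the unencrypted original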

In terms of distribution, you could put the keys in 5 different locations or give them to 5 different people for safekeeping.  There are many ways to securely store these keys, including bank safe deposit boxes, Cryptosteel plates, and other options.

For purposes of this tutorial, I will put all 5 keys in a file on the same USB drive labeled “Keys,” but this is not recommended for production use.  It is important to number the keys 1-5. The order of the keys will be important when we go to set up the multisig account in the next step.

Set Up a Wallet and Multisig Account (Online)

For this step, you need to go back to the online computer with the online Algorand node.  If you are sharing the same computer for both online and offline needs, remove all USB drives and reboot the computer to bring it back to its online state.  Log in to the computer and open a terminal. We will first create a wallet using the goal command:

purestake@algo-node:~$ goal wallet new MyWallet
Please choose a password for wallet 'MyWallet':
Please confirm the password:
Creating wallet...
Created wallet 'MyWallet'
Your new wallet has a backup phrase that can be used for recovery.
Keeping this backup phrase safe is extremely important.
Would you like to see it now? (Y/n): n
purestake@algo-node:~$ 

In the example above, I named the wallet “MyWallet,” but you can name it whatever you want.  I also specified a password for the wallet. The reason I elected not to see the backup phrase is that I do not plan on having any secrets on this online node.  I’m only going to use it for looking at balances and sending transactions which have already been signed elsewhere.

The next step is to use the goal command to create a new 3-of-5 multisig account using the keys we generated in the previous step.  This will add the multisig account to the wallet and let the wallet know what its constituent parts are. But the private keys for the account will not be in the wallet, and the wallet will have no control of, or ability to spend, funds in the multisig account.  By putting the multisig account in the wallet, we can work with it on this node even though we plan to sign its transactions with our spending keys on the offline machine. If you skip this step, transaction files you create for the multisig account on this node will be invalid, because the node won’t know what the multisig account is or what its component parts are.

purestake@algo-node:~$ goal account multisig new OBONCJ4D4WEUYFWRDLZEJOMAN22HWZGZPAEWSPK7S6VOIHDCAFR3ACUSTA P7ZEFUIWTABXLMC77P3DAE5ZMU7BDY3HZ4KF7ZXSPTCYKZ4AOCKGRZTCUE JPPERBQVBGKHMKTVZUOQKSZHVDYMC3AYYD6NHT355HEZHZXW5CLNUIMJT4 GW5J5C2X7L7F2NIWISELS5EQI74Y5W6VDZ2W45NLIYY256EUYLKORY7AJE ANQADWSXUDMOHYYOVAKII3COO3KIBBXXLFF2RPSCFIVXQJZOZ76DKR5YPU -T 3
Please enter the password for wallet 'MyWallet':
Created new account with address FHYPIFKSTJTUT4QCR4MJWZUY2Y4URVFHF2HIXGZ4OHEZSIEQ5Q64IIGEMY
purestake@algo-node:~$ goal account list
[offline]       Unnamed-0 FHYPIFKSTJTUT4QCR4MJWZUY2Y4URVFHF2HIXGZ4OHEZSIEQ5Q64IIGEMY      0 microAlgos [3/5 multisig] *Default
purestake@algo-node:~$

Note that the 5 public keys we created on the offline computer are listed here as arguments to the goal account multisig new command.  Be very careful, as the order of the keys matters. Changing the order of the public keys results in a different multisig address. Recall that we numbered the keys 1-5: always list them in that order, so you will get consistent results.

The “-T” flag specifies the threshold, or how many of the associated spending keys in the multisig account need to sign transactions.  In this case we specify 3, making this a 3-of-5 multisig account. The resulting address of the multisig account is: FHYPIFKSTJTUT4QCR4MJWZUY2Y4URVFHF2HIXGZ4OHEZSIEQ5Q64IIGEMY.  The “goal account list” command confirms that this is a 3/5 multisig account with 0 Algo in it.

Now you have successfully created a 3-of-5 multisig account (albeit with no balance). Next week, I will publish a follow-up tutorial that demonstrates how to sign a transaction with your newly-created multisignature keys, enabling you to spend funds, bid on auctions, and more.

Buy vs. Build Infrastructure-as-a-Service Blog Featured Image

Advantages of Buying Blockchain Infrastructure-as-a-Service vs. BUIDLing It Yourself

In everyday life, we are faced with decisions to either buy readymade solutions or to build something from scratch.

Whether it’s a large purchase like buying a home, or something considerably smaller like choosing between two couches, there are pros and cons to each side. Do you want to stay up until 2:30 in the morning putting together a couch? For the right price, a lot of folks would say, “Absolutely,” while others would say, “No shot!”

With the rise of blockchain as a viable platform, the business community seems to be posed with this question at all levels. Cost, effort, risk, focus, and quality all factor into every decision a company makes, including whether to build the infrastructure that runs these applications and platforms, or to pursue a third-party vendor that offers blockchain infrastructure-as-a-service.

Risk vs. Reward

The allure of blockchain is real: the technology as a whole promises dramatic cost savings (up to 70%!) to banks and financial institutions. Since up to two-thirds of those costs are attributable to infrastructure, it’s imperative to pursue an infrastructure strategy that captures as much of that cost savings as possible.

But blockchain projects can be deceptively costly. A recent report on government-sponsored blockchain projects revealed that the median project cost is $10-13 million.

At first glance, building infrastructure in-house seems like the most cost-effective way to approach the blockchain: there are no licensing fees, and your company is in complete control.

Of course, there are always trade-offs: an in-house infrastructure project is very taxing on your organization’s resources.

Your team must have the time and operational skillset to build out a secure infrastructure that is scalable enough to support your blockchain network of choice. Those skills are hard to come by: blockchain skills are among the most sought-after. Rates for freelance blockchain developers in the US hover around $81-100 per hour, sometimes going as high as $140+ per hour.

It’s easy to underestimate how much time the build will take, and whether your team really has the skills and ability to deliver the offering.

In addition to infrastructure, you’ll also need to build capabilities or secure vendors to address storage needs, network speeds, encryption, smart contract development, UX/UI, and more. Each of those initiatives is going to require additional dedicated budget.

The question then becomes: what kind of advantage does this create? Much of this will depend upon the number of partially or fully decentralized applications (DApps) that you plan to run on it, and how many of them can and will share the same underlying infrastructure.

The ‘aaS’ Revolution

When evaluating your options for a new project, the project planning stage is always tricky. You’ll need to do a full scoping of the project, allocate responsibilities, and create a vendor vetting process.

In the past it was easy: you went out, bought some software and hardware, and got to building. But once the internet made it easier for companies to provide ‘as-a-service’ offerings, it added a layer of complexity for IT and engineering teams as to which options made sense for their project or organization. Salesforce began to displace massive Oracle on-premises implementations. BroadSoft started to displace PBXs and made phone closets a thing of the past. The list goes on and on with applications that replaced their on-prem brethren from years prior, because the ongoing maintenance and upkeep was a headache for IT teams to manage. Why keep all of the infrastructure under your management when you could push all that work onto an infrastructure-as-a-service company’s plate, when their primary focus is supporting that exact technology?

This is great from an IT perspective, but what about for engineers and developers? Don’t they need to be able to store their code and applications locally? Don’t they need to own all of the pieces that tie in to their application? Oh and security, THE SECURITY!

Sorry for being dramatic, but the answer is no. These are all valid concerns; however, many of them can be addressed by working with the right service provider for your needs.

Uber is a great example of leveraging third-party service platforms to create an application. Did they need to go out and create a maps platform? Nope, they use Google for routing and tracking. Did they need to go out and spin up their own messaging and voice servers? Nope, they use Twilio for their communication services. They took a buy-centric approach, which enabled them to focus on their core application and removed the need to build things outside of their core skill set.

How We Apply This to Blockchain

How difficult is it to build? How costly is it to manage? Do we have the skillset to support it? These are all questions that companies ask themselves when looking at making an investment for any kind of infrastructure.

On top of the infrastructure, it only takes a few minutes to realize that DevOps is really hard to do well. Making sure that the investments you’re making align with your team’s skill set is critical for your success. So if you’re looking around, saying “We need to bring in DevOps engineers for our Algorand project,” then HARD STOP! Check out below.

PureStake was created with this exact use case in mind. We provide secure and scalable blockchain infrastructure-as-a-service to help everyone from investors to developers better interact with the Algorand network. We’ve recently launched an API service that will provide an on-ramp to Algorand for any application looking to build on their pure proof of stake network. We offer a variety of subscriptions so that, regardless of size or budget (we have free, and free is good), you’ll be able to utilize our service and start interacting with the Algorand network within minutes.

Teal Windows Background Graphic

Getting Started with the Algorand REST API and the PureStake API Service

Since Algorand’s MainNet launch, PureStake has focused on building and delivering highly performant infrastructure to support early adopters of the Algorand network. Earlier this year, we launched an Algorand infrastructure service offering that delivers relay and participation nodes as a service targeted at early supporters and customers that want to be active participants in the network. However, we found that these services are not ideally matched to the needs of developers building applications on top of the Algorand network. So we’ve released a new Algorand API service that is specifically designed to help developers get started with Algorand REST APIs quickly and easily.

The Need for an Algorand API Service

PureStake’s Algorand infrastructure services are centered around managing the lifecycle of Algorand relay and participation nodes in an automated and secure way. Managed relay and participation nodes make sense for customers that want to — or have an obligation to — support running the network, but don’t fulfill the needs of developers who are writing applications that interact with the Algorand blockchain.

For DApp developers, the nodes are a means to an end — a way of reading data from or sending transactions to the network. They need a simpler alternative to running their own nodes, which can be costly and time-consuming.

The PureStake API service simplifies interactions with the Algorand network by hiding the complexity of running and managing nodes from the user.

Why Running Algorand Nodes Can Be Challenging

Developers always have the option of downloading and running their own nodes. However, running Algorand nodes requires both significant infrastructure investment and the right operational skills.

For example, most development use cases dictate running full archival transaction indexer nodes to achieve the best possible performance for querying transactions. The storage and sync time requirements for this type of node quickly increase as the block height increases. In the case of the Algorand MainNet, which has been live for about two months and has a block height of 1.4M blocks (as of August 2019), a transaction indexer node requires at least 20GB of storage. And since the index database grows as a function of the number of transactions, we can expect the storage growth rate to accelerate as transaction volume on the MainNet increases.
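
If you are running such a node yourself, a rough way to watch this growth is simply to measure the node’s data directory on disk (the path below assumes the Debian-package install used elsewhere in these posts; adjust it if your data directory differs):

# approximate on-disk footprint of the MainNet data directory
du -sh /var/lib/algorand/mainnet-v1.0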

The PureStake API Service Simplifies Interactions with the Algorand REST APIs

The API service is a natural extension of PureStake’s Algorand infrastructure platform that we built to support Algorand relay and participation nodes. Our platform uses an infrastructure-as-code approach to deploy security, networking, cloud configurations, compute, storage, and other elements into cloud data centers in an automated fashion.

The API service, which was built on top of this platform, is spread across multiple cloud data centers and features an API network layer, an API management layer, a caching layer, a load balancing layer, and a node backend layer. Each of these layers is fully redundant and managed/monitored 24×7.

How the API Service Works

The API network access layer is supported by a worldwide edge network with many peering points for request ingress, where requests are then privately routed to one of our POPs. In the POP, the API management layer then handles authentication, authorization, accounting, and further routing of API requests. It checks each received request for a valid API key header, verifies that the request is within the account’s request limits, and applies other security checks. It then logs the request, which is used to power end user-facing features such as endpoint utilization charts in the API dashboard. The management layer then routes API requests to backend services that can handle them.

Some queries will be handled by a high-performance cache. Other queries will be routed to the load balancer layer, which has awareness of the node resources available and routes requests on to an available Algorand node. The node layer has pools of Algorand transaction indexer nodes that can be swapped out and maintained without any downtime. These nodes are patched and updated with the latest Algorand node software as new versions are available.

What is the Difference Between the PureStake APIs and the Algorand REST APIs?

The PureStake API Service supports the official Algorand node API in the same form as exposed by the Algorand node software, which adds consistency and makes it easy to move off our service and back to self-managed nodes if needed. This design choice was an intentional one, since many proprietary APIs create vendor lock-in for their users. The only differences between our API service and the official Algorand REST APIs are the addition of the X-API-Key header that we require to secure access to our service, and the removal of the API that provides metrics about the nodes themselves. Through this approach, our users have the freedom to move between API services and self-managed nodes as needed.

Currently, our API service supports the Algod API, but not the KMD API. The Algod API can be used for querying historical information from the blockchain, getting information about blocks and transactions, and sending transactions. The KMD API, by contrast, is used for wallet management, key management, and signing transactions with private keys. We have intentionally chosen not to expose the KMD API, as we do not want any customer secrets or keys on our servers. However, customers can manage secrets and sign transactions within their applications, and post signed transactions to our API.
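
As an illustration of that last point, a file containing a signed transaction could be submitted through the service with curl. Treat this as a sketch only: the base URL shown is an assumption (confirm the exact endpoint for your network in your dashboard), and YOUR_API_KEY and signed.tx are placeholders.

curl -X POST \
  -H "X-API-Key: YOUR_API_KEY" \
  -H "Content-Type: application/x-binary" \
  --data-binary @signed.tx \
  "https://mainnet-algorand.api.purestake.io/ps1/v1/transactions"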

How the PureStake API Service Impacts the Decentralization of the Algorand Network

An essential property of the Algorand network is its decentralization. PureStake is a centralized company providing a centralized service to access that decentralized network. At first glance, it may seem like a centralized service could threaten the decentralized nature of the network (particularly if all or most of the access to the Algorand network happens through the service). Similar concerns have been raised in the Ethereum community in relation to the large number of applications relying on the Infura service to access the Ethereum network. While it may seem counterintuitive, centralized services can actually serve to support and promote the best interests of decentralized networks such as Algorand.

The first thing to point out is that this decentralization risk is not a design or protocol-level risk. No one is forced to use the service. Anyone using the service can leave and run their own nodes at any time.

In fact, there is no reason decentralization can’t proceed normally with lighter-weight nodes. Algorand is going to great lengths to make sure nodes supporting the consensus mechanism do not have large storage or other infrastructure requirements. So, if someone just wants to get current account balances, submit transactions, and support consensus, they can do this with non-archival participation nodes that have much lower requirements and may not require a service provider. In addition, the upcoming Vault improvements to Algorand will greatly reduce the sync time for participation nodes as well. It is specifically the developer use case that lends itself to larger infrastructure footprints and a service provider approach.

Secondly, Algorand needs developers, applications, and utilization of the network to be successful in the long term. The PureStake API service makes the on-ramp for developers substantially easier and will help grow the utilization and traction of the network. While there may be a hypothetical form of centralization risk in the future if the service is wildly successful, this possible future risk is far outweighed by the direct benefits to the Algorand community in helping to get traction that drives transaction volume and network utilization. In a future with more developers, applications, and network utilization, we expect competitive developer-oriented services to enter the market, which will continue to fragment the market.

Future Expansion and Long-Term Vision for the API Service

Support for the base Algorand node API is the first step for our API service. In the future, potential enhancements could include:

Additional Query-Optimized Data Stores: Taking Algorand block and transaction data and loading it into relational or NoSQL datastores opens possibilities for much more performant queries across the historical data set. These optimized data stores could be used to improve the performance of node API requests or to power net-new APIs.

Eventing Infrastructure: The idea would be to provide support for subscriptions for certain types of events, and to receive callbacks whenever they occur. DApp developers frequently implement these backend infrastructural features to improve the performance of their applications.

Getting Started with the PureStake API Service

Users can register for a free account at https://developer.purestake.io/.

Once logged into the API Services application, users will have access to an API key that is unique to their user account in their dashboard. This API key needs to be added as the value of the X-API-Key request header for any API requests being made to the PureStake service.
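
As a quick sanity check that your key works, you can hit the node status endpoint with curl. Again, this is a sketch: the base URL shown is an assumption, so confirm the exact endpoint in your dashboard, and replace YOUR_API_KEY with the key from your account.

curl -H "X-API-Key: YOUR_API_KEY" \
  "https://mainnet-algorand.api.purestake.io/ps1/v1/status"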

Once you have logged in, there are examples of how to do this and how to use the API at https://developer.purestake.io/code-samples, as well as in our GitHub repo: https://github.com/PureStake/api-examples.

Do you have a question or an idea for a useful enhancement to the API service? Feel free to reach out to us!

Participation Keys in Algorand Blog Banner Image

Participation Keys in Algorand

What Are Algorand Participation Keys?

In Algorand, there are 2 types of nodes: relay nodes and participation nodes. Relay nodes serve as network hubs in Algorand, relaying protocol messages very quickly and efficiently between participation nodes. Participation nodes support the consensus mechanism in Algorand by proposing and validating new blocks. Participation keys live on participation nodes and are used to sign consensus protocol messages.

A participation key in Algorand is distinct and totally separate from a spending key. When you have an account in Algorand there is an associated spending key (or multiple keys in the case of a multi-sig account). The spending key is needed to spend funds in the account. A participation key, on the other hand, is associated with an account and is used to bring stake online on the network. Importantly, participation keys cannot be used to spend funds in the associated account; they can only be used to help support the consensus protocol.

Participation Keys Are Good

Having distinct keys for spending the Algo in an account and for staking that same Algo results in several key security improvements.

In any crypto network, protecting the spending keys is of the utmost importance. Situations that require having spending keys on an internet-connected computer are inherently dangerous and always carry the risk of loss of funds.

In Algorand, the spending key never has to be online. The spending key can be kept on an airgapped computer or other offline setup and only used for signing transactions offline. The participation key, in contrast, lives on the participation node and signs protocol messages, but the participation key cannot spend any funds in the account.

This separation of duties across two different keys substantially improves the security of Algorand infrastructure. Spending keys can always be kept totally offline, and an attacker who manages to compromise an internet-connected participation node cannot spend or steal any of the funds in the associated account.

Of course, this doesn’t mean that participation keys shouldn’t be highly protected and secured. If an attacker does compromise a participation key, they can stand up a second participation node with the same participation key. This will result in protocol messages being double-signed, which the network will see as malicious behavior and will treat the node / associated stake as offline.

There is no bonding or slashing in Algorand, and staking rewards are still coming in the future, but regardless: being forced offline due to double signing is undesirable and means that the stake in question will no longer be supporting the consensus mechanism.

Participation Key Mechanics

My examples assume Algorand Node v1 software is installed and running in a participation node configuration on the Algorand MainNet. The software is installed using the Debian package on Ubuntu 18.04, with a standard non-multi-sig Algorand account with some Algo in it, and a separate offline computer with the spending key for the account.

To create a participation key, you will need to use the “goal account addpartkey” command and specify the account that you want to create the participation key for and a validity range:

goal account addpartkey -a WHNXGKYOVIQADYS4VTYBG6SGWFIG6235C5LMXM76J3LHE475QJLIHUC5KY --roundFirstValid 789014 --roundLastValid 4283414

A few things to note. The account specified in the -a flag in the command above (WHNXGKYOVIQADYS4VTYBG6SGWFIG6235C5LMXM76J3LHE475QJLIHUC5KY) is made up and you would need to replace it with your account. Do not use this account as it, and the associated spending key, are not real. Any funds sent to this address will be permanently lost.

The validity range is specified in rounds. Rounds are equivalent to blocks in Algorand. So if you, for example, want to have a key that is valid from now until a point in the future, you need to find the current block height for the roundFirstValid and a future block height for the roundLastValid flag corresponding to the validity range you want.

To find the current block height you can use the “goal node status” command:

derek@algo-node:~$ goal node status
Last committed block: 789014
Time since last block: 2.4s
Sync Time: 0.0s
Last consensus protocol: https://github.com/algorandfoundation/specs/tree/5615adc36bad610c7f165fa2967f4ecfa75125f0
Next consensus protocol: https://github.com/algorandfoundation/specs/tree/5615adc36bad610c7f165fa2967f4ecfa75125f0
Round for next consensus protocol: 789015
Next consensus protocol supported: true
Genesis ID: mainnet-v1.0
Genesis hash: wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=

The last committed block, which is the same as the current block height, is reported as 789014, so we use that for our roundFirstValid. Figuring out the right value for the roundLastValid is a little more involved.

First, you have to determine what time range you want. It is a good practice to rotate participation keys and not to create a key with a really long validity range. In our example, we will use a time range of 6 months. What round corresponds to 6 months from now?

To figure that out, we have to do a little math. 6 months is approximately 182 days. So 182 days x 24 hours / day x 60 min / hour x 60 sec / min = 15724800 seconds. At the time of writing, each round in Algorand takes about 4.5 sec. So 15724800 seconds / 4.5 seconds per block = 3494400 blocks. Now we need to add 3494400 to the current block height to get the height 6 months from now. E.g. 3494400 + 789014 = 4283414. This is where the 4283414 in the command above comes from for the roundLastValid.

As the network grows, the 4.5 second block time may not be a safe assumption, which could make the actual validity range somewhat different from 6 months. You need to monitor key validity and make sure to put a new key in place before the old one expires.
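
The same arithmetic, expressed as a small shell sketch (the current round of 789014 and the ~4.5 second block time are the assumptions to replace with your own values):

CURRENT_ROUND=789014                            # from "goal node status"
SECONDS_IN_6_MONTHS=$((182 * 24 * 60 * 60))     # 15724800
BLOCKS=$((SECONDS_IN_6_MONTHS * 10 / 45))       # divide by 4.5 s per block => 3494400
echo "roundLastValid: $((CURRENT_ROUND + BLOCKS))"   # 4283414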

Once the addpartkey command has executed, you can find the participation key at:

/var/lib/algorand/mainnet-v1.0/WHNXGKYOVIQADYS4VTYBG6SGWFIG6235C5LMXM76J3LHE475QJLA.789014.4283414.partkey

It’s beyond the scope of this article, but this file is actually a SQLite database containing a number of keys that are rotated through automatically during the validity window. This is an additional security measure in Algorand, where the keys used to sign protocol messages are rotated as rounds progress.

With the participation key created, the next step is to bring the account online. An account being online in Algorand means that the Algo in the account is supporting the consensus mechanism. We bring an account online by using the “goal account changeonlinestatus” command. Note that this action requires a small amount of Algo in the account to pay for the transaction. If you have the spending key for the account directly on the participation node, you can simply run this command:

goal account changeonlinestatus -a WHNXGKYOVIQADYS4VTYBG6SGWFIG6235C5LMXM76J3LHE475QJLA -o=1

However, having the spending key on the participation node is not recommended and kind of defeats the whole purpose of having participation keys in the first place. It is much better to have an airgapped and totally offline computer that has the spending key on it. The process is a little more involved with this setup, but it is much more secure. With this setup you would issue the following command instead:

goal account changeonlinestatus -a WHNXGKYOVIQADYS4VTYBG6SGWFIG6235C5LMXM76J3LHE475QJLA -o=1 -t online.tx

This will produce a transaction file called online.tx in the current directory which has an unsigned transaction to bring the account online. This transaction file then needs to be securely moved to the airgapped computer with the spending key on it. Once on the airgapped computer you can use the algokey utility to sign the transaction file. The command would be:

algokey sign -k spendingkeyfile -t online.tx -o online.tx.signed

Note that algokey is standalone and does not need a running Algorand node. Also, the spendingkeyfile is the file that has the spending key for the account. This file can be created by algokey when you first set up your account.

There is also an option to specify the spending key mnemonic instead of a file, but I find this option worse as it leaves the mnemonic in the shell history, etc. The result of this command is that online.tx.signed will be created in the current directory. This file contains the signed online transaction and it needs to be securely moved back to the running participation node.

Once you have online.tx.signed back on the participation node you can send it to the network with the following command:

goal clerk rawsend -f online.tx.signed

Wait a little bit for the transaction to be processed, and your account should now be online. Creating a transaction file, moving it to the airgapped machine for signing, moving the signed transaction back to the online node, and then sending it to the network is the general pattern for sending transactions in Algorand without ever putting your spending key online.
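
Putting the pieces together, the whole offline-signing flow from this example looks like this (the same commands shown above, gathered in one place; the account address is the made-up placeholder used throughout this post):

# 1. On the online participation node: write the unsigned transaction to a file
goal account changeonlinestatus -a WHNXGKYOVIQADYS4VTYBG6SGWFIG6235C5LMXM76J3LHE475QJLA -o=1 -t online.tx

# 2. Move online.tx to the airgapped machine and sign it there with the spending key file
algokey sign -k spendingkeyfile -t online.tx -o online.tx.signed

# 3. Move online.tx.signed back to the online node and broadcast it
goal clerk rawsend -f online.tx.signed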

Final Thoughts on Participation Keys in Algorand

The design of Algorand using separate keys for spending funds and for participating in network consensus improves the security of nodes running on the Algorand network substantially by protecting spending keys and removing the need for them to ever be online. I think this was a good design choice and wouldn’t be surprised if other protocols adopt this approach.