
Blockchain Scalability Trade-offs: How to Choose a Platform

Blockchain scalability is one of the most frequently cited reasons that blockchain adoption isn’t happening faster.

The truth is, most blockchain networks are slow when compared to traditional centralized technologies and networks like AWS or the Visa network. Ethereum, for example, processes on the order of 15 transactions per second, and Bitcoin is even slower.

On the other hand, blockchain networks have unique properties, such as native digital scarcity and unstoppability, that cannot be achieved in the same way with centralized approaches.

As developers continue to experiment and iterate on novel uses of these properties in decentralized software applications, popular platforms are hitting up against scalability and transaction limits. In this context, blockchain scalability is widely seen as a limiting factor in further blockchain adoption among software developers and end users.

In response to these blockchain scalability issues, the community has poured a lot of energy into developing scalability solutions at Layer 2 (e.g. Lightning, Plasma) and migrating existing platforms to faster consensus mechanisms (e.g. Ethereum 2.0). These solutions are all interesting in their own right, but I would argue that they are less important for scalability than design decisions in the protocol itself, and the use cases which are being targeted by those designs.

Developers who are investigating which decentralized platform to build on should carefully consider their requirements and the design goals of the platform they are choosing relative to those requirements. Two important trade-offs that determine the achievable level of scalability for your application are the level of decentralization and the level of programmability you require. Not every application needs maximum decentralization or programmability.

The Top Two Trade-offs to Blockchain Scalability

Level of Decentralization

The Blockchain Trilemma

For those who are unfamiliar with the famous blockchain trilemma, it states that when designing a blockchain protocol, you can only have two of these three properties: security, decentralization, and scalability. The insight is that it is hard to achieve all three simultaneously.

From my point of view, the most interesting use cases for blockchains are related to the storage and movement of value, so any significant sacrifices to security seem like a non-starter.

The key point behind the trilemma is the relationship between decentralization and scalability. It is easy to achieve scalability if you sacrifice decentralization. For example, traditional deployments using AWS infrastructure achieve very high scalability in a centralized model. But in this deployment model, the key properties that make blockchains interesting — namely, native digital scarcity combined with unstoppability — disappear.

Some projects use this relationship to their advantage. Take the example of EOS. In the EOS network, there are 21 block-producing nodes. This is far fewer than Ethereum or Bitcoin. By becoming more centralized, EOS achieves far higher transaction throughput than either Ethereum or Bitcoin. Twenty-one nodes isn’t totally centralized, but it is much more centralized than many other blockchain networks.

EOS’s goal is to be decentralized enough to keep the most interesting blockchain properties intact, but centralized enough to achieve significantly higher performance than competing blockchain networks. The question to ask yourself as a DApp developer is, what level of decentralization does your use case require? How worried are you about censorship of your application? Many high-value-oriented use cases may require higher levels of decentralization; for others, it may not matter.

Level of Programmability

The level of programmability that is supported by a blockchain protocol is at least as important a factor for blockchain scalability as decentralization. The key question is, what use cases are your applications going after, and what level of on-chain logic do you need to achieve your goals?

I can think of a range of applications with varying needs for platform programmability, running from money and asset transfer use cases on one end of the spectrum to Web3/decentralized applications on the other. A money transfer application, a securitized tokenization platform, or a marketplace for digital collectibles may be perfectly well served by a platform that supports a rich set of asset transfer use cases. A DAO or decentralized exchange that needs extensive on-chain logic, on the other hand, would require a fully Turing-complete programming environment.

All other things being equal, platform scalability degrades as you move toward the more programmable end of this spectrum. Bitcoin may seem like an exception to this rule: it sits firmly on the “money” end of the spectrum, yet it isn’t scalable, largely because of its highly decentralized design. The point is that Bitcoin would be even less scalable if it had full Turing-complete smart contracting functionality.

Platform scalability also decreases as you add Turing-complete smart contract functionality. With Turing-complete smart contracts, you need a gas concept to meter contract execution and produce deterministic behavior, which adds fees and operational costs to your application. You also need to allow smart contracts to store arbitrary state or data on the chain, which leads to more fees and increased storage requirements for the underlying blockchain nodes. Most smart contract platforms have a single one-size-fits-all virtual machine for all contracts on the platform, which can also become a scalability bottleneck. All of these factors reduce the scalability and throughput of Turing-complete smart contract platforms.

Opposite Ends of the Blockchain Scalability Spectrum: Algorand and Ethereum

Let’s take a couple of concrete platform examples to illustrate the point.

Algorand Prioritizes Performance Over Turing-Complete Programmability

Algorand is a high-performance, next-generation blockchain that focuses on the money and asset end of the design spectrum, and is also highly decentralized (unlike EOS). It achieves very high performance and transaction throughput by focusing on money, asset, and asset transfer use cases and doing them well.

Algorand’s new scripting language, TEAL, is intentionally not Turing-complete, avoiding the gas fees, arbitrary storage, and infinite loops that come along with Turing-complete smart contract platforms. These specific choices allow it to achieve high performance for money, asset, and asset transfer scenarios. This is why applications supporting these use cases with high throughput requirements, such as Tether and Securitize, are porting their applications to run on Algorand.

However, if your use case requires Turing-complete on-chain logic, such as a DAO, an on-chain DEX, or a decentralized protocol sitting above the base protocol layer like Compound or 0x, then the level of programmability in Algorand isn’t as good a fit.


Ethereum Prioritizes Turing-Complete Programmability Over Performance

By contrast, Ethereum is the most prevalent full smart contract platform. Ethereum puts programmability first, at the expense of throughput and scalability.

Scalability issues on Ethereum result from the arbitrarily complex logic execution and arbitrarily large storage that come along with its smart contracts. Logic and storage are metered by gas fees, and these fees have increased over time as the number of smart contracts has grown. The blockchain on an Ethereum full node (leaving archival nodes aside for now) takes over 100 GB of storage, and that number is growing. Ethereum has a small fraction of the scalability of Algorand from a transactions-per-second and node storage perspective.

However, what you get in return is the ability to express arbitrary smart contract logic in the Ethereum Virtual Machine. And since all the programs run in the same virtual machine, you get composability between different contracts, which is driving a lot of advances in the DeFi space right now.

As a DApp developer, you should carefully consider which parts of your application need to be decentralized and on-chain, and whether you really need a full Turing-complete smart contract language to build your application. Turing-complete smart contract platforms allow for arbitrary expressions of on-chain logic, but often at a cost in scalability and throughput.

Choosing the Right Platform for Your Application

The scalability of the underlying platform is a critical part of choosing where to build. Even if your application itself doesn’t need high levels of scalability, a platform’s scalability challenges inevitably lead to high transaction fees, which will increase your operating costs as an application project and may make certain use cases economically unviable.

You should carefully consider the level of decentralization and programmability you need to support your use cases when choosing a platform. For a use case that requires only an ERC-20 contract on Ethereum, but does not require interoperability with other Ethereum smart contracts, it may make sense to look at platforms optimized for that scenario, such as Algorand. On Ethereum, you could be paying a lot more without realizing any benefit from the increased programmability.



Accelerating Performance of the Algorand Node API

It’s been a few months since we launched our Algorand API as a Service. In that time, we have gotten a lot of great feedback on what is working well and what still needs more work.

One thing we have heard, and observed ourselves, is that certain operations time out and don’t return any data. Digging into this, we found that one of the queries we provide has highly variable performance. Our API service fronts pools of Algorand nodes, and calls are serviced by the Algorand node REST API on those nodes. The performance issues we are seeing are issues with a specific query to the node REST API itself.

The /v1/account/{address}/transactions Endpoint

The REST endpoint that has the performance issues is /v1/account/{address}/transactions. This query is only available if you run a full archival indexer node.

The purpose of this endpoint is to query the transaction history for a given account. It is useful from a developer’s point of view: for example, you could use it to populate a transaction history when displaying an account in a wallet or other application.
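For illustration, here is a minimal sketch of calling this endpoint from Python with the requests library. The base URL and the API-key header name are placeholders rather than actual service values; substitute the details from your own node or API service account:

# Minimal sketch of querying an account's transaction history.
# BASE_URL and the X-API-Key header name are illustrative placeholders.
import requests

BASE_URL = "https://your-algorand-api.example.com"  # placeholder endpoint
API_KEY = "your-api-key"                            # placeholder credential
ADDRESS = "EWZYOHWLR2C44MDIPNZMOGZMAAY66BWI2ALGJB2NE22TWO7YGNCS7NTFVQ"

resp = requests.get(
    f"{BASE_URL}/v1/account/{ADDRESS}/transactions",
    headers={"X-API-Key": API_KEY},
    timeout=30,
)
resp.raise_for_status()
for txn in resp.json().get("transactions", []):
    print(txn)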

Sometimes this query returns quickly, and sometimes it times out. By default, the query has a maximum result set of 100. The current behavior is that, given an account, the indexer node starts walking backwards from the head of the chain, looking for transactions that meet the query constraints to fill its result bucket.

This becomes problematic for accounts with relatively low transaction volume. If there has been a lot of transaction activity on the account, the query reaches the result set limit quickly and returns. If there has not, it keeps walking backwards, all the way to genesis, looking for transactions that meet the query criteria. In this second, low-activity scenario, the query generally times out, and it has the side effect of making the node unresponsive to other queries.
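To make the failure mode concrete, here is a simplified Python sketch of the walk-back behavior described above. It is illustrative only, not the actual indexer implementation, and the blocks mapping and transaction fields are hypothetical stand-ins:

# Illustrative sketch of the indexer's backwards walk (not the real code).
# `blocks` is a hypothetical mapping of round number -> list of transactions,
# each transaction being a dict with "from" and "to" fields.
def account_transactions(blocks, address, head_round, max_results=100):
    results = []
    current = head_round
    # Walk backwards from the head of the chain. For a low-activity account,
    # this loop can run all the way to genesis before the result bucket
    # fills, which is what causes the timeouts described above.
    while len(results) < max_results and current > 0:
        for txn in blocks.get(current, []):
            if address in (txn["from"], txn["to"]):
                results.append(txn)
        current -= 1
    return results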

This second scenario is problematic, since users are actively hitting this endpoint, which starts taking nodes in our node pools offline for periods of time. While we have plenty of capacity in our node pools, if enough of these queries were received in rapid succession, we could suffer an outage due to a kind of denial of service situation.

Even for accounts that have a large transaction volume, the query becomes non-performant, and the service suffers, when parameters restricting the scope of the search are used, for instance fromDate and toDate.

Improving the Endpoint with a Backend Datastore

First, it is important to recognize that the node can’t be optimized for every situation. It already performs a variety of different roles including supporting consensus, relaying, etc. However, given the current behavior of this query, we must provide an improved path to our customers so they can reliably retrieve transaction data.

We have decided to replace the backend handler for this query with a datastore that is optimized to return results much more quickly and efficiently.

This backend datastore is based on AWS Aurora and includes a set of AWS lambda data management routines to keep this datastore reliably in sync with the Algorand TestNet, MainNet, and BetaNet. Queries coming into this particular endpoint will be serviced by this new datastore. All other queries will remain serviced by our node pools.


Philosophical Questions

The direct node API represents the truth in terms of the state of the Algorand blockchain. The downside to servicing this query with an alternate, third-party datastore is that there is a chance the third party will return the wrong data due to a bug or other problem with the infrastructure.

At PureStake, we spent some time debating this situation. We could have just turned off the endpoint, but that didn’t seem like a good solution, and certainly not helpful to developers trying to build on Algorand. After all, this is a useful query to have available for a variety of use cases.

Additionally, we had to consider that the way the query works against the node isn’t necessarily what most developers want. Even for the scenarios where the query returns — accounts with high transaction activity — you only get the last 100 transactions, not all of them.

In the spirit of serving the intent behind this endpoint, we decided to move ahead with offering a more performant version of this query, while keeping it compatible with the available SDKs. However, we will clearly distinguish between queries serviced by the node and ones serviced by other infrastructure we run. We will mark this particular query, in the response headers and in our portal, as being serviced differently from the other node API queries. We feel this approach strikes a good compromise and helps developers building on Algorand.

Comparative Performance of the New Endpoint

To take a concrete example, the following query, when run against a TestNet indexer node, will time out (on a reasonably spec’d machine):

GET /v1/account/EWZYOHWLR2C44MDIPNZMOGZMAAY66BWI2ALGJB2NE22TWO7YGNCS7NTFVQ/transactions

The account EWZYOHWLR2C44MDIPNZMOGZMAAY66BWI2ALGJB2NE22TWO7YGNCS7NTFVQ has 2 transactions in its history, fewer than the 100-result query limit, so the node starts walking back towards genesis looking for transactions. The query times out after 30 seconds for API users, but continues running on the node until complete; in a test for this article, the query was manually stopped after 25 minutes. I/O on the node goes very high while the query runs, consuming the allotted IOPS burst capacity and eventually locking the machine up.

Backed by the new datastore, queries to this endpoint generally return in under 1 second, or in 3 seconds or less from a cold Lambda start.

Next Steps and Future Direction

It isn’t possible for the node to achieve high performance for all queries.

PureStake’s new datastore will be the basis of a new set of query-optimized endpoints offered alongside the node-based APIs. These APIs are well-suited to certain types of applications, such as explorers and wallets. Users will have the choice of the node-backed APIs or the query-optimized APIs, depending on their preferences.

PureStake will likely continue to create different kinds of optimized datastores over time to support different types of queries and use cases. A denormalized data warehouse is another obvious optimization, for aggregate and time-series queries that aren’t possible with the current node API.

We may remove our query-optimized endpoint for this specific node API in the future, if the performance and behavior of the underlying node API changes.

Are there other APIs or features you would like to see added to the PureStake API services application? Reach out to us and let us know.



Choosing a Platform: A Comparison of Ethereum vs Polkadot

Polkadot is one of the most highly anticipated next-generation, developer-focused blockchains. This comparison with Ethereum, the most widely adopted developer-oriented chain, is meant to help newcomers to the networks understand the differences between the two, and may help developers choose which one to build on.

At a high level, the two projects are only partially overlapping. Ethereum is a platform for deploying smart contracts, or pieces of logic that control the movement of native assets or state on the single Ethereum chain. In contrast, Polkadot aims to provide a framework for building your own blockchain and an ability to connect different blockchains with each other. Despite these differences, both platforms are designed for developers to build decentralized applications.

Despite Similarities, Very Different Strengths

In terms of similarities, both Ethereum and Polkadot aim to provide a space where developers can create decentralized applications. Both platforms include smart contract functionality, based on Solidity for Ethereum and Ink! for Polkadot. If we look forward to Ethereum 2.0, both platforms are pursuing a scaling strategy based on parallelized execution. Each thread of execution is called a shard in Ethereum 2.0, and a parachain or parathread in Polkadot. Both Ethereum 2.0 and Polkadot will use Wasm as an underlying technology to power on-chain logic and state transitions.

There are, however, important differences between Ethereum and Polkadot.

One of the biggest differences is design goals. Ethereum aims to be a platform for distributed finance and smart contract execution, whereas Polkadot has a vision of helping people build entire blockchains and integrating these blockchains with each other.

I have attempted to summarize what I consider some key points of difference below:

                        | Ethereum 1.0                                    | Ethereum 2.0                                            | Polkadot
Architecture            | Single chain                                    | Multiple chains (shards)                                | Multiple chains (parachains, parathreads)
Backend Development     | Solidity (JavaScript-like), Vyper (Python-like) | Solidity (JavaScript-like), Vyper (Python-like)         | Rust, Substrate Framework
Execution Environment   | Single VM                                       | Multiple homogeneous shards                             | Multiple heterogeneous parachains
Composability           | Smart contracts call each other synchronously   | Synchronous within a shard, asynchronous between shards | Synchronous within a parachain, asynchronous across parachains
Governance              | Off chain                                       | Off chain                                               | On chain (e.g. Democracy, Council, Treasury modules)
Consensus Mechanism     | Ethash Proof of Work                            | Casper Proof of Stake                                   | BABE/GRANDPA Proof of Stake
Program Execution Fees  | Per-call gas/metering-based                     | Per-call gas/metering-based                             | Market cost for a parachain slot with unlimited usage, or per-call parathread fees
Status (as of Nov 2019) | Live since 2015                                 | To be released in phased milestones through 2021        | MainNet launch expected in Q1 2020


Ethereum: Large & Thriving, But Hitting Scalability Challenges

Ethereum’s key strength is its large and established ecosystem of developers, users, and businesses, including its rich set of developer tools, tutorials, and more. It already enjoys significant network effects from this ecosystem, making it the de facto smart contract platform to develop on. Ethereum standards, such as ERC-20, in many cases become industry standards.

The value of the Ethereum network is similarly significant, providing a high degree of economic security based on the value of the underlying Ether token. The DeFi space, one of the areas of crypto with the most developer traction, is largely built on Ethereum and leverages the composability between different Ethereum smart contracts, which can call each other within the single Ethereum Virtual Machine that powers Ethereum 1.0.

The key challenge facing Ethereum is scalability. The success of the CryptoKitties application demonstrated some of the scalability limits that affect Ethereum 1.0. One popular application was able to significantly degrade the performance and throughput of transactions on the network.

Another challenge is the gas cost required to run smart contracts on the platform. Gas fees are required for the security of the system overall, and to protect the system from being stalled by runaway programs. But as the value of Ether has risen, gas fees for running smart contracts have also risen, making certain use cases prohibitively expensive. These costs tie back to scalability: if there were more capacity, the fees for each transaction could be lower.

Ethereum 2.0 aims to solve these scalability issues, but it is a multi-year roadmap, with the execution risk that comes with any multi-year re-platforming initiative. Most of the Ethereum core development energy is going into Ethereum 2.0, which leaves little for upgrades and improvements to the existing Ethereum 1.0 chain.

Polkadot: Built on a Flexible Framework, But It’s New and Unproven

Polkadot’s greatest strength is Substrate. Substrate is a development framework for creating Polkadot-compatible blockchains, offering different levels of abstraction depending on developer needs. Polkadot is itself built using Substrate. It dramatically reduces the time, energy, and money required to create a new blockchain.

Substrate provides a much larger canvas for developers to experiment on, as compared to smart contract platforms like Ethereum. It allows for full control of the underlying storage, consensus, economics, and state transition rules of the blockchain, things which you generally cannot modify on a standard smart contract platform.

The design of Polkadot, which allows for shared security within its network, is another strength. Shared security has two key benefits.

First, it reduces the burden on parachain builders by providing security-as-a-service from the relay chain. This is different than the approach taken by other networks such as Cosmos, where each zone is fully responsible for its own security. This shared security simplification lowers friction for builders and simplifies the process of launching a new parachain.

Second, shared security provides a framework for parachains to talk to each other, which will ultimately allow parachains to specialize. It reminds me of the old Unix philosophy: create tools that do one job and do it well, then achieve higher-order goals by combining these purpose-built tools. I can see something similar happening in the Polkadot ecosystem. This is the power of the Polkadot design that should create strong network effects on the network.

To mirror the old real estate saying, the top three challenges for Polkadot are, in my mind: adoption, adoption, and adoption. Ethereum has a dominant position and the largest developer community of any developer-oriented platform. Further, a lot of new platforms coming to market are looking to compete with Ethereum and gain developer mindshare.

At present, there are only so many developers to go around. We are in a situation where there are more developer platforms than there are developers to support and build on them. The real challenge for Polkadot is getting enough traction and building enough of an ecosystem and developer community for the network effects of their architecture to start to kick in.

How to Choose

In summary, if you are a developer researching these two platforms for your decentralized application, it is a little bit of an apples-and-oranges comparison.

Building on Ethereum is a safe choice and makes sense if your application can be expressed easily as a smart contract, if your use case is affordable in terms of gas fees, if you don’t need a large amount of transaction throughput or control over the underlying economics of your system, or if you need interoperability with other Ethereum ecosystem projects at launch. Development on Ethereum is generally going to be simpler than Polkadot.

If, on the other hand, your application is best served by a dedicated blockchain, needs higher transaction throughput, requires full control of the environment, state transition function, storage, and economics it runs under, or has use cases that require integration across blockchains, and you are okay with higher implementation complexity, then Polkadot will satisfy these requirements.

Have KSMs on Kusama? Nominate PureStake via This Address:

GhoRyTGK583sJec8aSiyyJCsP2PQXJ2RK7iPGUjLtuX8XCn


11 Days Validating on Kusama: First Impressions & Emerging Power Dynamics

It has been 11 days since we joined the active validator set for Kusama, and I wanted to share some initial thoughts on the experience in case this is helpful to other validators, nominators, or other participants thinking about engaging with Kusama or Polkadot.

The first impression to convey is that interest in Kusama and Polkadot is high. Currently, 140 validators have signaled their intent to validate. Since Kusama switched from PoA to PoS, moving block production from a limited set of Web3 Foundation-run validators to a decentralized validator set, the number of validator slots has been incrementally increased from 20 to 50, 60, 75, and now 100. At no time have there been any empty slots, and there are currently 40 validators waiting for an opportunity to validate.

This stands in contrast to many other projects that have struggled to recruit enough competent validators to launch their networks. This is a really good sign for Polkadot as they near their MainNet launch.

Parity Team Actively Addresses Bumps in the Road

The process of launching Kusama has flushed out issues, and continues to do so.

There have been several point releases (0.6.7, 0.6.8, 0.6.9), including one, 0.6.8, with an issue that led to database corruption for some validators. Performance issues are actively being worked on now, which will undoubtedly lead to more releases. Some validators have been slashed or removed from the active set, either due to issues with the software or failure to run their nodes properly.

However, the number of issues has been relatively small, and in each case, the Parity team has been very responsive in diagnosing and fixing them. All things considered, for a system as complex as Kusama, this has been a very smooth launch process.

Two Primary Types of Validators

The validators in the active set are ones that meet the minimum effective staked KSM levels needed to be in the top 100.

Many of these appear to represent DOT holders who could claim KSM based on their DOT holdings and thus have large bonded amounts. I’m inferring this from the fact that transfers are not yet enabled, so large positions would have to come from claimed KSM.

The other set of validators are those that are not existing DOT holders and received KSM grants from W3F to be able to validate. The grants were 10 KSM, so I assume many of the validators with bonded amounts of less than 10 KSM are in this bucket. Many of these validators have received nominations, presumably from W3F and possibly others, to get into the active set.

There hasn’t been enough time for validators to really start to market themselves to try to attract nominations. This will presumably start to happen as the Kusama launch process continues to unfold, transfers are enabled, and the validator limit is potentially raised further.


NPoS Validator Strategies

While it is too early to tell what strategies validators will use to go to market, there is one notable strategy that has emerged: the “sprawl” strategy.

Cryptium Labs is currently running 19 of the 100 validators in the active set, far more than anyone else on the network at this time. In NPoS (Nominated Proof of Stake), this is not only allowed but perhaps expected. Given that validators are compensated with a flat fee for their service, running as many validators as can get into the active set is an economically rational strategy.

However, for some, the realization that large players could occupy many of the available slots was disheartening. Here is an exchange from the Riot rooms (where most of the discussions are happening) that illustrates the sentiment:

Fredy from DragonStake started with:

@derfredy:matrix.org

I wonder how the core ( Gav Bill | W3F federico … ) feel about the current adrianbrink | Cryptium Labs sybil attack.

Once we enable the TXs, the whole slots table could be filled with just 2 or 3 independent validators teams. Any concern?

Adrian from Cryptium Labs was quick to respond:

adrianbrink | Cryptium Labs

[snip…] public blockchains need to be designed so that they are secure against rational actors. Security based on altruism isn’t going to last long.
Btw, I’m not suggesting that Polkadot consensus is insecure. Maybe there needs to be more education about it though

And finally from Gavin:

Gav

i think it’s reasonable for w3f to use its KSM to keep the validator community pluralistic.

[snip…]

w3f has its funds;
w3f should act in whatever way it feels is best for the network;
having an active validator community with well-dispersed knowledge is good for kusama;
w3f should use some of its funds to help keep lesser staked validators of high reputation engaged.

This short exchange cuts right to the heart of some of the interesting ideas and dynamics around NPoS and how it will play out. Some good questions that this exchange raises: Will the NPoS design ultimately favor a smaller set of larger validators occupying multiple slots, or will it drive greater validator diversity? Is there such a thing as economically rational behavior that conforms to the protocol, but that nevertheless should be sanctioned or discouraged by social norms and convention?

NPoS is meant to be an improvement over the standard DPoS used in networks like Cosmos or Tezos. Its design does appear to be intended to discourage or prevent the concentration of stake behind any one validator, as concentration would lead to lower staking returns for rationally motivated nominators.

It is also meaningfully different from standard DPoS (Delegated Proof of Stake) because it has a separation between political power and validator services. This could guard against scenarios where, for example, validators are run for free to gather political power, as appears to be the case with the largest Cosmos validator. Many feel this leads to a weaker validator set, as it becomes difficult to fund legitimate validator businesses.

But if validator power can still be expressed in NPoS by allowing organizations or entities to run more than one validator, or perhaps dozens, then the decentralization benefit of NPoS may not be as great as many believed it would be.

I sympathize with Fredy from DragonStake’s point of view: the network is healthier with a more diverse set of validators, and smaller validators shouldn’t have to rely on the goodwill of the W3F for a shot at making it into the active set. And while W3F’s commitment to validator diversity is admirable, I also agree with Adrian from Cryptium Labs that what happens on these platforms is largely determined by the actions of rational economic actors playing by the rules codified in the protocol. Even if you have a set of social norms you try to enforce in your community, the permissionless nature of these systems means that someone can always come along, ignore your community and its norms, and do anything the protocol allows.

It is always hard to predict how these systems will play out. But it seems likely that larger, better-established validator companies will pursue a Cryptium-style strategy on Polkadot. We may not see this yet, as they don’t want to tip their hand or take on the infrastructure expense on Kusama, where there is no real opportunity for profit. It will be interesting to see whether, in the end, there is more or less validator diversity in Polkadot with NPoS versus Cosmos, Tezos, and other networks employing the simpler DPoS mechanism.

Decentralized Networks Are Magic

In the end, what has happened over the last 11 days demonstrates the magic of these new decentralized platforms. Something on the order of 100 organizations and people, with different backgrounds, locations, and skill sets, all came to the table with complicated setups of software and infrastructure to help launch and support a network.

This network is something larger than any one participant could have created and wouldn’t be possible without the contributions of all of the participants. I can think of no better example of the power of platforms like Polkadot to organize people and activity in ways that weren’t previously possible, to allow anyone to join, to compensate the participants for their contributions, and to create something emergent and higher order as a result.

Keep an eye out, as transfers will be enabled on Kusama soon, and I expect further shifts of stake, and in the active validator set, once that happens. Also feel free to leave me feedback on Riot if you agree (or disagree) with anything in this post: @mechanicalwatch:matrix.org.

Interested in staking with PureStake on Kusama? Nominate this address:

GhoRyTGK583sJec8aSiyyJCsP2PQXJ2RK7iPGUjLtuX8XCn


Demystifying Algorand Rewards Distribution: A Look at How & When Algorand Token Rewards Are Calculated

The Algorand Proof of Stake network issues rewards to its token-holders in order to stimulate and grow the network — but it’s not always clear how these returns are calculated. In this article, I’ll outline the basic framework of the Algorand rewards distribution model, and walk through some potential impacts it could have on the rewards calculations for current token-holders.

What are Network Rewards?

Traditional Proof of Work (PoW) blockchains (BTC, ETH) rely on miners to validate transactions and record them in consecutive blocks on the ledger. The miners compete against each other by solving complex mathematical problems, and the miner with the winning block receives a certain amount of tokens as a reward for their work.

This basic economic principle provides the incentive to support the infrastructure required to run the network. One of the major flaws of PoW networks is the unproductive effort put into competing to mine blocks, resulting in relatively slow performance and huge amounts of wasted energy.

The Algorand blockchain, like many newer blockchain networks, is solving the inherent issues of PoW with the concept of Proof of Stake. Unlike PoW miners, PoS networks rely on validators to verify transactions written to each block on the chain.

Algorand chooses its validators using a process called sortition, with selection weighted toward validators holding larger amounts of tokens, referred to as stake. Just like miners, the validators receive rewards for their work, but unlike PoW, no resources are wasted on unnecessary, competing work.

The rewards are typically a percentage yield on the amount of stake a validator holds. That way, the economic incentives for validators are directly linked to their stake: the higher your stake, the more interest you have in contributing to the health of the network, and the more rewards you earn.

How are Rewards Distributed on Algorand?

Algorand currently deploys a more liberal rewards scheme that benefits every token holder, whether or not their tokens are staked and participating in the consensus protocol. The idea behind this is to stimulate the adoption and growth of the network by rewarding all token holders equally. To support the operation of the network during this initial period, Algorand has issued token grants to early backers for running network nodes, helping to bootstrap a scalable and reliable initial infrastructure backbone.

As the network grows over time, I anticipate that the rewards distribution will shift to favor active stakeholders and validators.

Algorand Rewards Distribution Schedule

The current rewards distribution is determined and funded by the Algorand Foundation. You can read a detailed explanation of the overall token dynamics here.

The first 6M blocks on the Algorand blockchain have been divided into 12 reward periods of 500,000 blocks each. Each period is funded by an increasing amount of reward tokens to offset the increasing total supply of tokens. The token supply periodically increases due to early backer grant vesting and token auctions facilitated by the foundation.


Period | Start Date (estimated) | Starting Block | Ending Block | Rewards Pool (Algo) | Block Reward (Algo)
1      | 6/10/2019              | 1              | 500,000      | 10,000,000          | 20
2      | 7/7/2019               | 500,001        | 1,000,000    | 13,000,000          | 26
3      | 7/31/2019              | 1,000,001      | 1,500,000    | 16,000,000          | 32
4      | 8/25/2019              | 1,500,001      | 2,000,000    | 19,000,000          | 38
5      | 9/20/2019              | 2,000,001      | 2,500,000    | 22,000,000          | 44
6 *    | 10/15/2019             | 2,500,001      | 3,000,000    | 25,000,000          | 50
7      | 11/10/2019             | 3,000,001      | 3,500,000    | 28,000,000          | 56
8      | 12/5/2019              | 3,500,001      | 4,000,000    | 31,000,000          | 62
9      | 12/31/2019             | 4,000,001      | 4,500,000    | 34,000,000          | 68
10     | 1/25/2020              | 4,500,001      | 5,000,000    | 36,000,000          | 72
11     | 2/20/2020              | 5,000,001      | 5,500,000    | 38,000,000          | 76
12     | 3/16/2020              | 5,500,001      | 6,000,000    | 38,000,000          | 76

Algorand Network Rewards Schedule (* = current period at the time of writing)

Distribution Mechanics

At the time of writing, we have just entered the sixth period of the current rewards distribution schedule (marked with an asterisk in the table above). According to the schedule, 50 Algo are distributed as network rewards with each block, approximately every 4.4 seconds.

Algorand rewards are calculated at each block, based on the last account balance recorded on the blockchain for every address. The minimum account balance eligible for receiving rewards is currently 1 Algo.

If you monitor any eligible account, you will notice that rewards are not applied to the account balance at every block, but only after a certain number of blocks. This is a function of the smallest unit of Algo that the system can disburse, which is 1 microAlgo (10⁻⁶ Algo).

Since the minimum balance is currently 1 Algo and the smallest reward is 1 microAlgo, you can calculate the number of rounds it will take for rewards to be credited using this formula:

rounds = 10⁻⁶ / (block reward / total supply) = (10⁻⁶ × total supply) / block reward

With a supply of 2,103,868,588.605500 Algo at Block 2550212 and a Block Reward of 50 Algo, a reward of 1 microAlgo is dispersed for each Algo held every 42 rounds or roughly every 3 minutes. This will obviously change as the total supply grows.
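A few lines of Python reproduce this calculation from the figures above:

# Reproducing the rounds-per-microAlgo calculation using the figures above.
MICRO_ALGO = 1e-6
total_supply = 2_103_868_588.605500  # Algo, at block 2,550,212
block_reward = 50                    # Algo per block in period 6

reward_per_algo_per_round = block_reward / total_supply
rounds = MICRO_ALGO / reward_per_algo_per_round
print(round(rounds))                   # ~42 rounds
print(round(rounds * 4.4), "seconds")  # ~185 seconds, roughly 3 minutes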

Rewards APR vs APY

When you check the account balance of any address, the node API will return the balance that was last recorded on-chain plus any accrued rewards. The combined balance will not be written to the blockchain until the address appears in an actual transaction. In other words, rewards are only committed to the account balance on the blockchain when the address appears in a to-or-from address as part of a transaction. This is expected behavior, as ongoing, immediate on-chain updates of the balances of all eligible accounts on every round would pose a serious performance challenge.

There is a small but important consequence of this behavior. As pointed out earlier, rewards are calculated based on the recorded on-chain balance of each account, which means none of the accrued rewards are included in that ongoing calculation. In essence, the accrued rewards do not compound. For accounts with large balances, the difference in reward returns can be significant.

The current daily rewards percentage return, based on the assumption of 1 microAlgo per Algo every 3 minutes, is:

daily return = (24 × 60 / 3) × 10⁻⁶ × 100% = 480 × 10⁻⁶ × 100% = 0.048%

At the time of this post, an account holding 1M Algo would generate about 480 Algo per day, or roughly 175,200 Algo per year, in rewards, for an effective APR of 17.52%. If the rewards were compounded daily, the effective APY would increase to 19.15%. These numbers will change over time as the rewards schedule changes, the total supply grows, and transaction fees change.

Rewards Compounding

Compounding rewards is simple. Since rewards are calculated from the last recorded balance on the blockchain, the easiest way to force rewards compounding is to send a zero Algo payment transaction to the target address on a frequent, recurring basis. This transaction will trigger the commit of all accrued rewards and record them to the on-chain balance of the account.
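As a minimal sketch, assuming a recent version of py-algorand-sdk and access to an algod endpoint (the endpoint, token, mnemonic, and address below are placeholders), such a compounding transaction could look like this:

# Sketch: force rewards compounding with a zero-Algo self-payment.
# The endpoint, token, mnemonic, and address are placeholders.
from algosdk import mnemonic, transaction
from algosdk.v2client import algod

client = algod.AlgodClient("your-algod-token", "http://localhost:8080")

private_key = mnemonic.to_private_key("your 25-word account mnemonic ...")
address = "TARGET_ACCOUNT_ADDRESS"

params = client.suggested_params()
# A zero-amount payment from the account to itself commits all accrued
# rewards to the on-chain balance (at the cost of the 1,000 microAlgo fee).
txn = transaction.PaymentTxn(sender=address, sp=params, receiver=address, amt=0)
txid = client.send_transaction(txn.sign(private_key))
print("compounding txid:", txid)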

So what is the ideal compounding frequency if you want to maximize your rewards?

For one, it doesn’t make sense to send compounding transactions more frequently than the number of rounds it takes for rewards to be disbursed, currently just over 3 minutes. Second, every transaction has an associated fee, currently 1,000 microAlgo, so the cost of these transactions shouldn’t outweigh the gains achieved by compounding.

Given that:

Rewards APR: 17.52%
Transaction fee: 0.001 Algo


Net Compounding Rewards Interest at Specific Account Balances

Account Balance (Algo)                | 50,000,000.000000 | 5,000,000.000000 | 50,000.000000
Annual Simple Rewards Interest (Algo) | 8,760,000.000000  | 876,000.000000   | 8,760.000000

trx Frequency | trx/Year | trx Charge (Algo) | Net Comp Rewards Int (50M) | Rewards APY | Net Comp Rewards Int (5M) | Rewards APY | Net Comp Rewards Int (50K) | Rewards APY
10 min        | 52,560   | 52.560            | 9,574,154.528546 *         | 19.15%      | 957,368.148855            | 19.15%      | 9,521.647089               | 19.04%
Hour          | 8,760    | 8.760             | 9,574,111.351504           | 19.15%      | 957,403.251150 *          | 19.15%      | 9,565.360112               | 19.13%
2 x Day       | 730      | 0.730             | 9,572,971.479139           | 19.15%      | 957,296.490914            | 19.15%      | 9,572.242209 *             | 19.14%
Day           | 365      | 0.365             | 9,571,719.996054           | 19.14%      | 957,171.671105            | 19.14%      | 9,571.355361               | 19.14%
Week          | 52       | 0.052             | 9,556,683.398054           | 19.11%      | 955,668.293005            | 19.11%      | 9,556.631450               | 19.11%
Month         | 12       | 0.012             | 9,498,812.777423           | 19.00%      | 949,881.266942            | 19.00%      | 9,498.800789               | 19.00%

(* = maximum net return for that account balance)

The asterisks in the table above mark the maximum return achieved at a given compounding frequency for each account balance under the current conditions. The table also illustrates that there are diminishing returns at higher frequencies, and that, for most cases, a daily compounding transaction will suffice to capture the core benefit of compounding rewards.
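As a rough cross-check of these figures, here is a simple compound-interest approximation in Python. It assumes the APR is split evenly across compounding events; the actual on-chain accrual is simple interest between commits, so the results will be close to, but not exactly, the table values:

# Rough compound-interest approximation of the table above
# (a sanity check, not the exact model behind the table).
def net_compounded_rewards(balance, txns_per_year, apr=0.1752, fee=0.001):
    rate = apr / txns_per_year  # per-period rate of return
    gross = balance * ((1 + rate) ** txns_per_year - 1)
    return gross - fee * txns_per_year  # subtract total transaction fees

for label, n in [("day", 365), ("week", 52), ("month", 12)]:
    print(label, round(net_compounded_rewards(5_000_000, n), 2))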

Conclusion

Clearly, some frequency of rewards compounding always makes sense for long-held account balances (e.g. staked balances). Since rewards are currently distributed equally across all accounts, whether they are staked or not, the returns represent an inflationary rate across the entire supply more than an interest return for node operators or staked balances.

It is also important to note that the compounding gains are significantly exaggerated at this time due to the already high rewards interest and the low transaction fees. You will need to evaluate what amount of compounding makes sense for you, based on changing parameters and held account balances.

PureStake automatically compounds all customer accounts that are staking with us. Please reach out to us if you would like to learn more about this and our services.


Here Are the 4 Factors That Convinced Us to Become a Polkadot Validator

We have spent the last several months researching existing and soon-to-be-launched public blockchains, and we have come to the conclusion that Polkadot is an extremely ambitious and interesting project. Many teams are already building projects to be ready by the time the MainNet launches.

Given PureStake’s infrastructure and DevOps expertise as a company, the obvious way for us to engage is as a validator helping to secure the network. From there, we may expand to additional services within the Polkadot ecosystem.

This post will go into some of the rationale that led us to this decision. One of the most important points that influenced us is that the Polkadot vision aligns well with our vision of a multichain future. We also think that developer adoption is key to the success of next generation chains, and Polkadot is well positioned in this regard.

How Polkadot is Different

Too many chains are trying to do everything and be good at everything. The idea of blockchains that can talk to each other opens the door for specialization. Much like the Unix philosophy, individual Polkadot parachains can focus on doing one thing and doing it well, and larger effects can be achieved through the composability of different components.

Polkadot has the ability to accelerate innovation by significantly reducing the barriers to blockchain development and providing rich ground for experimentation. The Polkadot MainNet launch is fast approaching, and we are excited to provide secure and reliable validation services for the network. What follows are the reasons we chose Polkadot over the other networks out there.

Top 4 Reasons PureStake is Validating on Polkadot

Reason #1: Formidable Ecosystem & Leadership

Even though Polkadot hasn’t launched yet, they have already amassed an impressive ecosystem of notable developers, validators, partners, and projects, not least of which is Parity itself.

In addition to leading the development of Polkadot, Parity has a strong history and track record of delivering crypto infrastructure projects at production-grade performance and quality levels. The Parity Ethereum client is currently supporting a large part of the production Ethereum MainNet, and thus already supporting billions of dollars of crypto value. The Parity team, including Gavin Wood, are very close to Ethereum and familiar with all of its shortcomings; as a result, they are well-positioned to address Ethereum’s critical scalability challenges with Polkadot.

While Polkadot obviously has not yet built a community the size of Ethereum’s, it has already generated tons of information, documentation, chat group activity, and videos that make it relatively easy for newcomers to get up to speed.

At the launch of its MainNet, Polkadot will have substantial scalability and programmability advantages over Ethereum 1.0. Until Ethereum 2.0 becomes a reality, Polkadot seems well-positioned to gain developer traction, including by stealing away some market share from Ethereum.

Reason #2: Flexible Underlying Framework (Substrate)

Polkadot is built on Substrate, an impressive developer framework that can be used to build Polkadot-compatible blockchains. Substrate is a very powerful framework for developing blockchain applications, and it provides a lot of choices to developers looking to build decentralized applications.

If you want full control over your blockchain, you can use Substrate Core to build an application-specific blockchain that doesn’t even need to be part of the Polkadot network. Developing a blockchain this way is still much faster than building one from scratch, as Substrate handles many of the low-level subsystems you would otherwise need, out of the box.

Substrate gives you flexibility, though. Rather than using Substrate Core, you could pull from the SRML (Substrate Runtime Module Library) to plug in already-developed functionality for things like accounts and balances, fungible tokens, consensus mechanisms, and smart contract functionality. Alternatively, you could opt for the highest level of abstraction, Substrate Node, to get up and running with a custom blockchain very quickly and efficiently.

The quality and functionality of Substrate will almost certainly help draw developers in and spur adoption of Polkadot.

Reason #3: Scalable Design

Polkadot implements a Proof of Stake-based consensus mechanism on its main relay chain that uses a scheme called Nominated Proof of Stake. Proof of Stake-based consensus mechanisms offer several significant advantages over Proof of Work and other consensus algorithms, including scalability.

Right now, on the Polkadot Alexander TestNet, blocks are being produced roughly every 6 seconds. This is significantly faster than Ethereum (which is currently producing blocks every 13-13.5 seconds) and provides a scalable foundation for the rest of the system.

Another way that Polkadot achieves scalability is by parallelizing execution using parachains. Each parachain is its own blockchain, and each connects back to the main relay chain. By parallelizing execution across many parachains, Polkadot will inherently be much more scalable than a single-chain network like Ethereum, at least until Ethereum 2.0 and its very similar concept of shards is realized.

Parachains will allow for more transaction throughput through parallelization, but separating transactions into different parachains can also provide economic scalability. Developers of applications occupying parachain slots have control over the economics of their transactions: they can make certain classes of transactions less expensive, or even free, as opposed to the single economic model in use on more traditional single-blockchain systems. This will allow developers to optimize their applications on Polkadot for cost scalability when deployed.

Reason #4: Solid Security Posture

There are many parts of the Polkadot design that provide compelling security advantages, but there are two examples that stand out.

Stash Accounts and Controller Accounts

In our experience running crypto infrastructure at PureStake, a lot of time is spent worrying about key security, particularly keys that need to be warm or hot and online, versus cold and offline.

For a Polkadot validator, there are three different types of accounts and keys involved in the setup: a stash account, a controller account, and a session account. The stash account can be totally cold and offline — where you keep your funds. The controller account is warm, but needs to hold only a very minimal set of funds to perform certain specific transactions. And the session account is hot, but has no funds in it.

This design is much more secure than almost any other crypto network, since it allows you to store essentially all of your funds cold and offline.

Shared Validators

The shared validator security model in Polkadot provides security-as-a-service for all of the parachains.

This is quite different from Cosmos (the other major next-gen network enabling parallelized application-specific blockchains), where each zone is on its own to recruit validators for security.

There are a lot of next-gen blockchains launching in 2019 and 2020 with some form of Delegated Proof of Stake, and they all need professional validators to help secure their networks. There simply aren’t enough professional validators to go around. By adopting a shared security model, Polkadot has removed a big barrier to launching a parachain, which should speed up adoption of the network.

What’s Next for PureStake as an Early Polkadot Validator

To date, PureStake has been providing node, API, and other infrastructure services for blockchain networks, including supporting the Algorand MainNet launch in June.

Now we are expanding our services to become a validator on Polkadot and the Kusama BetaNet (in preparation for the Polkadot MainNet launch). We will leverage a lot of what we’ve already built to support the Algorand network — the skills on the team, existing infrastructure, and code — to deliver highly reliable and secure validator services for Polkadot stakers. That includes:

  • Base compute infrastructure
  • Base storage infrastructure including blockchain snapshot / restore
  • Base network, VPC, VPN infrastructure
  • Authentication and authorization services
  • DevOps automation stack
  • Multi-cloud approach across AWS, Azure, and Google
  • Elastic load-balancer and firewall infrastructure
  • IDS, IPS, vulnerability management services
  • OS patch management and automation
  • Key and secrets management infrastructure
  • Monitoring and alerting infrastructure
  • Log collection and analysis infrastructure
  • DevOps and SecOps processes and reporting

Since only minimal work is needed to port the elements above, we can focus our energy on elements that need more adaptation to support Polkadot validation. Some of these areas include:

  • Validator infrastructure design: create our version of the standard sentry / validator design to support the validation requirements in Polkadot’s NPoS design
  • Extend our cloud automation: support the VPC, VPN, and other networking elements that are part of the validator design
  • Update our DevOps automation at the node and blockchain storage levels to support Polkadot-specific requirements
  • Enhance our monitoring, alerting, and logging / log analysis for Polkadot
  • Add support for Polkadot keys and secrets so they can be managed securely
  • Train our DevOps team on all things Polkadot so they can effectively manage and troubleshoot the services

We expect the PureStake validator infrastructure to be ready in time for the Kusama BetaNet switch to Proof of Stake. If you’d like to learn more about our Polkadot validator or other services we are planning, drop us a line.


Struggling to Organize Your Algorand Addresses? Use This Utility to Generate Vanity Addresses

Everybody has had the experience of second-guessing themselves after sending a transaction. Did I send to the right address? Did I mistype a character? Why isn’t the transaction showing up in the explorer?

Algorand addresses consist of a random collection of 58 capital letters and numbers. Working with these addresses can be challenging, as they don’t have any meaning. Getting the address wrong by even one character often means that funds are lost forever.

Some wallets allow you to name addresses, but these names are only available in the context of the wallet. There are projects such as Ethereum Name Service which aim to provide a DNS-like mapping from human-readable names to Ethereum addresses, but this service is specific to Ethereum and cannot be used for other chains like Algorand.

Vanity Algorand Addresses Make It Easier

Think about phone numbers from the pre-mobile era.

For most people, phone numbers aren’t meaningful by themselves. Back then, you committed the phone numbers of family and friends to memory in their entirety. To introduce more meaning, businesses would often purchase vanity phone numbers that were easy for their customers to remember.

But when it comes to crypto accounts, raw addresses are far too long to remember in their entirety, so users — at best — remember the first few and last few characters.

Much like vanity phone numbers, vanity crypto addresses can be a way to make addresses more meaningful and easier to remember without the need for additional services. We might generate an address that starts with the string “PURE” to show that the account belongs to PureStake, or we might create a 5-letter customer code to help us remember which customer the account is for.

How to Use the Algorand Vanity Address Utility

I’ve created a small utility for generating Algorand vanity addresses. It can be used to generate addresses that start with a specified string. The time required to find a given address depends on the underlying computer hardware and, most importantly, on the number of characters you specify; 5 or 6 characters is the practical limit for a search that will finish in a reasonable amount of time.

The vanity generator can be downloaded here:
https://github.com/PureStake/api-examples/blob/master/python-examples/algo_vanity.py

It is written in Python and tested on Ubuntu 18.04. It requires python3, pip3, and py-algorand-sdk. It does not require a full Algorand node installation.

If you don’t have pip3 you can install it with:

sudo apt install python3-pip

With pip3 installed you can install py-algorand-sdk:

sudo pip3 install py-algorand-sdk

At this point, you should be able to run the program. Let’s generate an account starting with “PURE” that we can use as an internal company account. Pass the string you want to search for as an argument to the program.

derek@puredev:~/py$ ./algo_vanity.py PURE
Detected 2 cpu(s)
Found a match for PURE after 2431449 tries in 123.83 seconds
Address: PUREXXP2S2IIOUP7ZSBIVBHOA54ZLYNND5G3YIENSAHJZ5D7AAYSCM7K5E
Private key: noble city arrest oyster pluck tennis toast flip same business drum below flame must lonely gorilla you local turtle desk suspect anger basic abandon upper

Or, in the case of customer accounts, let’s create a couple of numbered accounts of the form CXXXX, where XXXX is a customer ID. Note that there is no 0 or 1 in Algorand addresses:

derek@puredev:~/py$ ./algo_vanity.py C2222
Detected 2 cpu(s)
Found a match for C2222 after 9893404 tries in 499.07 seconds
Address: C2222YNLSZHZ5PBO7L3UDOK7I5EZISQGSORDRGVSVNXGJNQCSKZDLK6LRE
Private key: swarm famous entry paper pause always magic hire burden aisle attack spring sport custom lend treat client burst decrease dad access pumpkin bulb ability blood
derek@puredev:~/py$ ./algo_vanity.py C2223
Detected 2 cpu(s)
Found a match for C2223 after 17209546 tries in 892.68 seconds
Address: C2223JBUAC43767TEDJCOD3T3N6A6L5ELVGGPSZWENAD7XUYFTMINWK734
Private key: dignity napkin faculty air mean mother crisp party spoon resource hub exhaust stand above logic siren book lock shallow shallow index copper hundred absent bring

The program works by brute force, generating address/key pairs over and over until it finds a match. There is no shortcut here; any shortcut would be a security problem, because it could equally be used to guess the key for an existing account. The activity is similar to a Proof of Work mining algorithm.
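If you want to see the core idea in code, here is a minimal single-process sketch of that brute-force loop using py-algorand-sdk. It is a simplified illustration of what algo_vanity.py does; the real script adds multiprocessing across all cores and argument handling:

# Minimal sketch of the brute-force search used by algo_vanity.py:
# keep generating accounts until one matches the desired prefix.
from algosdk import account, mnemonic

def find_vanity_address(prefix: str):
    tries = 0
    while True:
        tries += 1
        private_key, address = account.generate_account()
        if address.startswith(prefix):
            return address, mnemonic.from_private_key(private_key), tries

address, secret_mnemonic, tries = find_vanity_address("PURE")
print(f"Found {address} after {tries} tries")
print(f"Private key mnemonic: {secret_mnemonic}")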

If you use algo_vanity.py to generate vanity addresses, make sure you do it on a secure machine, and take appropriate security precautions with the generated private keys. The script will take advantage of all the cores it finds on the machine.

Feel free to use algo_vanity.py to generate more meaningful addresses for your business accounts, or maybe just for fun.

How to Set Up a Ledger Nano S with an Algorand Account

Security Check-up: How to Use Ledger Nano S to Secure Algorand Accounts

Key management can be very stressful for cryptocurrency investors and users who control a large amount of crypto funds. Despite your best efforts, it can be very easy for an attacker to compromise a key that has had any exposure to the internet, or to hijack a phone, in order to steal digital assets. So it’s no surprise that hardware wallets have taken over as a popular (and effective) counter-measure against hackers seeking access to your private keys.

Hardware wallets do have some limitations, though. In this article, I’ll briefly review some important points to consider before employing a hardware wallet, and then I’ll provide a step-by-step walkthrough on how to set up a Ledger Nano S in order to better secure a cryptocurrency account (in this case, an Algorand account).

Why You Should (or Shouldn’t) Use a Ledger

Hardware wallets such as the Ledger Nano S offer significant advantages over software-based wallets.

First, there’s the physical security element. By storing private keys for an account in a secure element within the hardware device, it becomes very difficult for an attacker to steal the private key for your account without having physical access to the device and knowledge of the PIN for the device.

By keeping the private key on a hardware wallet, you also reduce the risk of malware and other online attacks from compromising your account spending keys, since the keys never leave the physical device.

HOWEVER, hardware wallets can be finicky and difficult to use.

For example, in Algorand, only the command line tools support the Ledger device, which means you cannot use a Ledger with the Algorand mobile wallet, at least as of right now. Only the Ledger Nano S is supported; the Nano X is not yet. Using a Ledger on Algorand means you are limited to apps that specifically have Ledger support.

The Ledger also only supports a single key, so multisig configurations will require multiple Ledger devices, which adds complexity. Ledger support is planned for the Algorand mobile wallet application at some point, but it is not clear when this will happen. Using a Ledger with your Algorand account today requires comfort at the command line.

So, to recap:

Pros of Using a Hardware Wallet to Secure an Algorand Account

  • Difficult to steal the key without physical access to the device
  • Less likely to fall victim to malware & other online attacks

Cons of Using a Hardware Wallet to Secure an Algorand Account

  • Applications must explicitly support the wallet hardware; currently, only the command line tools support the Nano S in Algorand
  • Limited multisig support, requiring a complex multi-device, multi-step process

How to Set Up the Ledger Nano S for Use With Algorand

To use a Nano S to secure an Algorand account, you first have to go through the basic setup of the Nano S. For this article, I’m going to assume that you are starting with a fresh Ledger Nano S that will only be used to store ALGOs securely.

To start, you will download the Ledger Live application to your computer. Ledger Live is what you use to manage the applications on your Ledger device. You can download Ledger Live here.

Once you install it and plug your Nano S into your computer, click the “Get Started” button. You should see this screen:

 

Get Started with a Ledger Nano S on Ledger Live Screenshot

 

When initializing a new device, the first step is to choose a PIN code:

 

Choose Your PIN Code in Ledger Live

 

You should follow the steps in the Ledger Live application and on your Ledger Nano S device. There are 2 buttons at the top of the device. Hitting both buttons simultaneously acts as the “Enter” option.

Again, since I’m assuming this is a new device, you will want to elect to configure it as new. This will wipe out any previous configurations on the device.

 

How to Set Up a Ledger Nano S as a New Device

 

The next step is to choose a PIN code. This is a critical step in preventing someone who steals the device from using it to access your funds.

Follow the prompts on the device and in the Ledger Live app closely, and use the left and right buttons on the device to select a PIN code. It should be at least 6 digits long. Pressing both buttons advances you to the next position, and selecting the check mark indicates that you are done.

 

Choose a PIN Code for Your Ledger Nano S

 

The next step is to write down the recovery phrase for the device. This is critical: if you ever lose the device, or if it malfunctions, the recovery phrase lets you restore the account to another device. The Ledger Live app walks you through this:

 

Write Down the Recovery Phrase in Ledger Live So You Can Recover Your Ledger Nano S Later

 

You will need to write down all 24 words of the recovery phrase, and you will be tested to make sure you have written down all the words.

 

Confirm the Recovery Phrase of a Ledger Nano S

 

Once you have verified the recovery phrase, the base setup of the Ledger is complete.

Next, you need to use the Ledger Live application to install the Algorand application onto the device. Go to the manager section of the Ledger Live app, search for “algorand” and click to install the Algorand application.

 

Install the Algorand Application in the Manager Section of Ledger Live

 

Once the application is installed on the Ledger, you should see the Algorand app as shown below.

 

Install Algorand on Ledger Live to Use with Your Ledger Nano S

 

Installing the Algorand app on the Ledger created an Algorand account and stored the private key of the account in the secure element of the Ledger device. The private key never leaves the device, but you can see the account address if you go into the Algorand app on the device under “Address”:

 

View a Public Account Address on a Ledger Nano S

 

Using the Ledger From the Algorand Command Line

Now that we have the Ledger configured with the Algorand app, it is ready to use with an Algorand node installation.

In my examples below, I have Algod installed from the Debian package, running under Ubuntu 18.04 with a synced blockchain. The Ledger device is plugged into the computer running Algod.

The first thing we can do is look to see that the Ledger device has been recognized. We can do this with the “goal wallet list” command:

 

ubuntu@ubuntu:/var/lib/algorand$ sudo goal wallet list -d /var/lib/algorand
##################################################
Wallet:    Ledger Nano S (serial 0001) (default)
ID:    0001:000a:00
##################################################
ubuntu@ubuntu:/var/lib/algorand$ sudo goal account list -d /var/lib/algorand
[offline]    Y2I3YF5AHFBMRUNKKXY6VPOT6QITCQMXSB5RDM2LG2IE74HGLDIROANCNE    Y2I3YF5AHFBMRUNKKXY6VPOT6QITCQMXSB5RDM2LG2IE74HGLDIROANCNE    0 microAlgos
ubuntu@ubuntu:/var/lib/algorand$

 

Note that the Ledger shows up as a wallet on this computer once it is plugged in. I did not create this wallet with the “goal wallet new” command; it was created for me when I plugged in the Ledger device. Issuing the “goal account list” command shows the single account on the device and the balance of that account, which is 0. I also did not create this account with the “goal account” command; it simply came along with the wallet that was automatically created.

When you list the accounts, if you get the error message “Error processing command: Exchange: unexpected status 680,” it means that you need to unlock the Ledger with your PIN. It should work after that.

In this example, the Algod node is on the TestNet. In order to try out a transaction, let’s use the TestNet dispenser to give our Ledger account some testAlgo:

 

Issue Algo Using the Algorand Dispenser

 

Using the dispenser, we issue 100 testAlgo to our account. After dispensing the Algo, we can verify the balance using the “goal account list” command again:

 

ubuntu@ubuntu:~$ sudo goal account list -d /var/lib/algorand
[offline] Y2I3YF5AHFBMRUNKKXY6VPOT6QITCQMXSB5RDM2LG2IE74HGLDIROANCNE Y2I3YF5AHFBMRUNKKXY6VPOT6QITCQMXSB5RDM2LG2IE74HGLDIROANCNE 100000000 microAlgos
ubuntu@ubuntu:~$

 

Note that the account now has 100,000,000 microAlgos or 100 Algo in it. Now that we have a balance, let’s try sending a transaction from this account. To do this, we will use the “goal clerk send” command to send 1 Algo to another account:

 

ubuntu@ubuntu:~$ sudo goal clerk send -a 1000000 -f Y2I3YF5AHFBMRUNKKXY6VPOT6QITCQMXSB5RDM2LG2IE74HGLDIROANCNE -t OBONCJ4D4WEUYFWRDLZEJOMAN22HWZGZPAEWSPK7S6VOIHDCAFR3ACUSTA --note "" -d /var/lib/algorand

 

Note that the --note option with the empty string is needed: the Ledger does not support values in the note field, and goal will complain if you don’t explicitly set the note to blank.

Once you issue this command, you will be prompted on the Ledger to sign the transaction. Recall that the private signing key for this account never leaves the secure element, so the signing action happens on the Ledger device:

 

How to Initiate a Transaction on a Ledger Nano S

 

The Ledger device shows you a number of details about the transaction, including sender, firstvalid round, lastvalid round, genesis ID, genesis hash, receiver, and amount. You will ultimately be asked if you want to sign the transaction:

 

Sign a Transaction on a Ledger Nano S

 

If you select yes, you will see progress on the Algod command line:

 

Sent 1000000 MicroAlgos from account Y2I3YF5AHFBMRUNKKXY6VPOT6QITCQMXSB5RDM2LG2IE74HGLDIROANCNE to address OBONCJ4D4WEUYFWRDLZEJOMAN22HWZGZPAEWSPK7S6VOIHDCAFR3ACUSTA, transaction ID: TSTO3YZJAJJFL433VTMWGPEA6FKAEW34JYP32RAM7DCZV7ITIP6Q. Fee set to 1000
Transaction TSTO3YZJAJJFL433VTMWGPEA6FKAEW34JYP32RAM7DCZV7ITIP6Q still pending as of round 2060606
Transaction TSTO3YZJAJJFL433VTMWGPEA6FKAEW34JYP32RAM7DCZV7ITIP6Q committed in round 2060608
ubuntu@ubuntu:~$

 

Note that if you take too long, the operation can time out on the Algod side, requiring you to start over.

Once we have completed the transaction we can view the balances for the account once again:

 

ubuntu@ubuntu:~$ sudo goal account list -d /var/lib/algorand
[offline] Y2I3YF5AHFBMRUNKKXY6VPOT6QITCQMXSB5RDM2LG2IE74HGLDIROANCNE Y2I3YF5AHFBMRUNKKXY6VPOT6QITCQMXSB5RDM2LG2IE74HGLDIROANCNE 98999000 microAlgos
ubuntu@ubuntu:~$

 

You can see that the account that had 100 Algo in it now has 98.999000 Algo. 1 Algo was sent to OBONCJ4D4WEUYFWRDLZEJOMAN22HWZGZPAEWSPK7S6VOIHDCAFR3ACUSTA, and there was a 1,000 microAlgo transaction fee on top of that, which gets us to the resulting balance.
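As a sanity check, the arithmetic is easy to reproduce. All values are in microAlgos:

# Reproducing the balance arithmetic (1 Algo = 1,000,000 microAlgos).
starting_balance = 100_000_000  # 100 Algo from the dispenser
amount_sent = 1_000_000         # 1 Algo sent
fee = 1_000                     # default transaction fee
print(starting_balance - amount_sent - fee)  # 98999000, i.e. 98.999 Algo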

 

A multisig transaction example using Algo and an Algorand wallet

A Multisig Transaction Example: 5 Steps to Sending Algo Securely with an Algorand Multisig Account

Multisig transactions are a great deal more secure than single-key transactions, and for good reason: you remove a single point of failure and distribute signing responsibility across a greater number of keys. However, this can add a great deal of effort to the transaction process when it comes time to send your funds to another account.

In this article, I’ll walk you through an offline multisig transaction example using an existing Algorand account (follow the steps in this article if you have not already created an account). While this example shows you how to spend funds, the same steps will apply to registering a participation key, bidding on auctions, and (in the future) voting.

To begin, let’s review all the things we have so far:

  • An online computer with a working Algorand node installation
  • An “Ubuntu” bootable USB drive, which can be used for an offline computer
  • A “Keys” USB drive with algokey and a file with the keys for our multisig account
  • A “Transfer” USB drive for transferring files between the online and offline computers

We will use these components to securely send a transaction from our multisig account while keeping our spending keys totally offline. The process will be:

  1. Create an unsigned transaction on the online computer
  2. Move this transaction file to the offline computer
  3. Sign the transaction on the offline computer
  4. Move the signed transaction back to the online computer
  5. Send it to the network

1. Prep Spend Transaction and Save Out to tx File (Online)

The process starts on the online computer. We will prepare an unsigned transaction file that describes the transaction we want to execute. Our transaction will be to send 1 Algo from the multisig account we created in this post to a destination account with address: 5DJNGUEXNRUKAQODHGO3KS2HXOHN4YMSLSZQGEAH5L3WMMDFKMZEQMURUY.

Open a terminal on the online computer and issue the goal node status command:

purestake@algo-node:~$ goal node status
Last committed block: 1630913
Time since last block: 2.7s
Sync Time: 0.0s
Last consensus protocol: https://github.com/algorandfoundation/specs/tree/5615adc36bad610c7f165fa2967f4ecfa75125f0
Next consensus protocol: https://github.com/algorandfoundation/specs/tree/5615adc36bad610c7f165fa2967f4ecfa75125f0
Round for next consensus protocol: 1630914
Next consensus protocol supported: true
Genesis ID: mainnet-v1.0
Genesis hash: wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=
purestake@algo-node:~$

The goal node status command returns information about the node and its view of the blockchain. Make a note of the “Last committed block” value, which we will need when we construct our transaction file. The reason is that transaction files are only valid for up to 1000 rounds, or blocks, so we need to specify a validity range with the last committed block as the starting value. The goal clerk send command can be used to create the transaction file:

purestake@algo-node:~$ goal clerk send -a 1000000 -f FHYPIFKSTJTUT4QCR4MJWZUY2Y4URVFHF2HIXGZ4OHEZSIEQ5Q64IIGEMY -t 5DJNGUEXNRUKAQODHGO3KS2HXOHN4YMSLSZQGEAH5L3WMMDFKMZEQMURUY --firstvalid 1630913 --lastvalid 1631912 -o transaction.tx
Please enter the password for wallet 'MyWallet':
purestake@algo-node:~$

The goal clerk command above creates a file called transaction.tx in the working directory with an unsigned transaction that will send 1 Algo from FHYPIFKSTJTUT4QCR4MJWZUY2Y4URVFHF2HIXGZ4OHEZSIEQ5Q64IIGEMY to 5DJNGUEXNRUKAQODHGO3KS2HXOHN4YMSLSZQGEAH5L3WMMDFKMZEQMURUY with a validity range of block 1630913 to 1631912.

Note that the amount is specified in microAlgo as 1000000. The Algorand command line tools generally take Algo amounts in microAlgo, or millionths of an Algo. Be very careful when specifying amounts as arguments to these commands: without commas to create visual separation, it is very easy to make a mistake with an extra or missing zero.

I used the previously-recorded block height of 1630913 as the firstvalid argument. To come up with the lastvalid value, I added 999 to the firstvalid value, since generated transactions can have a maximum validity of 1000 blocks. Blocks are currently being finalized every ~4.5 seconds, so the transaction file will be valid for roughly 75 minutes. This can make timing tricky, depending on the coordination needed to actually sign the transaction. However, you can specify validity ranges further in the future if you need more time to perform the signing action.
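For those who prefer the Python SDK, the sketch below builds the same unsigned payment with py-algorand-sdk rather than goal. The addresses, rounds, and genesis values are taken from the goal output above. Note that this is only an illustration: goal also embeds the multisig preimage in the file it writes, which a plain PaymentTxn does not carry.

# Sketch: building the same unsigned payment with py-algorand-sdk.
from algosdk.transaction import PaymentTxn, SuggestedParams, write_to_file

# Validity window: firstvalid = last committed block, lastvalid = firstvalid + 999.
params = SuggestedParams(
    fee=1000,
    first=1630913,
    last=1631912,
    gh="wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=",
    gen="mainnet-v1.0",
    flat_fee=True,
)
txn = PaymentTxn(
    sender="FHYPIFKSTJTUT4QCR4MJWZUY2Y4URVFHF2HIXGZ4OHEZSIEQ5Q64IIGEMY",
    sp=params,
    receiver="5DJNGUEXNRUKAQODHGO3KS2HXOHN4YMSLSZQGEAH5L3WMMDFKMZEQMURUY",
    amt=1_000_000,  # 1 Algo, specified in microAlgo
)
write_to_file([txn], "transaction.tx")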

Inspect tx File (Online)

It is always good practice to check the transaction file for correctness before proceeding to subsequent steps. The file is binary, so opening it in a text editor is not useful, but we can use the “goal clerk inspect” command to look at its contents:

purestake@algo-node:~$ goal clerk inspect transaction.tx
transaction.tx[0]
{
  "msig": {
    "subsig": [
      {
        "pk": "OBONCJ4D4WEUYFWRDLZEJOMAN22HWZGZPAEWSPK7S6VOIHDCAFR3ACUSTA"
      },
      {
        "pk": "P7ZEFUIWTABXLMC77P3DAE5ZMU7BDY3HZ4KF7ZXSPTCYKZ4AOCKGRZTCUE"
      },
      {
        "pk": "JPPERBQVBGKHMKTVZUOQKSZHVDYMC3AYYD6NHT355HEZHZXW5CLNUIMJT4"
      },
      {
        "pk": "GW5J5C2X7L7F2NIWISELS5EQI74Y5W6VDZ2W45NLIYY256EUYLKORY7AJE"
      },
      {
        "pk": "ANQADWSXUDMOHYYOVAKII3COO3KIBBXXLFF2RPSCFIVXQJZOZ76DKR5YPU"
      }
    ],
    "thr": 3,
    "v": 1
  },
  "txn": {
    "amt": 1000000,
    "fee": 1000,
    "fv": 1630913,
    "gen": "mainnet-v1.0",
    "gh": "wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=",
    "lv": 1631912,
    "note": "y0+1BZ82wxY=",
    "rcv": "5DJNGUEXNRUKAQODHGO3KS2HXOHN4YMSLSZQGEAH5L3WMMDFKMZEQMURUY",
    "snd": "FHYPIFKSTJTUT4QCR4MJWZUY2Y4URVFHF2HIXGZ4OHEZSIEQ5Q64IIGEMY",
    "type": "pay"
  }
}

purestake@algo-node:~$

The first section shows that this is a transaction from a multisig account, and the 5 public keys of the multisig account are present. In the bottom section, you can find the details of the transaction that we specified on the command line, such as the amount, firstvalid, lastvalid, the destination address, etc. The fee value is the fee for sending the transaction, which currently defaults to 1000 microAlgo.
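The same inspection can be done programmatically. Here is a minimal sketch with py-algorand-sdk, assuming the file was written by goal as above; depending on the file contents, entries come back either as plain transactions or as multisig transactions with the preimage attached:

# Sketch: inspecting a goal-written transaction file with the SDK.
from algosdk.transaction import MultisigTransaction, retrieve_from_file

for obj in retrieve_from_file("transaction.tx"):
    # Unwrap the inner transaction if the file carries a multisig preimage.
    txn = obj.transaction if isinstance(obj, MultisigTransaction) else obj
    print(txn.sender, txn.receiver, txn.amt, txn.first_valid_round, txn.last_valid_round)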

2. Copy tx File to the Air-gapped Machine

The transaction file transaction.tx is ready to be signed. But recall that we don’t have the spending keys on this online computer. The spending keys are on a USB device and will be used for signing on the offline computer. The unsigned transaction file cannot be used to send funds without being signed with 3 of the 5 spending keys associated with the multisig account, so it is reasonably safe to copy this file.

Copy transaction.tx to the “Transfer” USB drive.

As a next step, reboot the computer using the Ubuntu USB drive into an offline state and plug in the Transfer and Keys USB drives.

3. Sign tx File on the Air-gapped Machine (Offline)

Once we are booted into the offline Ubuntu desktop, we will perform the signing action for the transaction. We will create a folder on the Ubuntu desktop called tx and copy into it algokey, the text file containing the keys, and transaction.tx. To sign transaction.tx, open a terminal in the tx folder on the desktop and issue the algokey multisig command to sign the transaction file:

ubuntu@ubuntu:~/Desktop/tx$ ./algokey multisig -t transaction.tx -o transaction1.tx.signed -m "expire wear husband fancy now until laundry token strong dignity arrow valley post raven pudding farm twin chalk cloud tenant cart off shop abandon trophy"
ubuntu@ubuntu:~/Desktop/tx$ ./algokey multisig -t transaction.tx -o transaction2.tx.signed -m "lucky dust hub crew barely leave gas crew canvas exhibit margin mixed impose air wasp chat athlete sketch ozone humble parent rail remind abandon host"
ubuntu@ubuntu:~/Desktop/tx$ ./algokey multisig -t transaction.tx -o transaction3.tx.signed -m "draft mule stamp run absent congress leopard notice minute hungry fresh physical flee favorite cram green salad promote remember route assume gentle early absorb during"

These 3 algokey multisig commands each perform a private key signing action on the provided transaction.tx that we created in a previous step, as outlined in this blog post. The private key is supplied on the command line as a mnemonic, and each invocation creates a different signed transaction output file, transaction1.tx.signed, transaction2.tx.signed, and transaction3.tx.signed.
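The equivalent signing step can also be sketched with py-algorand-sdk, assuming the transaction file carries the multisig preimage as shown in the goal clerk inspect output above. The mnemonic below is a placeholder to be replaced with one of the 25-word secrets:

# Sketch: adding one subsignature to the multisig transaction with the SDK.
from algosdk import mnemonic
from algosdk.transaction import retrieve_from_file, write_to_file

mtx = retrieve_from_file("transaction.tx")[0]  # MultisigTransaction with preimage
sk = mnemonic.to_private_key("<25-word mnemonic for key 1>")  # placeholder
mtx.sign(sk)  # appends this key's subsignature
write_to_file([mtx], "transaction1.tx.signed")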

4. Move tx Files Back From the Air-Gapped Machine

With the signed transaction files in hand, copy transaction1.tx.signed, transaction2.tx.signed, and transaction3.tx.signed to the Transfer USB, remove the Ubuntu bootable USB and the Keys USB, and reboot the computer back to its regular online mode. Once it is booted, log in and copy the 3 signed transaction files from the Transfer USB to a directory on the computer. In my case, I just put the files in my user’s home directory.

Merge the Signatures Back to a Single tx File

We can inspect one of the signed transaction files using the same goal clerk inspect command that we used before to inspect the unsigned transaction.tx file. Issue the following command:

purestake@algo-node:~$ goal clerk inspect transaction1.tx.signed
transaction1.tx.signed[0]
{
  "msig": {
    "subsig": [
      {
        "pk": "OBONCJ4D4WEUYFWRDLZEJOMAN22HWZGZPAEWSPK7S6VOIHDCAFR3ACUSTA",
        "s": "M7dVRrm9zmcE0dLkZTMX7JTjk/tsZdIgLn0qQuL9sGDDCnPZfiKRE9kpBYpSyfZ9uWvtCijJzJIInIbtNijRBg=="
      },
      {
        "pk": "P7ZEFUIWTABXLMC77P3DAE5ZMU7BDY3HZ4KF7ZXSPTCYKZ4AOCKGRZTCUE"
      },
      {
        "pk": "JPPERBQVBGKHMKTVZUOQKSZHVDYMC3AYYD6NHT355HEZHZXW5CLNUIMJT4"
      },
      {
        "pk": "GW5J5C2X7L7F2NIWISELS5EQI74Y5W6VDZ2W45NLIYY256EUYLKORY7AJE"
      },
      {
        "pk": "ANQADWSXUDMOHYYOVAKII3COO3KIBBXXLFF2RPSCFIVXQJZOZ76DKR5YPU"
      }
    ],
    "thr": 3,
    "v": 1
  },
  "txn": {
    "amt": 1000000,
    "fee": 1000,
    "fv": 1630913,
    "gen": "mainnet-v1.0",
    "gh": "wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=",
    "lv": 1631912,
    "note": "y0+1BZ82wxY=",
    "rcv": "5DJNGUEXNRUKAQODHGO3KS2HXOHN4YMSLSZQGEAH5L3WMMDFKMZEQMURUY",
    "snd": "FHYPIFKSTJTUT4QCR4MJWZUY2Y4URVFHF2HIXGZ4OHEZSIEQ5Q64IIGEMY",
    "type": "pay"
  }
}

This looks very similar to the unsigned transaction.tx that we inspected before, but note that the first public key (pk) in the top section now has an “s” value. This “s” value is the signature that was created using the private key for that address; it is not the private key itself, but it proves knowledge of the private key. The other 2 files look similar, but have an “s” value for public keys 2 and 3. What we need to do is merge all of these signatures into a single transaction file, which we will call transaction.tx.signed. We can do this using the goal clerk multisig merge command like this:

purestake@algo-node:~$ goal clerk multisig merge -o transaction.tx.signed transaction1.tx.signed transaction2.tx.signed transaction3.tx.signed
purestake@algo-node:~$

We now have a merged signed transaction file called transaction.tx.signed in the working directory.
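For completeness, here is the same merge expressed with py-algorand-sdk, a hedged sketch assuming each signed file holds one multisig transaction:

# Sketch: merging the three partially signed files with the SDK.
from algosdk.transaction import MultisigTransaction, retrieve_from_file, write_to_file

parts = []
for name in ("transaction1.tx.signed", "transaction2.tx.signed", "transaction3.tx.signed"):
    parts.extend(retrieve_from_file(name))
merged = MultisigTransaction.merge(parts)
write_to_file([merged], "transaction.tx.signed")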

Inspect tx File Before Sending (Online)

Now let’s inspect the resulting merged transaction.tx.signed file:

purestake@algo-node:~$ goal clerk inspect transaction.tx.signed
transaction.tx.signed[0]
{
  "msig": {
    "subsig": [
      {
        "pk": "OBONCJ4D4WEUYFWRDLZEJOMAN22HWZGZPAEWSPK7S6VOIHDCAFR3ACUSTA",
        "s": "M7dVRrm9zmcE0dLkZTMX7JTjk/tsZdIgLn0qQuL9sGDDCnPZfiKRE9kpBYpSyfZ9uWvtCijJzJIInIbtNijRBg=="
      },
      {
        "pk": "P7ZEFUIWTABXLMC77P3DAE5ZMU7BDY3HZ4KF7ZXSPTCYKZ4AOCKGRZTCUE",
        "s": "rjITXvqzQwFWZ5shfXjhkxpcAkPSJquv9s2gLACLljHKnaoYefTGUXjfKZHtGZixFIAGPWr22DMrk/rcdnf8CA=="
      },
      {
        "pk": "JPPERBQVBGKHMKTVZUOQKSZHVDYMC3AYYD6NHT355HEZHZXW5CLNUIMJT4",
        "s": "w08AQ3gJr9W8qVmV1HN4o7okFjU/ozWIHGs3kn4cWjRkx/j1xO3wv+bL5X7fFjt208zaFuacE0y6jKIIc2p3DQ=="
      },
      {
        "pk": "GW5J5C2X7L7F2NIWISELS5EQI74Y5W6VDZ2W45NLIYY256EUYLKORY7AJE"
      },
      {
        "pk": "ANQADWSXUDMOHYYOVAKII3COO3KIBBXXLFF2RPSCFIVXQJZOZ76DKR5YPU"
      }
    ],
    "thr": 3,
    "v": 1
  },
  "txn": {
    "amt": 1000000,
    "fee": 1000,
    "fv": 1630913,
    "gen": "mainnet-v1.0",
    "gh": "wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=",
    "lv": 1631912,
    "note": "y0+1BZ82wxY=",
    "rcv": "5DJNGUEXNRUKAQODHGO3KS2HXOHN4YMSLSZQGEAH5L3WMMDFKMZEQMURUY",
    "snd": "FHYPIFKSTJTUT4QCR4MJWZUY2Y4URVFHF2HIXGZ4OHEZSIEQ5Q64IIGEMY",
    "type": "pay"
  }
}

You can see that the first three public keys all have “s” value signatures. We only need signatures for the first three public keys because this is a 3-of-5 multisig, and three valid signatures meet the threshold for the account. If we had signed with a fourth or fifth key, this wouldn’t cause any problems, but it isn’t necessary. This transaction is ready to be broadcast to the network.

5. Broadcast tx to the Network (Online)

To broadcast the signed transaction to the network we can use the goal clerk rawsend command:

purestake@algo-node:~$ goal clerk rawsend -f transaction.tx.signed
Raw transaction ID AAG34CUUNSMZJNNRKYV22UNOPFBQB57XPWNEF6D4CGSOT7ZRPSEB issued
Transaction AAG34CUUNSMZJNNRKYV22UNOPFBQB57XPWNEF6D4CGSOT7ZRPSEB still pending as of round 1631216
Transaction AAG34CUUNSMZJNNRKYV22UNOPFBQB57XPWNEF6D4CGSOT7ZRPSEB committed in round 1631218
purestake@algo-node:~$

The transaction ID for your transaction will be unique and different from what you see above. Algorand finalizes blocks in under 5 seconds, so you shouldn’t have to wait long for the transaction to be committed. Once it is committed, you can check account balances to make sure you see the expected balance change. A transaction file cannot be used twice; sending it again will result in an error.
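If you would rather broadcast through the REST API than goal, a minimal sketch with py-algorand-sdk looks like this; the API token and address are placeholders for your own node’s values:

# Sketch: broadcasting the merged signed transaction via the algod REST API.
from algosdk.v2client.algod import AlgodClient
from algosdk.transaction import retrieve_from_file

client = AlgodClient("<your-algod-api-token>", "http://localhost:8080")  # placeholders
mtx = retrieve_from_file("transaction.tx.signed")[0]
txid = client.send_transactions([mtx])
print("Transaction ID:", txid)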

Conclusion: More Complex, But More Secure, Too

This multisig transaction example shows that setting up a multisig account and executing a transaction are a lot more complicated than using a single spending key account directly on an online computer with an Algorand node installation. But by using a multisig account, we substantially improve the security of the setup and greatly reduce the risk that the private keys, and thus the funds in the account, will be compromised.

Algorand’s multisig features can be used to create multiple keys that split the secrets needed to spend funds across different people and locations. The exact number of keys, their locations, storage, and custodians will vary according to the environment and situation, but multisig opens the door to a much more secure setup than a single-key account.

Keeping the spending keys totally offline is another substantial improvement to Algorand account security for high-value accounts. Most of the attack vectors for compromising keys involve online scenarios, malware, or other network exploits. By never having the secrets on an online computer, the risk of key compromise is greatly reduced. Another way to improve the security of an Algorand account is to use a Ledger hardware wallet, which will be the subject of a future blog post.

How to Use Multisig Accounts in Algorand

How to Use Multisig and Offline Keys with Algorand

Multisig accounts and offline keys provide a great deal of added security, but are not always simple to set up. To help you get started, I’ve outlined the steps you will need to take to create a multisig account with Algorand and store keys offline on an air-gapped device.

For this tutorial, you will need at least 3 USB drives:

  1. “Ubuntu” to serve as a bootable Ubuntu USB device
  2. “Keys” to hold the algokey binary and private keys
  3. “Transfer” to move transaction files to and from the offline computer

If you are going to store significant funds in the account being created, make sure that the USB drives are new, so there is no chance of any unwanted data or malware on the drives. It may seem excessive to use so many USB drives, but in the case of the private key drive, it is important that it is never plugged into a computer that is on the internet.

While the ideal approach to using multisig keys offline is to have a separate, dedicated laptop or computer, that will not be necessary to complete this tutorial. I will demonstrate how to use a bootable USB device in place of a dedicated offline machine.

Setting Up Your USB Drives

1. Download and Install the Algorand Node Software

On your online computer, download and install the Algorand node software.  This software will be used to interact with the Algorand network. Installation instructions for Algorand node software on different platforms can be found here: https://developer.algorand.org/docs/introduction-installing-node

For my examples, the online computer will be running Ubuntu 18.04 with Algorand installed using the Debian package from the official repositories, following these instructions: https://developer.algorand.org/docs/installing-ubuntu

2. Create an Ubuntu Bootable USB Device

In order to create an Ubuntu bootable USB device, you can follow the instructions below depending on your OS:

Windows: https://tutorials.ubuntu.com/tutorial/tutorial-create-a-usb-stick-on-windows#0

Mac: https://tutorials.ubuntu.com/tutorial/tutorial-create-a-usb-stick-on-macos#0

3. Label Your USBs

When we need to perform sensitive signing actions on our offline computer, we will boot our computer using the bootable USB device you have just created.  Label it “Ubuntu” or something similar to identify it as the bootable USB device.

The second USB device will hold the algokey binary and the private keys that we will use to sign transactions on the offline computer.  Just label this USB drive “Keys” or similar for now. Do not plug this drive into the online computer.  This USB drive should never be plugged into any computer other than the offline computer.  We will put the algokey binary on it in a later step.

The third USB drive will be used to transfer files to and from the online and offline computers.  Label the drive “Transfer” or similar.

4. Copy the Algokey Binary to the Transfer USB

The algokey binary is part of the Algorand node installation, but it is a standalone executable that can be copied to a different computer running the same architecture / operating system without needing a full node installation.

From our online computer, which has a node installation based on the Debian package, the algokey binary can be found at /usr/bin/algokey.  Copy the algokey binary to the Transfer USB drive. We will later move algokey to the offline computer and ultimately to the Keys drive, but we will do that once we have the offline computer set up.

Creating an Algorand Multisig Account (Offline)

Let’s start by creating a new 3-of-5 multisig account that we will use to store Algo securely.

Before we start issuing commands, we need to use the Ubuntu USB drive to boot into our offline environment.  Insert the Ubuntu USB drive into your computer and reboot the machine. You may need to enter the BIOS of your computer to tell it to boot from the USB device.

Once you are booted from the USB, choose “Try Ubuntu” (not “Install Ubuntu”).

After the computer is booted, you will be logged into an ephemeral Ubuntu desktop without networking.  This will be our offline environment for signing transactions.  Create a folder on the desktop that we will be working in. In the examples below, I called mine “tx.” Insert the Transfer USB device and copy the algokey binary to the tx directory.  Open a terminal window to the tx directory and change the permissions of algokey to make it executable, and test running it to make sure everything is working:

 

ubuntu@ubuntu:~$ cd Desktop/tx
ubuntu@ubuntu:~/Desktop/tx$ ll
total 22472
drwxr-xr-x 2 ubuntu ubuntu       60 Sep 2 10:48 .
drwxr-xr-x 4 ubuntu ubuntu      100 Sep 2 10:48 ..
-rw-r--r-- 1 ubuntu ubuntu 23011080 Sep  2 10:48 algokey
ubuntu@ubuntu:~/Desktop/tx$ chmod 755 algokey 
ubuntu@ubuntu:~/Desktop/tx$ ./algokey -h
CLI for managing Algorand keys

Usage:
  algokey [flags]
  algokey [command]
 
Available Commands:
  export      Export key file to mnemonic and public key
  generate    Generate key
  help        Help about any command
  import      Import key file from mnemonic
  multisig    Add a multisig signature to transactions from a file using a private key
  sign        Sign transactions from a file using a private key

Flags:
  -h, --help   help for algokey

Use "algokey [command] --help" for more information about a command.
ubuntu@ubuntu:~/Desktop/tx$ 

 

We are going to use the algokey utility to create 5 accounts with 5 associated private keys.  These accounts will later be combined to form one 3-of-5 multisig account. We perform this account creation step on the offline machine so that we can record the 5 secrets securely, and so that these secrets are never online.  We will do this by running the “algokey generate” command 5 times.

 

ubuntu@ubuntu:~/Desktop$ ./algokey generate
Private key mnemonic: expire wear husband fancy now until laundry token strong dignity arrow valley post raven pudding farm twin chalk cloud tenant cart off shop abandon trophy
Public key: OBONCJ4D4WEUYFWRDLZEJOMAN22HWZGZPAEWSPK7S6VOIHDCAFR3ACUSTA
ubuntu@ubuntu:~/Desktop$ ./algokey generate
Private key mnemonic: lucky dust hub crew barely leave gas crew canvas exhibit margin mixed impose air wasp chat athlete sketch ozone humble parent rail remind abandon host
Public key: P7ZEFUIWTABXLMC77P3DAE5ZMU7BDY3HZ4KF7ZXSPTCYKZ4AOCKGRZTCUE
ubuntu@ubuntu:~/Desktop$ ./algokey generate
Private key mnemonic: draft mule stamp run absent congress leopard notice minute hungry fresh physical flee favorite cram green salad promote remember route assume gentle early absorb during
Public key: JPPERBQVBGKHMKTVZUOQKSZHVDYMC3AYYD6NHT355HEZHZXW5CLNUIMJT4
ubuntu@ubuntu:~/Desktop$ ./algokey generate
Private key mnemonic: primary tone inquiry video bicycle satisfy combine pony capable stamp design cable hub defy soup return calm correct cram buyer perfect swim tone able math
Public key: GW5J5C2X7L7F2NIWISELS5EQI74Y5W6VDZ2W45NLIYY256EUYLKORY7AJE
ubuntu@ubuntu:~/Desktop$ ./algokey generate
Private key mnemonic: seminar screen join potato illegal vacuum predict measure cable reject crazy document edit erosion decline giggle neutral theory orient keen slow walnut reject absorb rain
Public key: ANQADWSXUDMOHYYOVAKII3COO3KIBBXXLFF2RPSCFIVXQJZOZ76DKR5YPU
ubuntu@ubuntu:~/Desktop$ 

 

A few important things to note.  First, you will get different accounts and secrets when you run “algokey generate.”  DO NOT USE THE ACCOUNTS LISTED ABOVE: they are example accounts created for this tutorial, and, most importantly, the spending keys are right here on this webpage.  Anyone reading this post can spend the funds in these accounts or in any multisig account based on them.

Second, note that every time you run “algokey generate,” you get a valid single key account with a public key and a private key.  In Algorand, you will often hear the public key referred to as the address, and the private key as the spending key or mnemonic.

Third, observe that the private key mnemonic has 25 words, which is quite unusual.  In other crypto systems, you will typically see word lists that encode a seed phrase or secret using 12 or 24 words.  Algorand uses 25 words, so make sure you get all 25. If you plan to use something like Cryptosteel to store the seed phrase, the 25th word will overflow a single plate, which is designed to hold only 24 words.
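You can verify the 25-word property yourself with py-algorand-sdk; a throwaway account is enough:

# Quick check that an Algorand private key mnemonic really has 25 words.
from algosdk import account, mnemonic

private_key, address = account.generate_account()  # throwaway account
words = mnemonic.from_private_key(private_key).split()
print(len(words))  # 25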

The public and private keys need to be securely recorded for your accounts.  One way to do this is to write them down on 5 separate pieces of paper, store them on 5 separate USB drives, etc.  Having a paper backup is a good idea in case the USB drives fail. To store them more securely on a USB drive, save them in a file encrypted with PGP or similar, and keep the encryption passphrase securely stored separate from the drive.

In terms of distribution, you could put the keys in 5 different locations or give them to 5 different people for safekeeping.  There are many ways to securely store these keys, including bank safety deposit boxes, cryptosteel plates, and other options.

For the purposes of this tutorial, I will put all 5 keys in a file on the same USB drive labelled “Keys,” but this is not recommended for production use.  It is important to number the keys 1-5; the order of the keys will be important when we set up the multisig account in the next step.

Set Up a Wallet and Multisig Account (Online)

For this step, you need to go back to the online computer with the online Algorand node.  If you are sharing the same computer for both online and offline needs, remove all USB drives and reboot the computer to bring it back to its online state.  Log in to the computer and open a terminal. We will first create a wallet using the goal command:

 

purestake@algo-node:~$ goal wallet new MyWallet
Please choose a password for wallet 'MyWallet':
Please confirm the password:
Creating wallet...
Created wallet 'MyWallet'
Your new wallet has a backup phrase that can be used for recovery.
Keeping this backup phrase safe is extremely important.
Would you like to see it now? (Y/n): n
purestake@algo-node:~$ 

 

In the example above, I named the wallet “MyWallet,” but you can name it whatever you want.  I also specified a password for the wallet. The reason I elected not to see the backup phrase is that I do not plan on having any secrets on this online node; I’m only going to use it for looking at balances and sending transactions that have already been signed elsewhere.

The next step is to use the goal command to create a new 3-of-5 multisig account using the keys we generated in the previous step.  This adds the multisig account to the wallet and lets the wallet know what the constituent parts of the multisig account are. The private keys for the account will not be in the wallet, though, and the wallet will have no control of or ability to spend funds in the multisig account.  By putting the multisig account in the wallet, we can work with it on this node even though we plan to sign its transactions with our spending keys on the offline machine. If you skip this step, transaction files you create for the multisig account on this node will be invalid, as the node won’t know what the multisig account is or what its component parts are.

 

purestake@algo-node:~$ goal account multisig new OBONCJ4D4WEUYFWRDLZEJOMAN22HWZGZPAEWSPK7S6VOIHDCAFR3ACUSTA P7ZEFUIWTABXLMC77P3DAE5ZMU7BDY3HZ4KF7ZXSPTCYKZ4AOCKGRZTCUE JPPERBQVBGKHMKTVZUOQKSZHVDYMC3AYYD6NHT355HEZHZXW5CLNUIMJT4 GW5J5C2X7L7F2NIWISELS5EQI74Y5W6VDZ2W45NLIYY256EUYLKORY7AJE ANQADWSXUDMOHYYOVAKII3COO3KIBBXXLFF2RPSCFIVXQJZOZ76DKR5YPU -T 3
Please enter the password for wallet 'MyWallet':
Created new account with address FHYPIFKSTJTUT4QCR4MJWZUY2Y4URVFHF2HIXGZ4OHEZSIEQ5Q64IIGEMY
purestake@algo-node:~$ goal account list
[offline]       Unnamed-0 FHYPIFKSTJTUT4QCR4MJWZUY2Y4URVFHF2HIXGZ4OHEZSIEQ5Q64IIGEMY      0 microAlgos [3/5 multisig] *Default
purestake@algo-node:~$

 

Note that the 5 public keys we created on the offline computer are listed here as arguments to the goal account multisig new command.  Be very careful, as the order of the keys matters. Changing the order of the public keys results in a different multisig address. Recall that we numbered the keys 1-5: always list them in that order, so you will get consistent results.

The “T” flag specifies the threshold, or how many of the associated spending keys in the multisig account need to sign transactions.  In this case we specify 3, making this a 3-of-5 multisig account. The resulting address of the multisig account is FHYPIFKSTJTUT4QCR4MJWZUY2Y4URVFHF2HIXGZ4OHEZSIEQ5Q64IIGEMY.  The “goal account list” command confirms that this is a 3/5 multisig account with 0 Algo in it.
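The same derivation can be reproduced with py-algorand-sdk, which makes it easy to see that the address is a function of the ordered key list and the threshold. This sketch should print the same FHYP… address as goal did, and reordering the list produces a different one:

# Sketch: deriving the multisig address from the ordered keys and threshold.
from algosdk.transaction import Multisig

addresses = [
    "OBONCJ4D4WEUYFWRDLZEJOMAN22HWZGZPAEWSPK7S6VOIHDCAFR3ACUSTA",
    "P7ZEFUIWTABXLMC77P3DAE5ZMU7BDY3HZ4KF7ZXSPTCYKZ4AOCKGRZTCUE",
    "JPPERBQVBGKHMKTVZUOQKSZHVDYMC3AYYD6NHT355HEZHZXW5CLNUIMJT4",
    "GW5J5C2X7L7F2NIWISELS5EQI74Y5W6VDZ2W45NLIYY256EUYLKORY7AJE",
    "ANQADWSXUDMOHYYOVAKII3COO3KIBBXXLFF2RPSCFIVXQJZOZ76DKR5YPU",
]
msig = Multisig(version=1, threshold=3, addresses=addresses)
print(msig.address())  # changing the order of the keys changes this address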

Now you have successfully created a 3-of-5 multisig account (albeit with no balance). Next week, I will publish a follow-up tutorial that demonstrates how to sign a transaction with your newly-created multisignature keys, enabling you to spend funds, bid on auctions, and more.