11 Days Validating on Kusama: First Impressions & Emerging Power Dynamics

It has been 11 days since we joined the active validator set for Kusama, and I wanted to share some initial thoughts on the experience in case this is helpful to other validators, nominators, or other participants thinking about engaging with Kusama or Polkadot.

The first impression to convey is that interest in Kusama and Polkadot is high. Currently there are 140 validators that have signaled their intent to validate. Since Kusama switched from PoA to PoS, moving block production from a limited set of Web3 Foundation-run validators to a decentralized validator set, the number of validator slots has been incrementally raised from 20 to 50, then 60, then 75, and currently stands at 100. At no time have there been any empty slots, and there are currently 40 validators waiting for an opportunity to validate.

This stands in contrast to many other projects that have struggled to recruit enough competent validators to launch their networks. This is a really good sign for Polkadot as they near their MainNet launch.

Parity Team Actively Addresses Bumps in the Road

The process of launching Kusama has flushed out issues, and continues to do so.

There have been several point releases (0.6.7, 0.6.8, 0.6.9), including one, 0.6.8, that introduced an issue leading to database corruption for some validators. There are performance issues actively being worked on now, which will undoubtedly lead to more releases. Some validators have been slashed or removed from the active set, either due to issues with the software or a failure to run their nodes properly.

However, the number of issues has been relatively small. In each case, the Parity team has been very responsive in diagnosing and fixing issues. All things considered, for a system as complex as Kusama, this has been a very smooth launch process.

Two Primary Types of Validators

The validators in the active set are ones that meet the minimum effective staked KSM levels needed to be in the top 100.

Many of these appear to be representing DOT holders who could claim KSM based on their DOT holdings and thus have large bonded amounts. I’m inferring this from the fact that transfers are not yet enabled, so large positions would have to come from claimed KSM.

The other set of validators are those that are not existing DOT holders and received KSM grants from W3F to be able to validate. The grants were 10 KSM, so I’m assuming that many validators with bond amounts less than 10 KSM are likely in this bucket. Many of these validators have received nominations, presumably from W3F and possibly others, to get into the active set.

There hasn’t been enough time for validators to really start to market themselves to try to attract nominations. This will presumably start to happen as the Kusama launch process continues to unfold, transfers are enabled, and the validator limit is potentially raised further.


NPoS Validator Strategies

While it is too early to tell what strategies validators will use to go to market, there is one notable strategy that has emerged: the “sprawl” strategy.

Cryptium Labs is currently running 19 of the 100 validators in the active set. This is far more than anyone else on the network at this time. In NPoS (Nominated Proof of Stake) this is not only allowed, but perhaps expected. Given the fact that validators are compensated a flat fee for their service, running as many validators as can get into the active set is an economically rational validator strategy.

However, for some, the realization that large players could occupy many of the available slots was disheartening. Here is an exchange from the Riot rooms (where most of the discussions are happening) that illustrates the sentiment:

Fredy from DragonStake started with:

@derfredy:matrix.org

I wonder how the core ( Gav Bill | W3F federico … ) feel about the current adrianbrink | Cryptium Labs sybil attack.

Once we enable the TXs, the whole slots table could be filled with just 2 or 3 independent validators teams. Any concern?

Adrian from Cryptium Labs was quick to respond:

adrianbrink | Cryptium Labs

[snip…] public blockchains need to be designed so that they are secure against rational actors. Security based on altruism isn’t going to last long.
Btw, I’m not suggesting that Polkadot consensus is insecure. Maybe there needs to be more education about it though

And finally from Gavin:

Gav

i think it’s reasonable for w3f to use its KSM to keep the validator community pluralistic.

[snip…]

w3f has its funds;
w3f should act in whatever way it feels is best for the network;
having an active validator community with well-dispersed knowledge is good for kusama;
w3f should use some of its funds to help keep lesser staked validators of high reputation engaged.

This short exchange cuts right to the heart of some of the interesting ideas and dynamics around NPoS and how it will play out. Some good questions that this exchange raises: Will the NPoS design ultimately favor a smaller set of larger validators occupying multiple slots, or will it drive greater validator diversity? Is there such a thing as economically rational behavior that conforms to the protocol, but that nevertheless should be sanctioned or discouraged by social norms and convention?

NPoS is meant to be an improvement over standard DPoS in networks like Cosmos or Tezos. Its design does appear intended to discourage or prevent the concentration of stake behind any one validator, as that would lead to lower staking returns for rationally-motivated nominators.

It is also meaningfully different from standard DPoS (Delegated Proof of Stake) because it has a separation between political power and validator services. This could guard against scenarios where, for example, validators are run for free to gather political power, as appears to be the case with the largest Cosmos validator. Many feel this leads to a weaker validator set, as it becomes difficult to fund legitimate validator businesses.

But if validator power can still be expressed in NPoS by allowing organizations or entities to run more than one validator, or perhaps dozens, it seems that some of the decentralization benefit of NPoS may not be as great as many believed it would be.

I sympathize with Fredy from DragonStake’s point of view that the health of the network is greater with a more diverse set of validators, and that smaller validators shouldn’t have to rely on the goodwill of the W3F to have a shot at making it into the active set. And while W3F’s commitment to validator diversity is admirable, I also agree with Adrian from Cryptium Labs that what happens on these platforms is largely determined by the actions of rational economic actors playing by the rules codified in the protocol. Even if you have a set of social norms you try to enforce in your community, the permissionless nature of these systems means that someone can always come along, ignore your community and its norms, and do anything the protocol allows.

It is always hard to predict how these systems will play out. But it seems likely that larger and better-established validator companies will pursue a Cryptium-style strategy on Polkadot. We may not see this yet, as they don’t want to tip their hand or take on the infrastructure expense on Kusama, where there is no opportunity for profit. It will be interesting to see whether, in the end, there is more or less validator diversity in Polkadot with NPoS versus Cosmos, Tezos, and other networks employing the simpler DPoS mechanism.

Decentralized Networks Are Magic

In the end, what has happened over the last 11 days demonstrates the magic of these new decentralized platforms. Something on the order of 100 organizations and people, with different backgrounds, locations, and skills, all came to the table with complicated setups of software and infrastructure to help launch and support a network.

This network is something larger than any one participant could have created and wouldn’t be possible without the contributions of all of the participants. I can think of no better example of the power of platforms like Polkadot to organize people and activity in ways that weren’t previously possible, to allow anyone to join, to compensate the participants for their contributions, and to create something emergent and higher order as a result.

Keep an eye out as transfers will be enabled on Kusama soon, and I expect further shifts of stake and in the active validator set once that happens. Also feel free to leave me feedback on Riot if you agree (or disagree) with anything in this post: @mechanicalwatch:matrix.org.

Interested in staking with PureStake on Kusama? Nominate this address:

GhoRyTGK583sJec8aSiyyJCsP2PQXJ2RK7iPGUjLtuX8XCn


Demystifying Algorand Rewards Distribution: A Look at How & When Algorand Token Rewards Are Calculated

The Algorand Proof of Stake network issues rewards to its token-holders in order to stimulate and grow the network — but it’s not always clear how these returns are calculated. In this article, I’ll outline the basic framework of the Algorand rewards distribution model, and walk through some potential impacts it could have on the rewards calculations for current token-holders.

What are Network Rewards?

Traditional Proof of Work (PoW) blockchains (BTC, ETH) rely on miners to validate transactions and record them in consecutive blocks on the ledger. The miners compete against each other by solving complex mathematical problems, and the miner with the winning block receives a certain amount of tokens as a reward for their work.

This basic economic principle provides the incentive to support the required infrastructure to run the network. One of the major flaws of PoW networks is the unproductive effort put into competing for mining blocks, resulting in relatively slow performance and huge amounts of energy wasted.

The Algorand blockchain, like many newer blockchain networks, is solving the inherent issues of PoW with the concept of Proof of Stake. Unlike PoW miners, PoS networks rely on validators to verify transactions written to each block on the chain.

Algorand chooses its validators using a process called sortition with a selection preference towards validators holding the largest amount of tokens, referred to as stake. Just like miners, the validators receive rewards for their work, but unlike PoW, no resources are wasted on unnecessary, competing work.

The rewards are typically a percentage yield on the amount of stake a validator is holding. That way, a validator’s economic incentive to support the network is directly linked to its stake: the higher your stake, the more interest you have in contributing to the health of the network, and the more rewards you earn.

How are Rewards Distributed on Algorand?

Algorand currently employs a liberal rewards scheme that benefits every token holder, whether or not their tokens are staked and participating in the consensus protocol. The idea is to stimulate the adoption and growth of the network by rewarding all token holders equally. To support the operation of the network during this initial period, Algorand has issued token grants to early backers for running network nodes, helping to bootstrap a scalable and reliable initial infrastructure backbone.

As the network grows over time, I anticipate that the rewards distribution will shift to favor active stake-holders and validators.

Algorand Rewards Distribution Schedule

The current rewards distribution is determined and funded by the Algorand Foundation. You can read a detailed explanation of the overall token dynamics here.

The first 6M blocks on the Algorand blockchain have been divided into 12 reward periods of 500,000 blocks each. Each period is funded by an increasing amount of reward tokens to offset the increasing total supply of tokens. The token supply periodically increases due to early backer grant vesting and token auctions facilitated by the foundation.

 

Period | Start Date (est.) | Starting Block | Ending Block | Rewards Pool (Algo) | Block Reward (Algo)
1  | 6/10/2019  | 1         | 500,000   | 10,000,000 | 20
2  | 7/7/2019   | 500,001   | 1,000,000 | 13,000,000 | 26
3  | 7/31/2019  | 1,000,001 | 1,500,000 | 16,000,000 | 32
4  | 8/25/2019  | 1,500,001 | 2,000,000 | 19,000,000 | 38
5  | 9/20/2019  | 2,000,001 | 2,500,000 | 22,000,000 | 44
6  | 10/15/2019 | 2,500,001 | 3,000,000 | 25,000,000 | 50
7  | 11/10/2019 | 3,000,001 | 3,500,000 | 28,000,000 | 56
8  | 12/5/2019  | 3,500,001 | 4,000,000 | 31,000,000 | 62
9  | 12/31/2019 | 4,000,001 | 4,500,000 | 34,000,000 | 68
10 | 1/25/2020  | 4,500,001 | 5,000,000 | 36,000,000 | 72
11 | 2/20/2020  | 5,000,001 | 5,500,000 | 38,000,000 | 76
12 | 3/16/2020  | 5,500,001 | 6,000,000 | 38,000,000 | 76

Algorand Network Rewards Schedule

Distribution Mechanics

At the time of writing, we have just entered the sixth period of the current rewards distribution schedule (period 6 in the table above). According to the schedule, 50 Algo are distributed as network rewards in each block, approximately every 4.4 seconds.

Algorand rewards are calculated each block based on the account balance of every to-or-from address recorded on the blockchain. The minimum account balance that is eligible for receiving rewards is currently 1 Algo.

If you monitor any eligible account, you will notice that the account balance doesn’t get the rewards applied at every block, but after a certain number of blocks instead. This is a function of the smallest unit of Algo that can be disbursed by the system, which is 1 MicroAlgo (10^-6 Algo).

Since the minimum balance is currently 1 Algo and the smallest reward is 1 MicroAlgo, you can calculate the number of rounds it takes for a reward to be credited using this formula:

rounds = 10^-6 / (block reward / total supply) = (10^-6 × total supply) / block reward

With a supply of 2,103,868,588.605500 Algo at Block 2550212 and a block reward of 50 Algo, a reward of 1 microAlgo is disbursed for each Algo held every 42 rounds, or roughly every 3 minutes. This will obviously change as the total supply grows.
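
To make the arithmetic concrete, here is a quick Python sketch of this calculation, using the supply and block reward figures quoted above:

# Rounds between 1-microAlgo reward credits, per Algo held
total_supply = 2_103_868_588.605500   # Algo supply at Block 2550212
block_reward = 50                     # Algo per block in period 6

reward_per_algo_per_round = block_reward / total_supply
rounds = 1e-6 / reward_per_algo_per_round
print(round(rounds))                  # ~42 rounds
print(round(rounds) * 4.4 / 60)       # ~3 minutes at ~4.4s per round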

Rewards APR vs APY

When you check the account balance of any address, the node API will return the balance that was last recorded on-chain plus any accrued rewards. The combined balance will not be written to the blockchain until the address appears in an actual transaction. In other words, rewards are only committed to the account balance on the blockchain when the address appears in a to-or-from address as part of a transaction. This is expected behavior, as ongoing, immediate on-chain updates of the balances of all eligible accounts on every round would pose a serious performance challenge.
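
You can observe this behavior by querying any account. Here is a minimal sketch using the Python algosdk client; the node address, API token, and account are placeholders, and the exact field names may differ across node API versions:

from algosdk.v2client import algod

client = algod.AlgodClient("your-api-token", "http://localhost:8080")
info = client.account_info("SOME_ACCOUNT_ADDRESS")

# The node reports the last committed balance plus accrued rewards;
# the accrued portion has not yet been written to the blockchain.
print(info["amount"])                          # balance including accrued rewards
print(info["amount-without-pending-rewards"])  # last on-chain balance
print(info["pending-rewards"])                 # accrued, uncommitted rewards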

There is a small but important consequence of this behavior. As pointed out earlier, rewards are calculated based on the recorded on-chain balance for each account, which means that none of the accrued rewards are included in this ongoing calculation. In essence, accrued rewards do not earn compound interest. For accounts with large balances, the difference in reward returns can be significant.

The current daily rewards percentage return, based on the assumption of 1 microAlgo per Algo held every 3 minutes, is:

(24 × 60 / 3) × 10^-6 × 100% = 0.048%

At the time of this post, an account holding 1M Algo would generate about 480 Algo per day, or roughly 175,200 Algo per year, in rewards, for an effective APR of 17.52%. If the rewards were compounded on a daily basis, the effective APY would increase to 19.15%. These numbers will change over time as the rewards schedule changes, the total supply grows, and transaction fees potentially increase.
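
These figures are easy to verify with a few lines of Python (a sketch; small differences from the quoted APY come from rounding the daily rate):

daily_rate = 0.00048                 # 0.048% per day, derived above

apr = daily_rate * 365               # simple interest, no compounding
apy = (1 + daily_rate) ** 365 - 1    # compounded once per day
print(f"APR: {apr:.2%}")             # ~17.52%
print(f"APY: {apy:.2%}")             # ~19.1%

print(1_000_000 * daily_rate)        # ~480 Algo per day on 1M Algo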

Rewards Compounding

Compounding rewards is simple. Since rewards are calculated from the last recorded balance on the blockchain, the easiest way to force rewards compounding is to send a zero Algo payment transaction to the target address on a frequent, recurring basis. This transaction will trigger the commit of all accrued rewards and record them to the on-chain balance of the account.
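
As an illustration, here is a rough sketch of such a zero-Algo self-payment using the Python algosdk library. The node address, token, and account details are placeholders, and the module layout has changed across SDK versions, so treat this as a sketch rather than a drop-in script:

from algosdk.v2client import algod
from algosdk import transaction

client = algod.AlgodClient("your-api-token", "http://localhost:8080")

address = "TARGET_ACCOUNT_ADDRESS"   # account whose rewards we want to commit
private_key = "..."                  # spending key; handle with care

# A 0-Algo payment from the account to itself forces accrued
# rewards to be committed to the on-chain balance.
params = client.suggested_params()
txn = transaction.PaymentTxn(sender=address, sp=params, receiver=address, amt=0)
client.send_transaction(txn.sign(private_key))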

So what is the ideal compounding frequency if you want to maximize your rewards?

For one, it doesn’t make sense to send compounding transactions more frequently than the number of rounds it takes for rewards to be disbursed, currently about every 3 minutes. Secondly, every transaction has an associated fee, currently 1,000 microAlgo, so the cost of these transactions shouldn’t outweigh the gains achieved by compounding.

Given that:

Rewards APR: 17.52%
Transaction fee: 0.001 Algo

Net Compounding Rewards Interest at Specific Account Balances

Account balance (Algo):       50,000,000 | 5,000,000 | 50,000
Annual simple rewards (Algo): 8,760,000  | 876,000   | 8,760

Trx Frequency | Trx/Year | Trx Charge | Net Comp Rewards (APY) at 50M | at 5M | at 50K
10 min  | 52,560 | 52.560 | 9,574,154.528546 (19.15%) | 957,368.148855 (19.15%) | 9,521.647089 (19.04%)
Hour    | 8,760  | 8.760  | 9,574,111.351504 (19.15%) | 957,403.251150 (19.15%) | 9,565.360112 (19.13%)
2 x Day | 730    | 0.730  | 9,572,971.479139 (19.15%) | 957,296.490914 (19.15%) | 9,572.242209 (19.14%)
Day     | 365    | 0.365  | 9,571,719.996054 (19.14%) | 957,171.671105 (19.14%) | 9,571.355361 (19.14%)
Week    | 52     | 0.052  | 9,556,683.398054 (19.11%) | 955,668.293005 (19.11%) | 9,556.631450 (19.11%)
Month   | 12     | 0.012  | 9,498,812.777423 (19.00%) | 949,881.266942 (19.00%) | 9,498.800789 (19.00%)

The table above shows the compounding frequency that maximizes return for each account balance under the current conditions; the optimal frequency is higher for larger balances. It also illustrates that there are diminishing returns at higher frequencies and that, in most cases, a daily compounding transaction will suffice to capture the core benefit of compounding rewards.
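
The table can be approximated with a short Python sketch. This is a simplified model that compounds the APR n times per year and subtracts the flat fee for each transaction, so the figures differ slightly from the table above:

APR = 0.1752   # current simple rewards rate
FEE = 0.001    # Algo per transaction

frequencies = {   # compounding transactions per year
    "10 min": 52_560, "hour": 8_760, "2 x day": 730,
    "day": 365, "week": 52, "month": 12,
}

for balance in (50_000_000, 5_000_000, 50_000):
    for label, n in frequencies.items():
        net = balance * ((1 + APR / n) ** n - 1) - FEE * n
        print(f"{balance:>12,} {label:>7}: {net:,.2f} ({net / balance:.2%} APY)")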

Conclusion

Clearly some frequency of rewards compounding always makes sense for long-held account balances (e.g. staked balances). Since rewards are currently equally distributed across all accounts whether they are staked or not, the returns represent more of an inflationary rate across the entire supply rather than an interest return for node operators or staked balances.

It is also important to note that the compounding gains are significantly amplified at this time by the already-high rewards interest and the low transaction fees. You will need to evaluate what compounding frequency makes sense for you, based on the changing parameters and the account balances you hold.

PureStake automatically compounds all customer accounts that are staking with us. Please reach out to us if you would like to learn more about this and our services.

Crypto Infrastructure and DevOps Best Practices

DevOps Practices for Crypto Infrastructure, Part I: Version Control, Full Stack Automation, and Secrets Management

When standing up services that will have cryptographic interactions with a blockchain, the DevOps infrastructure and practices you employ will dictate a lot about the security and reliability of those services. In this two-part series of posts, I will introduce core DevOps principles that will help guide crypto infrastructure creation. I’ll also share aspects of DevOps infrastructure that have worked well for me and could be helpful to other teams looking to stand up crypto as-a-service offerings.

Cloud vs Roll-Your-Own

For many infrastructure elements, you must choose whether to go with a cloud provider such as AWS, Azure, or Google, or to roll-your-own in a colocated data center with self-managed software. In crypto and blockchain, there are some specific requirements, particularly relating to key security, which may factor into requirements around hardware security modules (HSMs), physical servers, and tiers of colocated data centers (more on this later).

But in general, all other things being equal, if one of the three major cloud providers has an as-a-service offering for something, there are a lot of reasons to choose it over rolling your own with purchased hardware and self-managed software.

In my experience, it is very easy to underestimate the investment and labor required to self-manage infrastructure elements in a high-quality way over time. Especially when the software is open source, the temptation is always to just pull down the software and start running it. Focus tends to be on the cost of the hardware instead of the cost of the cloud service. The DevOps staff that is required to manage, upgrade, performance tune, patch and evolve this infrastructure over time is almost always underestimated by startup teams and becomes baggage as the team and the company grows. For any piece of infrastructure, you really have to ask yourself if this is the best use of your team’s time.

In most cases, you will want to focus your energy on things that only you can do, and purchase services where possible from a reputable cloud provider. The three major cloud providers (AWS, Azure, Google) all have large and highly specialized teams surrounding each of their as-a-service offerings. For smaller companies, there is no way you are going to do a better job with management and security than these cloud provider teams for base/commodity offerings.

My take: go with a cloud provider, or (better yet) more than one cloud provider, so you can focus on building and running the things that you can’t purchase as a service and that are unique to your offering.


Version Control

In recent years, the idea of infrastructure-as-code has become a leading principle in DevOps. This is part of a larger evolution of DevOps that continues to shift the discipline towards looking more and more like a software development practice. A core part of any software development practice is storing all your software artifacts in a version control repository. Artifacts can include source code, configuration files, data files, and in general any of the inputs needed to build your software and your infrastructure environment. It seems like a given, but I have seen operational environments where not all of the artifacts necessary to build the environments were stored in source control.

The benefit of storing everything under version control is that you have a unique version for any given state of the artifacts used to build your environments. This allows for repeatable builds of environments, the implementation of processes around changes to these artifacts, and the ability to roll back to any previous known-good state in case there are issues. High-quality and cost-effective cloud-based services such as GitHub make this an easy choice to serve as a foundation for DevOps activity.

Full Stack Automation

One of the best things about using the cloud for your infrastructure is the programmability and APIs that the cloud vendors provide. These APIs can be used to automate the entire application stack from base layer network, DNS, storage, compute, up to operating systems and serverless functions, and all the way through to the custom code in your application. Taking an infrastructure-as-code approach means having software artifacts in your source code repository and a build process that can create an entire application environment in a fully automated way. This automation can be used to drive the initial build and incremental change to development, test, and production environments.

There are good tooling options these days to achieve this kind of infrastructure automation. At the base infrastructure level, there are solutions native to cloud provider environments, such as AWS CloudFormation or Google Cloud Deployment Manager. We are fans of Terraform, as it allows for the management of infrastructure in AWS, Azure, and Google from the same codebase with provider-specific modules and extensions. Once the base-level infrastructure has been provisioned, Packer images combined with configuration management tools like Ansible, Chef, or Puppet can be used to configure host-based services.

There are a lot of benefits to be had from automating the full application stack. Automation eliminates the chance of manual errors and allows for a repeatable process. It can also drive the same stack into dev, test, and prod, minimizing the chance that environmental differences lead to surprises. Automation can also be used to support blue/green production deploys, in which an entire new environment is built with updated code and traffic is then cut over from the existing to the new environment in a controlled fashion. In addition, it is easy to roll back in this model if there is a problem with the new environment.

Full stack automation also lends itself to the switch from thinking about servers as unique elements with individual character to managing servers as interchangeable elements. It becomes a straightforward proposition to rip and replace troublesome infrastructure and to use tightly-focused servers rather than sprawling snowflakes that acquire dozens of responsibilities and take on a life of their own.

Secrets Management

When you have an automated environment it is very important that the secrets that are part of your application are managed carefully. Secrets could include service passwords, API tokens, database passwords, and cryptographic keys. The management of crypto keys is particularly critical for crypto infrastructure where private keys are present, such as exchange infrastructure and validators on proof of stake networks. Read my recent blog to learn more about crypto key management using multisig accounts and offline keys.

However, a lot of the same principles apply to infrastructure, application, and crypto secrets. You want to make sure that these secrets are not in your source code repo, but rather that they are obtained at build or, better yet, at runtime in the different environments in which your application is running.

Software and platform-native tools that help protect secrets in production environments include AWS KMS/CloudHSM and Azure Key Vault, or Hashicorp Vault if you are looking for something cross-platform. Some very sensitive secrets, such as crypto private keys, can benefit from hardware key management systems such as YubiHSM2 or Azure Dedicated HSM (based on Safenet Luna hardware). The downside is that hardware solutions are generally less cloud-friendly than software ones and, while they may improve key security, some aspects of security are worsened by taking a hardware approach over a more automatable, cloud-native software approach. The infrastructure costs and the surface area that needs to be managed can also be far higher when taking a hardware-centric approach.
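
As a simple illustration of obtaining a secret at runtime, here is a sketch using Hashicorp Vault’s Python client, hvac. The Vault address, authentication method, and secret path are placeholders; real deployments would use a hardened auth method rather than a raw token:

import os
import hvac

# Address and token come from the environment, never from the source repo.
client = hvac.Client(
    url=os.environ["VAULT_ADDR"],
    token=os.environ["VAULT_TOKEN"],
)

# Read a secret from the KV v2 engine at a hypothetical path.
secret = client.secrets.kv.v2.read_secret_version(path="myapp/database")
db_password = secret["data"]["data"]["password"]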

Intel SGX is a promising hardware technology that allows processes to run in secure enclaves. A process running in a secure enclave is totally isolated from the host operating system: even with root privileges on the guest operating system, you cannot read the memory of a process running in an SGX enclave. I am excited by the use of SGX enclaves, combined with tools like Hashicorp Vault, to improve the security of software and cloud-native secrets management. SGX is available today via Azure Trusted Compute, but has the downside of requiring coding to the SGX APIs. We also eagerly await further developments of the AWS Nitro architecture, the AWS approach to hardware-supported isolation of customer workloads on shared infrastructure, which we believe will push in the same direction.

Topics to Cover in Part II

There are many aspects to consider when thinking about secure and reliable infrastructure for crypto-based applications. We’ve only touched on a handful of areas in this article. Here are some additional areas I cover in part II:

  • Authentication
  • Authorization and Roles
  • Networking
  • Monitoring
  • Logging

Looking for further information about infrastructure for crypto-based applications? Contact us today.


Participation Keys in Algorand

What Are Algorand Participation Keys?

In Algorand, there are 2 types of nodes: relay nodes and participation nodes. Relay nodes serve as network hubs in Algorand, relaying protocol messages very quickly and efficiently between participation nodes. Participation nodes support the consensus mechanism in Algorand by proposing and validating new blocks. Participation keys live on participation nodes and are used to sign consensus protocol messages.

A participation key in Algorand is distinct and totally separate from a spending key. When you have an account in Algorand, there is an associated spending key (or multiple keys in the case of a multi-sig account). The spending key is needed to spend funds in the account. A participation key, on the other hand, is associated with an account and is used to bring stake online on the network. Importantly, participation keys cannot be used to spend funds in the associated account; they can only be used to help support the consensus protocol.

Participation Keys Are Good

Having distinct keys for spending the Algo in an account, and staking the Algo in an account, results in several key security improvements.

In any crypto network, protecting the spending keys is of the utmost importance. Situations that require having spending keys on an internet connected computer are inherently dangerous and always contain the risk of loss of funds.

In Algorand, the spending key never has to be online. The spending key can be kept on an airgapped computer or other offline setup and only used for signing transactions offline. The participation key, in contrast, lives on the participation node and signs protocol messages, but the participation key cannot spend any funds in the account.

This separation of duties between two different keys substantially improves the security of Algorand infrastructure. Spending keys can always be kept totally offline, and an attacker who compromises an internet-connected participation node cannot spend or steal any of the funds in the associated account.

Of course, this doesn’t mean that participation keys shouldn’t be highly protected and secured. If an attacker does compromise a participation key, they can stand up a second participation node with the same participation key. This will result in protocol messages being double-signed, which the network will see as malicious behavior and will treat the node / associated stake as offline.

There is no bonding or slashing in Algorand, and staking rewards are still coming in the future, but regardless: being forced offline due to double signing is undesirable and means that the stake in question will no longer be supporting the consensus mechanism.

Participation Key Mechanics

My examples assume Algorand Node v1 software is installed and running in a participation node configuration on the Algorand MainNet. The software is installed using the Debian package on Ubuntu 18.04, with a standard non-multi-sig Algorand account with some Algo in it, and a separate offline computer with the spending key for the account.

To create a participation key, you will need to use the “goal account addpartkey” command, specifying the account that you want to create the part key for and a validity range:

goal account addpartkey -a WHNXGKYOVIQADYS4VTYBG6SGWFIG6235C5LMXM76J3LHE475QJLIHUC5KY --roundFirstValid 789014 --roundLastValid 4283414

A few things to note. The account specified with the -a flag in the command above (WHNXGKYOVIQADYS4VTYBG6SGWFIG6235C5LMXM76J3LHE475QJLIHUC5KY) is made up, and you would need to replace it with your own account. Do not use this account: it, and the associated spending key, are not real, and any funds sent to this address will be permanently lost.

The validity range is specified in rounds. Rounds are equivalent to blocks in Algorand. So if you, for example, want to have a key that is valid from now until a point in the future, you need to find the current block height for the roundFirstValid and a future block height for the roundLastValid flag corresponding to the validity range you want.

To find the current block height you can use the “goal node status” command:

derek@algo-node:~$ goal node status
Last committed block: 789014
Time since last block: 2.4s
Sync Time: 0.0s
Last consensus protocol: https://github.com/algorandfoundation/specs/tree/5615adc36bad610c7f165fa2967f4ecfa75125f0
Next consensus protocol: https://github.com/algorandfoundation/specs/tree/5615adc36bad610c7f165fa2967f4ecfa75125f0
Round for next consensus protocol: 789015
Next consensus protocol supported: true
Genesis ID: mainnet-v1.0
Genesis hash: wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=

The last committed block, which is the same as the current block height, is reported as 789014, so we use that for our roundFirstValid. Figuring out the right value for the roundLastValid is a little more involved.

First, you have to determine what time range you want. It is a good practice to rotate participation keys and not to create a key with a really long validity range. In our example, we will use a time range of 6 months. What round corresponds to 6 months from now?

To figure that out, we have to do a little math. 6 months is approximately 182 days. So 182 days x 24 hours/day x 60 min/hour x 60 sec/min = 15,724,800 seconds. At the time of writing, each round in Algorand takes about 4.5 seconds. So 15,724,800 seconds / 4.5 seconds per block = 3,494,400 blocks. Now we add 3,494,400 to the current block height to get the height 6 months from now: 3,494,400 + 789,014 = 4,283,414. This is where the 4283414 in the command above comes from for the roundLastValid.

As the network grows, the 4.5-second block time may not remain a safe assumption, which may make the actual validity range somewhat different from 6 months. You need to monitor key validity and make sure to put a new key in place before the old one expires.
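
This arithmetic is easy to script. Here is a small Python helper; the block time and current round are the values from this example and should be replaced with live values:

SECONDS_PER_BLOCK = 4.5   # approximate block time at the time of writing

def round_last_valid(current_round, days):
    # Estimate the round height `days` from now.
    seconds = days * 24 * 60 * 60
    return current_round + int(seconds / SECONDS_PER_BLOCK)

print(round_last_valid(789_014, 182))   # -> 4283414, as in the example above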

Once the addpartkey command has executed, you can find the participation key at:

/var/lib/algorand/mainnet-v1.0/WHNXGKYOVIQADYS4VTYBG6SGWFIG6235C5LMXM76J3LHE475QJLA.789014.4283414.partkey

It’s beyond the scope of this article, but this file is actually a sqlite database containing a number of keys that are rotated through automatically during the validity window. This is an additional security measure in Algorand: the keys used to sign protocol messages are rotated as rounds progress.

With the participation key created, the next step is to bring the account online. An account being online in Algorand means that the Algo in the account is supporting the consensus mechanism. We bring an account online using the “goal account changeonlinestatus” command. Note that this action requires a small amount of Algo in the account to pay for the transaction. If you have the spending key for the account directly on the participation node, you can simply run this command:

goal account changeonlinestatus -a WHNXGKYOVIQADYS4VTYBG6SGWFIG6235C5LMXM76J3LHE475QJLA -o=1

However, having the spending key on the participation node is not recommended and kind of defeats the whole purpose of having participation keys in the first place. It is much better to have an airgapped and totally offline computer that has the spending key on it. The process is a little more involved with this setup, but it is much more secure. With this setup you would issue the following command instead:

goal account changeonlinestatus -a WHNXGKYOVIQADYS4VTYBG6SGWFIG6235C5LMXM76J3LHE475QJLA -o=1 -t online.tx

This will produce a transaction file called online.tx in the current directory which has an unsigned transaction to bring the account online. This transaction file then needs to be securely moved to the airgapped computer with the spending key on it. Once on the airgapped computer you can use the algokey utility to sign the transaction file. The command would be:

algokey sign -k spendingkeyfile -t online.tx -o online.tx.signed

Note that algokey is standalone and does not need a running Algorand node. Also, the spendingkeyfile is the file that has the spending key for the account. This file can be created by algokey when you first set up your account.

There is also an option to specify the spending key mnemonic instead of a file, but I find this option worse as it leaves the mnemonic in the shell history, etc. The result of this command is that online.tx.signed will be created in the current directory. This file contains the signed online transaction and it needs to be securely moved back to the running participation node.

Once you have online.tx.signed back on the participation node you can send it to the network with the following command:

goal clerk rawsend -f online.tx.signed

Wait a little bit for the transaction to be processed, and your account should now be online. Creating a transaction file, moving it to the airgapped machine for signing, moving the signed transaction back to the online node, and then sending it to the network is the general pattern for sending transactions in Algorand without ever putting your spending key online.

Final Thoughts on Participation Keys in Algorand

The design of Algorand using separate keys for spending funds and for participating in network consensus improves the security of nodes running on the Algorand network substantially by protecting spending keys and removing the need for them to ever be online. I think this was a good design choice and wouldn’t be surprised if other protocols adopt this approach.

 


Why We Started PureStake

Many of us at PureStake were just starting our careers in the mid-to-late 90s, during the first internet wave. We have spent the 20-plus years since building infrastructure, software, and cloud companies based on the possibilities opened up by the internet. I recall the atmosphere and feeling of those early internet days, and I hadn’t experienced that feeling again until I started getting involved with crypto.

The crypto genie is out of the bottle, and it has unleashed forces which cannot be stopped or contained. We believe that using blockchains to move value in an open, low friction, low-cost way will have as large an impact on all of us as the internet has had in moving information in an open, low friction, low-cost way. We are only at the beginning of a historical shift where crypto networks and applications will disintermediate many existing companies, structures, and practices, replacing them with code.

While the strategic direction of this shift is clear, the particulars of how this shift will play out are harder to call. That said, we have several beliefs that we stand behind:

  1. The future will be a multi-chain future vs one-chain-to-rule-them-all. In this future, bitcoin will continue to have a foundational place in the ecosystem, but there will also be many other blockchains, each of them good at different things.
  2. Public and permissionless blockchains will lead the way in terms of innovation and interesting applications vs private and permissioned ones.
  3. Proof of Stake consensus protocols are a more scalable, more efficient, and ultimately more secure consensus mechanism than traditional Proof of Work protocols.

As decentralized currencies, networks, and applications continue to mature and gain traction, we believe there is a large opportunity to provide infrastructure as a service that supports participation in and development on these decentralized networks.

We are taking all of our experience building and running cloud services and applying it to crypto infrastructure. Given that this infrastructure will be directly handling value, the security and reliability of our services must come first (and features will sometimes have to come second).

We use a software-first approach to solving problems. Treating our infrastructure as code and using software engineering best practices to deliver change to our infrastructure is one example of this. We aim to hide infrastructural complexity from our users and customers. We want to provide them with services that are simple to consume, freeing them to focus on the reasons they want to interact with the blockchain vs the details and mechanics of how to interact with the blockchain.

We will engage closely with a select number of networks that we believe in. We want to focus our energy on fewer vs more networks to be able to go deep on them to understand how they work, their nuances, their APIs, and their infrastructure needs. As we build expertise on specific networks we will be giving back to those networks in the form of services, tools, and information that help the community. Our goal is to provide secure and reliable blockchain infrastructure that participants can depend on and that developers can build upon.

The first network we are focused on is Algorand. Algorand is currently in TestNet and will be launching their MainNet soon.

Why Algorand? We personally know many of the people on the Algorand team. They have extremely talented engineering, research, and business operations teams. We believe in Silvio Micali, Steve Kokinos, and the team they have assembled. We think they can execute on a complicated and difficult roadmap in a way that has historically challenged other projects.

Our experience with the Algorand software and network has been similarly positive. The quality of the code, the security and design innovations, and the rich set of financial primitives have all made a big impression. We believe the performance we have seen on the TestNet, achieved without significant sacrifices to security or decentralization, will move the needle among public blockchains and in blockchain design in general.

We are excited to be one of the companies helping to support the upcoming Algorand MainNet network launch and look forward to engaging with participants and developers in the Algorand community.

Stay tuned for updates on our journey by signing up for our newsletter, or feel free to contact us if you are developing an Algorand application or need help with blockchain infrastructure.