Security Check-up: How to Use Ledger Nano S to Secure Algorand Accounts

Key management can be very stressful for cryptocurrency investors and users who control large amounts of crypto funds. Despite your best efforts, a key that has had any exposure to the internet is easy to compromise, and attackers can hijack a user’s phone to steal their digital assets. So it’s no surprise that hardware wallets have become a popular (and effective) counter-measure against hackers seeking access to your private keys.

Hardware wallets do have some limitations, though. In this article, I’ll briefly review some important points to consider before employing a hardware wallet, and then I’ll provide a step-by-step walkthrough on how to set up a Ledger Nano S in order to better secure a cryptocurrency account (in this case, an Algorand account).

Why You Should (or Shouldn’t) Use a Ledger

Hardware wallets such as the Ledger Nano S offer significant advantages over software-based wallets.

First, there’s the physical security element. By storing private keys for an account in a secure element within the hardware device, it becomes very difficult for an attacker to steal the private key for your account without having physical access to the device and knowledge of the PIN for the device.

By keeping the private key on a hardware wallet, you also reduce the risk of malware and other online attacks from compromising your account spending keys, since the keys never leave the physical device.

HOWEVER, hardware wallets can be finicky and difficult to use.

For example, in Algorand, only the command line tools support the Ledger device. That means you cannot use a Ledger with the Algorand mobile wallet, at least as of right now. Only the Ledger Nano S is supported; the Nano X is not yet supported. Using a Ledger on Algorand means you are limited to apps that specifically have Ledger support.

The Ledger also only supports a single key, so multisig configurations will require multiple Ledger devices, which adds complexity. Ledger support is planned for the Algorand mobile wallet application at some point, but it is not clear when this will happen. Using a Ledger with your Algorand account today requires comfort at the command line.

So, to recap:

Pros of Using a Hardware Wallet to Secure an Algorand Account

  • Difficult to steal the key without physical access to the device
  • Less likely to fall victim to malware & other online attacks

Cons of Using a Hardware Wallet to Secure an Algorand Account

  • Applications must explicitly support wallet hardware; currently, only the command line is supported in Algorand, and only for the Nano S
  • Limited multisig support, requiring a complex multi-device, multi-step process

 

How to Set Up the Ledger Nano S for Use With Algorand

To use a Nano S to secure an Algorand account, you first have to go through the basic setup of the Nano S. For this article, I’m going to assume that you are starting with a fresh Ledger Nano S that will only be used to store ALGOs securely.

To start, you will download the Ledger Live application to your computer. Ledger Live is what you use to manage the applications on your Ledger device. You can download Ledger Live here.

Once you install it and plug your Nano S into your computer, click the “Get Started” button. You should see this screen:

 

Get Started with a Ledger Nano S on Ledger Live Screenshot

 

When initializing a new device, the first step is to choose a PIN code:

 

Choose Your PIN Code in Ledger Live

 

You should follow the steps in the Ledger Live application and on your Ledger Nano S device. There are 2 buttons at the top of the device. Hitting both buttons simultaneously acts as the “Enter” option.

Again, since I’m assuming this is a new device, you will want to elect to configure it as new. This will wipe out any previous configurations on the device.

 

How to Set Up a Ledger Nano S as a New Device

 

The next step is to choose a PIN code.  This is a critical step to preventing someone who steals the device from being able to use it to access your funds.

Follow the prompts on the device and in the Ledger Live app closely, and use the left and right buttons on the device to select a PIN code. It should be at least 6 digits long. Hitting both buttons advances you to the next position. Selecting the check mark indicates that you are done.

 

Choose a PIN Code for Your Ledger Nano S

 

The next step is to write down the recovery phrase for the device. This is critical: if you ever lose the device or it malfunctions, the recovery phrase will let you restore the account to another device. The Ledger Live app walks you through this:

 

Write Down the Recovery Phrase in Ledger Live So You Can Recover Your Ledger Nano S Later

 

You will need to write down all 24 words of the recovery phrase, and you will be tested to make sure you have written down all the words.

 

Confirm the Recovery Phrase of a Ledger Nano S

 

Once you have verified the recovery phrase, the base setup of the Ledger is complete.

Next, you need to use the Ledger Live application to install the Algorand application onto the device. Go to the manager section of the Ledger Live app, search for “algorand” and click to install the Algorand application.

 

Install the Algorand Application in the Manager Section of Ledger Live

 

Once the application is installed on the Ledger, you should see the Algorand app as shown below.

 

Install Algorand on Ledger Live to Use with Your Ledger Nano S

 

Installing the Algorand app on the Ledger created an Algorand account and stored the private key for the account in the secure element of the Ledger device. The private key never leaves the device, but you can see the account address if you go into the Algorand app on the device under “Address”:

 

View a Public Account Address on a Ledger Nano S

 

Using the Ledger From the Algorand Command Line

Now that we have the Ledger configured with the Algorand app, it is ready to use with an Algorand node installation.

In my examples below, I have algod installed from the Debian package, running under Ubuntu 18.04 with a synced blockchain. The Ledger device is plugged into the computer running algod.

The first thing we can do is look to see that the Ledger device has been recognized. We can do this with the “goal wallet list” command:

 

ubuntu@ubuntu:/var/lib/algorand$ sudo goal wallet list -d /var/lib/algorand
##################################################
Wallet:    Ledger Nano S (serial 0001) (default)
ID:    0001:000a:00
##################################################
ubuntu@ubuntu:/var/lib/algorand$ sudo goal account list -d /var/lib/algorand
[offline]    Y2I3YF5AHFBMRUNKKXY6VPOT6QITCQMXSB5RDM2LG2IE74HGLDIROANCNE    Y2I3YF5AHFBMRUNKKXY6VPOT6QITCQMXSB5RDM2LG2IE74HGLDIROANCNE    0 microAlgos
ubuntu@ubuntu:/var/lib/algorand$

 

Note that the Ledger shows up as a wallet on this computer once it is plugged in. I did not create this wallet with the “goal wallet new” command; it was created for me when I plugged in the Ledger device. Issuing the “goal account list” command shows the single account on the device and the balance of that account, which is 0. I also did not create this account with the “goal account” command; it simply came along with the wallet that was automatically created.

When you list the accounts, if you get the error message “Error processing command: Exchange: unexpected status 680.”, it means that you need to unlock the Ledger with your PIN. The command should work after that.

In this example, the Algod node is on the TestNet. In order to try out a transaction, let’s use the TestNet dispenser to give our Ledger account some testAlgo:

 

Issue Algo Using the Algorand Dispenser

 

Using the dispenser, we issue 100 testAlgo to our account. After dispensing the Algo we can verify the balances using the “goal account list” command again:

 

ubuntu@ubuntu:~$ sudo goal account list -d /var/lib/algorand
[offline] Y2I3YF5AHFBMRUNKKXY6VPOT6QITCQMXSB5RDM2LG2IE74HGLDIROANCNE Y2I3YF5AHFBMRUNKKXY6VPOT6QITCQMXSB5RDM2LG2IE74HGLDIROANCNE 100000000 microAlgos
ubuntu@ubuntu:~$

 

Note that the account now has 100,000,000 microAlgos or 100 Algo in it. Now that we have a balance, let’s try sending a transaction from this account. To do this, we will use the “goal clerk send” command to send 1 Algo to another account:

 

ubuntu@ubuntu:~$ sudo goal clerk send -a 1000000 -f Y2I3YF5AHFBMRUNKKXY6VPOT6QITCQMXSB5RDM2LG2IE74HGLDIROANCNE -t OBONCJ4D4WEUYFWRDLZEJOMAN22HWZGZPAEWSPK7S6VOIHDCAFR3ACUSTA --note "" -d /var/lib/algorand

 

Note that the --note option with the empty string is needed: the Ledger does not support values in the note field, and goal will complain if you don’t explicitly set the note field to blank.

Once you issue this command, you will be prompted on the Ledger to sign the transaction. Recall that the private signing key for this account never leaves the secure element of the Ledger, so the signing action happens on the Ledger device:

 

How to Initiate a Transaction on a Ledger Nano S

 

The Ledger device shows a number of details about the transaction, including the sender, firstvalid round, lastvalid round, genesis id, genesis hash, receiver, and amount. You will ultimately be asked if you want to sign the transaction:

 

Sign a Transaction on a Ledger Nano S

 

If you select yes on the device, you will see progress on the algod command line:

 

Sent 1000000 MicroAlgos from account Y2I3YF5AHFBMRUNKKXY6VPOT6QITCQMXSB5RDM2LG2IE74HGLDIROANCNE to address OBONCJ4D4WEUYFWRDLZEJOMAN22HWZGZPAEWSPK7S6VOIHDCAFR3ACUSTA, transaction ID: TSTO3YZJAJJFL433VTMWGPEA6FKAEW34JYP32RAM7DCZV7ITIP6Q. Fee set to 1000
Transaction TSTO3YZJAJJFL433VTMWGPEA6FKAEW34JYP32RAM7DCZV7ITIP6Q still pending as of round 2060606
Transaction TSTO3YZJAJJFL433VTMWGPEA6FKAEW34JYP32RAM7DCZV7ITIP6Q committed in round 2060608
ubuntu@ubuntu:~$

 

Note that if you take too long, the operation can time out on the algod side, requiring you to start over.

Once we have completed the transaction we can view the balances for the account once again:

 

ubuntu@ubuntu:~$ sudo goal account list -d /var/lib/algorand
[offline] Y2I3YF5AHFBMRUNKKXY6VPOT6QITCQMXSB5RDM2LG2IE74HGLDIROANCNE Y2I3YF5AHFBMRUNKKXY6VPOT6QITCQMXSB5RDM2LG2IE74HGLDIROANCNE 98999000 microAlgos
ubuntu@ubuntu:~$

 

You can see that our account that had 100 Algo in it now has 98.999 Algo in it. 1 Algo was sent to OBONCJ4D4WEUYFWRDLZEJOMAN22HWZGZPAEWSPK7S6VOIHDCAFR3ACUSTA, and there was a 1,000 microAlgo transaction fee on top of that, which gets us to the resulting balance.
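If you want to double-check the arithmetic, it is easiest to work in microAlgos. A trivial shell check, purely for illustration:

ubuntu@ubuntu:~$ echo $((100000000 - 1000000 - 1000))
98999000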

 


A Multisig Transaction Example: 5 Steps to Sending Algo Securely with an Algorand Multisig Account

Multisig transactions are a great deal more secure than single-key transactions, and for good reason: you remove a single point of failure and distribute signing responsibility across a greater number of keys. However, this can add a great deal of effort to the transaction process when it comes time to send your funds to another account.

In this article, I’ll walk you through an offline multisig transaction example using an existing Algorand account (follow the steps in this article if you have not already created an account). While this example shows you how to spend funds, the same steps will apply to registering a participation key, bidding on auctions, and (in the future) voting.

To begin, let’s review all the things we have so far:

  • An online computer with a working Algorand node installation.
  • An “Ubuntu” bootable USB drive, which can be used for an offline computer.
  • A “Keys” USB drive with algokey and a file with the keys for our multisig account.
  • A “Transfer” USB drive for transferring files between the online and offline computers.

We will use these components to securely send a transaction from our multisig account while keeping our spending keys totally offline. The process will be:

  1. Create an unsigned transaction on the online computer
  2. Move this transaction file to the offline computer
  3. Sign the transaction on the offline computer
  4. Move the signed transaction back to the online computer
  5. Send it to the network

1. Prep Spend Transaction and Save Out to tx File (Online)

The process starts on the online computer. We will prepare an unsigned transaction file that describes the transaction we want to execute. Our transaction will be to send 1 Algo from the multisig account we created in this post to a destination account with address: 5DJNGUEXNRUKAQODHGO3KS2HXOHN4YMSLSZQGEAH5L3WMMDFKMZEQMURUY.

Open a terminal on the online computer and issue the goal node status command:

purestake@algo-node:~$ goal node status
Last committed block: 1630913
Time since last block: 2.7s
Sync Time: 0.0s
Last consensus protocol: https://github.com/algorandfoundation/specs/tree/5615adc36bad610c7f165fa2967f4ecfa75125f0
Next consensus protocol: https://github.com/algorandfoundation/specs/tree/5615adc36bad610c7f165fa2967f4ecfa75125f0
Round for next consensus protocol: 1630914
Next consensus protocol supported: true
Genesis ID: mainnet-v1.0
Genesis hash: wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=
purestake@algo-node:~$

The goal node status command returns information about the node and its view of the blockchain. Make a note of the “Last committed block” value, which we will need when we construct our transaction file. The reason is that transaction files are only valid for up to 1000 rounds, or blocks, so we need to specify a validity range with the last committed block as the starting value for the range. The goal clerk send command can be used to create the transaction file:

purestake@algo-node:~$ goal clerk send -a 1000000 -f FHYPIFKSTJTUT4QCR4MJWZUY2Y4URVFHF2HIXGZ4OHEZSIEQ5Q64IIGEMY -t 5DJNGUEXNRUKAQODHGO3KS2HXOHN4YMSLSZQGEAH5L3WMMDFKMZEQMURUY --firstvalid 1630913 --lastvalid 1631912 -o transaction.tx
Please enter the password for wallet 'MyWallet':
purestake@algo-node:~$

The goal clerk command above creates a file called transaction.tx in the working directory with an unsigned transaction that will send 1 Algo from FHYPIFKSTJTUT4QCR4MJWZUY2Y4URVFHF2HIXGZ4OHEZSIEQ5Q64IIGEMY to 5DJNGUEXNRUKAQODHGO3KS2HXOHN4YMSLSZQGEAH5L3WMMDFKMZEQMURUY with a validity range of block 1630913 to 1631912.

Note that the amount is specified in microAlgos as 1000000. The Algorand command line tools generally take Algo amounts in microAlgos, or millionths of an Algo. Be very careful when specifying amounts as arguments to these commands: without commas to create visual separation, it is very easy to make a mistake with an extra or missing zero.

I used the previously-recorded block height value of 1630913 as the firstvalid argument. To come up with the lastvalid argument value I added 999 to the firstvalid value. Generated transactions can have a maximum validity of 1000 blocks. Blocks are being finalized every ~4.5 sec currently, so this means that the transaction file will be valid for roughly 75 min. This can make timing tricky, depending on the coordination needed to actually sign the transaction. However, you can specify validity ranges out into the future if you need more time to perform the signing action.
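If you are scripting this step, the validity range can be derived from the node status automatically. Here is a minimal sketch, assuming goal is on the PATH and ALGORAND_DATA points at your node’s data directory (otherwise add -d as in the examples above):

# Derive firstvalid/lastvalid from the current round
FIRST=$(goal node status | awk -F': ' '/Last committed block/ {print $2}')
LAST=$((FIRST + 999))   # maximum validity window is 1000 blocks
echo "firstvalid=$FIRST lastvalid=$LAST"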

Inspect the tx File (Online)

It is always good practice to check the transaction file for correctness before proceeding to subsequent steps. The file is a binary file, so opening it in a text editor is not useful, but we can use the “goal clerk inspect” command to look at its contents. To inspect the file, run the following command:

purestake@algo-node:~$ goal clerk inspect transaction.tx
transaction.tx[0]
{
  "msig": {
    "subsig": [
      {
        "pk": "OBONCJ4D4WEUYFWRDLZEJOMAN22HWZGZPAEWSPK7S6VOIHDCAFR3ACUSTA"
      },
      {
        "pk": "P7ZEFUIWTABXLMC77P3DAE5ZMU7BDY3HZ4KF7ZXSPTCYKZ4AOCKGRZTCUE"
      },
      {
        "pk": "JPPERBQVBGKHMKTVZUOQKSZHVDYMC3AYYD6NHT355HEZHZXW5CLNUIMJT4"
      },
      {
        "pk": "GW5J5C2X7L7F2NIWISELS5EQI74Y5W6VDZ2W45NLIYY256EUYLKORY7AJE"
      },
      {
        "pk": "ANQADWSXUDMOHYYOVAKII3COO3KIBBXXLFF2RPSCFIVXQJZOZ76DKR5YPU"
      }
    ],
    "thr": 3,
    "v": 1
  },
  "txn": {
    "amt": 1000000,
    "fee": 1000,
    "fv": 1630913,
    "gen": "mainnet-v1.0",
    "gh": "wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=",
    "lv": 1631912,
    "note": "y0+1BZ82wxY=",
    "rcv": "5DJNGUEXNRUKAQODHGO3KS2HXOHN4YMSLSZQGEAH5L3WMMDFKMZEQMURUY",
    "snd": "FHYPIFKSTJTUT4QCR4MJWZUY2Y4URVFHF2HIXGZ4OHEZSIEQ5Q64IIGEMY",
    "type": "pay"
  }
}

purestake@algo-node:~$

The first section shows that this is a transaction from a multisig account, and the 5 public keys for the multisig account are present. In the bottom section, you can find details about the transaction that we specified on the command line, such as the amount, firstvalid, lastvalid, the destination address, etc. The fee value is the fee for sending the transaction, which currently defaults to 1000 microAlgos.

2. Copy tx File to the Air-gapped Machine

The transaction file transaction.tx is ready to be signed. But recall that we don’t have the spending keys on this online computer. The spending keys are on a USB device and will be used for signing on the offline computer. The unsigned transaction file cannot be used to send funds without being signed with 3 of the 5 spending keys associated with the multisig account, so it is reasonably safe to copy this file.

Copy transaction.tx to the “Transfer” USB drive.

As a next step, reboot the computer using the Ubuntu USB drive into an offline state and plug in the Transfer and Keys USB drives.

3. Sign tx File on the Air-gapped Machine (Offline)

Once we are booted to the offline Ubuntu desktop, we will perform the signing action for the transactions. We will create a folder on the Ubuntu desktop called tx and copy into it the algokey, the text file containing the keys, and transaction.tx. To sign transaction.tx, open a terminal to the tx folder on the desktop and issue the algokey multisig command to sign the transaction file:

ubuntu@ubuntu:~/Desktop/tx$ ./algokey multisig -t transaction.tx -o transaction1.tx.signed -m "expire wear husband fancy now until laundry token strong dignity arrow valley post raven pudding farm twin chalk cloud tenant cart off shop abandon trophy"
ubuntu@ubuntu:~/Desktop/tx$ ./algokey multisig -t transaction.tx -o transaction2.tx.signed -m "lucky dust hub crew barely leave gas crew canvas exhibit margin mixed impose air wasp chat athlete sketch ozone humble parent rail remind abandon host"
ubuntu@ubuntu:~/Desktop/tx$ ./algokey multisig -t transaction.tx -o transaction3.tx.signed -m "draft mule stamp run absent congress leopard notice minute hungry fresh physical flee favorite cram green salad promote remember route assume gentle early absorb during"

These 3 algokey multisig commands each perform a private key signing action on the provided transaction.tx that we created in a previous step, as outlined in this blog post. The private key is supplied on the command line as a mnemonic, and each invocation creates a different signed transaction output file, transaction1.tx.signed, transaction2.tx.signed, and transaction3.tx.signed.

4. Move tx Files Back From the Air-Gapped Machine

With the signed transaction files in hand, copy transaction1.tx.signed, transaction2.tx.signed, and transaction3.tx.signed to the Transfer USB, remove the Ubuntu bootable USB and the Keys USB, and reboot the computer back to its regular online mode. Once it is booted, log in and copy the 3 signed transaction files from the Transfer USB to a directory on the computer. In my case, I just put the files in my user’s home directory.

Merge the Signatures Back to a Single tx File

We can inspect one of the signed transaction files using the same goal clerk inspect command that we used before to inspect the unsigned transaction.tx file. Issue the following command:

purestake@algo-node:~$ goal clerk inspect transaction1.tx.signed
transaction1.tx.signed[0]
{
  "msig": {
    "subsig": [
      {
        "pk": "OBONCJ4D4WEUYFWRDLZEJOMAN22HWZGZPAEWSPK7S6VOIHDCAFR3ACUSTA",
        "s": "M7dVRrm9zmcE0dLkZTMX7JTjk/tsZdIgLn0qQuL9sGDDCnPZfiKRE9kpBYpSyfZ9uWvtCijJzJIInIbtNijRBg=="
      },
      {
        "pk": "P7ZEFUIWTABXLMC77P3DAE5ZMU7BDY3HZ4KF7ZXSPTCYKZ4AOCKGRZTCUE"
      },
      {
        "pk": "JPPERBQVBGKHMKTVZUOQKSZHVDYMC3AYYD6NHT355HEZHZXW5CLNUIMJT4"
      },
      {
        "pk": "GW5J5C2X7L7F2NIWISELS5EQI74Y5W6VDZ2W45NLIYY256EUYLKORY7AJE"
      },
      {
        "pk": "ANQADWSXUDMOHYYOVAKII3COO3KIBBXXLFF2RPSCFIVXQJZOZ76DKR5YPU"
      }
    ],
    "thr": 3,
    "v": 1
  },
  "txn": {
    "amt": 1000000,
    "fee": 1000,
    "fv": 1630913,
    "gen": "mainnet-v1.0",
    "gh": "wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=",
    "lv": 1631912,
    "note": "y0+1BZ82wxY=",
    "rcv": "5DJNGUEXNRUKAQODHGO3KS2HXOHN4YMSLSZQGEAH5L3WMMDFKMZEQMURUY",
    "snd": "FHYPIFKSTJTUT4QCR4MJWZUY2Y4URVFHF2HIXGZ4OHEZSIEQ5Q64IIGEMY",
    "type": "pay"
  }
}

This looks very similar to the unsigned transaction.tx that we inspected before, but note that the first public key (pk) in the top section now has an “s” value. This “s” value is the signature that was created using the private key for that address. It is not the private key itself, but it demonstrates knowledge of the private key. The other two files look similar, but have an “s” value for public keys 2 and 3. What we need to do is merge all of these signatures into the same transaction file, which we will call transaction.tx.signed. We can do this using the goal clerk multisig merge command like this:

purestake@algo-node:~$ goal clerk multisig merge -o transaction.tx.signed transaction1.tx.signed transaction2.tx.signed transaction3.tx.signed
purestake@algo-node:~$

We now have a merged signed transaction file called transaction.tx.signed in the working directory.

Inspect tx File Before Sending (Online)

Now let’s inspect the resulting merged transaction.tx.signed file:

purestake@algo-node:~$ goal clerk inspect transaction.tx.signed
transaction.tx.signed[0]
{
  "msig": {
    "subsig": [
      {
        "pk": "OBONCJ4D4WEUYFWRDLZEJOMAN22HWZGZPAEWSPK7S6VOIHDCAFR3ACUSTA",
        "s": "M7dVRrm9zmcE0dLkZTMX7JTjk/tsZdIgLn0qQuL9sGDDCnPZfiKRE9kpBYpSyfZ9uWvtCijJzJIInIbtNijRBg=="
      },
      {
        "pk": "P7ZEFUIWTABXLMC77P3DAE5ZMU7BDY3HZ4KF7ZXSPTCYKZ4AOCKGRZTCUE",
        "s": "rjITXvqzQwFWZ5shfXjhkxpcAkPSJquv9s2gLACLljHKnaoYefTGUXjfKZHtGZixFIAGPWr22DMrk/rcdnf8CA=="
      },
      {
        "pk": "JPPERBQVBGKHMKTVZUOQKSZHVDYMC3AYYD6NHT355HEZHZXW5CLNUIMJT4",
        "s": "w08AQ3gJr9W8qVmV1HN4o7okFjU/ozWIHGs3kn4cWjRkx/j1xO3wv+bL5X7fFjt208zaFuacE0y6jKIIc2p3DQ=="
      },
      {
        "pk": "GW5J5C2X7L7F2NIWISELS5EQI74Y5W6VDZ2W45NLIYY256EUYLKORY7AJE"
      },
      {
        "pk": "ANQADWSXUDMOHYYOVAKII3COO3KIBBXXLFF2RPSCFIVXQJZOZ76DKR5YPU"
      }
    ],
    "thr": 3,
    "v": 1
  },
  "txn": {
    "amt": 1000000,
    "fee": 1000,
    "fv": 1630913,
    "gen": "mainnet-v1.0",
    "gh": "wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=",
    "lv": 1631912,
    "note": "y0+1BZ82wxY=",
    "rcv": "5DJNGUEXNRUKAQODHGO3KS2HXOHN4YMSLSZQGEAH5L3WMMDFKMZEQMURUY",
    "snd": "FHYPIFKSTJTUT4QCR4MJWZUY2Y4URVFHF2HIXGZ4OHEZSIEQ5Q64IIGEMY",
    "type": "pay"
  }
}

You can see that the first three public keys all have “s” value signatures. We only need signatures for the first three public keys because this is a 3-of-5 multisig, and three valid signatures meet the threshold for the account. If we had signed with a fourth or fifth key, it wouldn’t cause any problems, but it isn’t necessary. This transaction is ready to be broadcast to the network.

5. Broadcast tx to the Network (Online)

To broadcast the signed transaction to the network we can use the goal clerk rawsend command:

purestake@algo-node:~$ goal clerk rawsend -f transaction.tx.signed
Raw transaction ID AAG34CUUNSMZJNNRKYV22UNOPFBQB57XPWNEF6D4CGSOT7ZRPSEB issued
Transaction AAG34CUUNSMZJNNRKYV22UNOPFBQB57XPWNEF6D4CGSOT7ZRPSEB still pending as of round 1631216
Transaction AAG34CUUNSMZJNNRKYV22UNOPFBQB57XPWNEF6D4CGSOT7ZRPSEB committed in round 1631218
purestake@algo-node:~$

The transaction ID for your transaction will be unique and different from what you see above. Algorand finalizes blocks in under 5 seconds, so you shouldn’t have to wait long for the transaction to be committed. Once committed, you can check account balances to make sure you see the expected balance change. A transaction file cannot be used again once it has been committed; sending it again will result in an error.

Conclusion: More Complex, But More Secure, Too

This multisig transaction example shows that the setup of a multisig account and the execution of a transaction are a lot more complicated than just using a single spending key account directly on an online computer with an Algorand node installation. But by using a multisig account, we can substantially improve the security of the setup and greatly reduce the risk of the private keys, and thus the funds in the account, being compromised.

The Algorand multisig features can be used to create multiple keys, splitting the secrets needed to spend funds across different people and locations. The exact number, locations, storage, and people will vary according to the environment and situation, but it opens the door for a much more secure setup than a single-key account.

Keeping the spending keys totally offline is another substantial improvement to Algorand account security for high-value accounts. Most of the attack vectors for compromising keys involve online scenarios, malware, or other network exploits. By never having the secrets on an online computer, the risk of key compromise is greatly reduced. Another way to improve the security of an Algorand account is to use a Ledger hardware wallet, which will be the subject of a future blog post.


How to Use Multisig and Offline Keys with Algorand

Multisig accounts and offline keys provide a great deal of added security, but are not always simple to set up. To help you get started, I’ve outlined the steps you will need to take to create a multisig account with Algorand and store keys offline on an air-gapped device.

For this tutorial, you will need at least 3 USB drives:

  1. “Ubuntu” to serve as a bootable Ubuntu USB device
  2. “Key” to hold the algokey binary and private keys
  3. “Transfer” to move transaction files to and from the offline computer

If you are going to store significant funds in the account being created, make sure that the USB drives are new, so there is no chance of any unwanted data or malware on the drives. It may seem excessive to use so many USB drives, but in the case of the private key drive, it is important that it is never plugged into a computer that is on the internet.

While the ideal approach to using multisig keys offline is to have a separate, dedicated laptop or computer, that will not be necessary to complete this tutorial. I will demonstrate how to use a bootable USB device in place of a dedicated offline machine.

Setting Up Your USB Drives

1. Download and Install the Algorand Node Software

On your online computer, download and install the Algorand node software.  This software will be used to interact with the Algorand network. Installation instructions for Algorand node software on different platforms can be found here: https://developer.algorand.org/docs/introduction-installing-node

For my examples, the online computer will be running Ubuntu 18.04 with Algorand installed using the debian package from the official repositories using the following instructions: https://developer.algorand.org/docs/installing-ubuntu

2. Create an Ubuntu Bootable USB Device

In order to create an Ubuntu bootable USB device, you can follow the instructions below depending on your OS:

Windows: https://tutorials.ubuntu.com/tutorial/tutorial-create-a-usb-stick-on-windows#0

Mac: https://tutorials.ubuntu.com/tutorial/tutorial-create-a-usb-stick-on-macos#0
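Linux users can write the ISO to a USB drive directly with dd. A minimal sketch (the ISO filename and /dev/sdX device name are placeholders; verify the device with lsblk before writing, as dd will overwrite it completely):

# Write the Ubuntu ISO to the USB drive (replace /dev/sdX with your USB device)
sudo dd if=ubuntu-18.04-desktop-amd64.iso of=/dev/sdX bs=4M status=progress conv=fsync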

3. Label Your USBs

When we need to perform sensitive signing actions on our offline computer, we will boot our computer using the bootable USB device you have just created.  Label it “Ubuntu” or something similar to identify it as the bootable USB device.

The second USB device will hold the algokey binary and the private keys that we will use to sign transactions on the offline computer.  Just label this USB drive “Keys” or similar for now. Do not plug this drive into the online computer.  This USB drive should never be plugged into any computer other than the offline computer.  We will put the algokey binary on it in a later step.

The third USB drive will be used to transfer files to and from the online and offline computers.  Label the drive “Transfer” or similar.

4. Copy the Algokey Binary to the Transfer USB

The algokey binary is part of the Algorand node installation, but it is a standalone executable that can be copied to a different computer running the same architecture and operating system without needing to perform a full node installation.

On our online computer, which has a node installation based on the Debian package, the algokey binary can be found at /usr/bin/algokey. Copy the algokey binary to the Transfer USB drive. We will later move algokey to the offline computer, and ultimately to the Keys drive, once we have the offline computer set up.
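As an illustration, the copy might look like this (the mount point is an assumption and will vary by system):

# Copy algokey from the node installation to the Transfer USB
cp /usr/bin/algokey /media/$USER/TRANSFER/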

Creating an Algorand Multisig Account (Offline)

Let’s start by creating a new 3-of-5 multisig account that we will use to store Algo securely.

Before we start issuing commands, we need to use the Ubuntu USB drive to boot into our offline environment. Insert the Ubuntu USB drive into your computer and reboot the machine. You may need to enter the BIOS of your computer to tell it to boot from the USB device.

Once you are booted from the USB, choose “Try Ubuntu” (not “Install Ubuntu”).

After the computer is booted, you will be logged into an ephemeral Ubuntu desktop without networking.  This will be our offline environment for signing transactions.  Create a folder on the desktop that we will be working in. In the examples below, I called mine “tx.” Insert the Transfer USB device and copy the algokey binary to the tx directory.  Open a terminal window to the tx directory and change the permissions of algokey to make it executable, and test running it to make sure everything is working:

 

ubuntu@ubuntu:~$ cd Desktop/tx
ubuntu@ubuntu:~/Desktop/tx$ ll
total 22472
drwxr-xr-x 2 ubuntu ubuntu       60 Sep 2 10:48 .
drwxr-xr-x 4 ubuntu ubuntu      100 Sep 2 10:48 ..
-rw-r--r-- 1 ubuntu ubuntu 23011080 Sep  2 10:48 algokey
ubuntu@ubuntu:~/Desktop/tx$ chmod 755 algokey 
ubuntu@ubuntu:~/Desktop/tx$ ./algokey -h
CLI for managing Algorand keys

Usage:
  algokey [flags]
  algokey [command]
 
Available Commands:
  export      Export key file to mnemonic and public key
  generate    Generate key
  help        Help about any command
  import      Import key file from mnemonic
  multisig    Add a multisig signature to transactions from a file using a private key
  sign        Sign transactions from a file using a private key

Flags:
  -h, --help   help for algokey

Use "algokey [command] --help" for more information about a command.
ubuntu@ubuntu:~/Desktop/tx$ 

 

We are going to use the algokey utility to create 5 accounts with 5 associated private keys. These accounts will later be combined to form one 3-of-5 multisig account. We perform this account creation step on the offline machine so that we can record the 5 secrets securely, and so that these secrets are never online. We will do this by running the “algokey generate” command 5 times.

 

ubuntu@ubuntu:~/Desktop$ ./algokey generate
Private key mnemonic: expire wear husband fancy now until laundry token strong dignity arrow valley post raven pudding farm twin chalk cloud tenant cart off shop abandon trophy
Public key: OBONCJ4D4WEUYFWRDLZEJOMAN22HWZGZPAEWSPK7S6VOIHDCAFR3ACUSTA
ubuntu@ubuntu:~/Desktop$ ./algokey generate
Private key mnemonic: lucky dust hub crew barely leave gas crew canvas exhibit margin mixed impose air wasp chat athlete sketch ozone humble parent rail remind abandon host
Public key: P7ZEFUIWTABXLMC77P3DAE5ZMU7BDY3HZ4KF7ZXSPTCYKZ4AOCKGRZTCUE
ubuntu@ubuntu:~/Desktop$ ./algokey generate
Private key mnemonic: draft mule stamp run absent congress leopard notice minute hungry fresh physical flee favorite cram green salad promote remember route assume gentle early absorb during
Public key: JPPERBQVBGKHMKTVZUOQKSZHVDYMC3AYYD6NHT355HEZHZXW5CLNUIMJT4
ubuntu@ubuntu:~/Desktop$ ./algokey generate
Private key mnemonic: primary tone inquiry video bicycle satisfy combine pony capable stamp design cable hub defy soup return calm correct cram buyer perfect swim tone able math
Public key: GW5J5C2X7L7F2NIWISELS5EQI74Y5W6VDZ2W45NLIYY256EUYLKORY7AJE
ubuntu@ubuntu:~/Desktop$ ./algokey generate
Private key mnemonic: seminar screen join potato illegal vacuum predict measure cable reject crazy document edit erosion decline giggle neutral theory orient keen slow walnut reject absorb rain
Public key: ANQADWSXUDMOHYYOVAKII3COO3KIBBXXLFF2RPSCFIVXQJZOZ76DKR5YPU
ubuntu@ubuntu:~/Desktop$ 

 

A few important things to note. First, you will get different accounts and secrets when you run “algokey generate.” DO NOT USE THE ACCOUNTS LISTED ABOVE: they are example accounts created for this tutorial, and most importantly, the spending keys are right here on this webpage. Anyone reading this post can spend the funds in these accounts or any multisig account based on these accounts.

Second, note that every time you run “algokey generate,” you get a valid single key account with a public key and a private key.  In Algorand, you will often hear the public key referred to as the address, and the private key as the spending key or mnemonic.

Third, observe that the private key mnemonic has 25 words, which is quite unusual. In other crypto systems, you will typically see word lists that encode a seed phrase or secret using 12 or 24 words. Algorand uses 25 words, so make sure you get all 25. If you plan to use something like Cryptosteel to store the seed phrase, the 25th word will overflow a single plate, which is designed to hold only 24 words.

The public and private keys need to be securely recorded for your accounts. One way to do this would be to write them down on 5 separate pieces of paper, store them on 5 separate USB drives, etc. Having a paper backup is a good idea in case the USB drives fail. To store them more securely on a USB drive, they can be saved in a file which is then encrypted using PGP or similar, with the encryption passphrase securely stored separately from the drive.
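As a sketch of the encryption option, symmetric gpg encryption works well here (this assumes gpg is available on the offline machine; keys.txt is a hypothetical file holding the mnemonics):

# Encrypt the keys file with a passphrase; store the passphrase separately from the drive
gpg --symmetric --cipher-algo AES256 keys.txt
# This produces keys.txt.gpg; to decrypt later on the offline machine:
gpg --decrypt keys.txt.gpg > keys.txt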

In terms of distribution, you could put the keys in 5 different locations or give them to 5 different people for safekeeping.  There are many ways to securely store these keys, including bank safety deposit boxes, cryptosteel plates, and other options.

For purposes of this tutorial, I will put all 5 keys in a file on the same USB drive labelled “Keys,” but this is not recommended for production use. It is important to number the keys 1-5: the order of the keys will matter when we go to set up the multisig account in the next step.

Set Up a Wallet and Multisig Account (Online)

For this step, you need to go back to the online computer with the online Algorand node.  If you are sharing the same computer for both online and offline needs, remove all USB drives and reboot the computer to bring it back to its online state.  Log in to the computer and open a terminal. We will first create a wallet using the goal command:

 

purestake@algo-node:~$ goal wallet new MyWallet
Please choose a password for wallet 'MyWallet':
Please confirm the password:
Creating wallet...
Created wallet 'MyWallet'
Your new wallet has a backup phrase that can be used for recovery.
Keeping this backup phrase safe is extremely important.
Would you like to see it now? (Y/n): n
purestake@algo-node:~$ 

 

In the example above, I named the wallet “MyWallet,” but you can name it whatever you want.  I also specified a password for the wallet. The reason I elected not to see the backup phrase is that I do not plan on having any secrets on this online node.  I’m only going to use it for looking at balances and sending transactions which have already been signed elsewhere.

The next step is to use the goal command to create a new 3-of-5 multisig account using the keys we generated in the previous step.  This will add the multisig account to the wallet and let the wallet know what the constituent parts of the multisig account are. But the private keys for the account will not be in the wallet, and the wallet will have no control of or ability to spend funds in the multisig account.  By putting the multisig account in the wallet we can work with the multisig account on this node even if we plan to sign multisig account transactions with our spending keys on the offline machine. If you skip this step, transaction files you create for the multisig account on this node will be invalid as the node doesn’t know what the multisig account is and what the component parts of the account are.

 

purestake@algo-node:~$ goal account multisig new OBONCJ4D4WEUYFWRDLZEJOMAN22HWZGZPAEWSPK7S6VOIHDCAFR3ACUSTA P7ZEFUIWTABXLMC77P3DAE5ZMU7BDY3HZ4KF7ZXSPTCYKZ4AOCKGRZTCUE JPPERBQVBGKHMKTVZUOQKSZHVDYMC3AYYD6NHT355HEZHZXW5CLNUIMJT4 GW5J5C2X7L7F2NIWISELS5EQI74Y5W6VDZ2W45NLIYY256EUYLKORY7AJE ANQADWSXUDMOHYYOVAKII3COO3KIBBXXLFF2RPSCFIVXQJZOZ76DKR5YPU -T 3
Please enter the password for wallet 'MyWallet':
Created new account with address FHYPIFKSTJTUT4QCR4MJWZUY2Y4URVFHF2HIXGZ4OHEZSIEQ5Q64IIGEMY
purestake@algo-node:~$ goal account list
[offline]       Unnamed-0 FHYPIFKSTJTUT4QCR4MJWZUY2Y4URVFHF2HIXGZ4OHEZSIEQ5Q64IIGEMY      0 microAlgos [3/5 multisig] *Default
purestake@algo-node:~$

 

Note that the 5 public keys we created on the offline computer are listed here as arguments to the goal account multisig new command.  Be very careful, as the order of the keys matters. Changing the order of the public keys results in a different multisig address. Recall that we numbered the keys 1-5: always list them in that order, so you will get consistent results.

The “-T” flag specifies the threshold, or how many of the associated spending keys in the multisig account need to sign transactions. In this case we specify 3, making this a 3-of-5 multisig account. The resulting address of the multisig account is FHYPIFKSTJTUT4QCR4MJWZUY2Y4URVFHF2HIXGZ4OHEZSIEQ5Q64IIGEMY. The “goal account list” command confirms that this is a 3/5 multisig account with 0 Algo in it.

Now you have successfully created a 3-of-5 multisig account (albeit, with no balance). Next week, I will publish a follow-up tutorial that demonstrates how to sign a transaction with your newly-created multisignature keys, enabling you to spend funds, bid on auctions, and more.


How the Use of Multisig Accounts and Offline Keys Improves Security

Many blockchain networks have added native support for multisig accounts as a more secure option compared to the default single-key accounts that most people use.

While a multisig approach has many far-reaching advantages, the primary benefit is drastically improved key security: to complete a transaction, you will need two or more valid keys. This eliminates single points of failure while opening the door for additional security measures to protect account funds.

One of those measures is the ability to keep multiple private account keys in different physical locations, completely offline, on an air-gapped machine, or in cold storage. By combining a multisig account with offline or cold key storage, you can greatly increase the security of accounts, particularly those storing and managing larger amounts of digital currency.  A regular account has a single spending key associated with it that is needed in order to spend funds and sign other transactions. By contrast, a multisig account has multiple keys associated with it.

Jerry Brito, Executive Director of Coin Center, provides a great overview of multisignature technology and the increased security it provides to users of cryptocurrencies in this video from the Federalist Society.

 

 

Determining the Number of Keys for Your Multisig Account

When you set up a multisig account, you will need to specify how many total keys will be part of the account, and how many of those keys are needed in order to spend funds in the account.  Sometimes this is referred to as M-of-N keys, where N is the total number of keys that are part of the account, and M is the number of keys needed to spend funds from the account. In practice, M is typically less than the total number of keys that the account has. For example, I could create a 2-of-3 multisig account where there are three private keys for the account, but any two of them can be used to spend funds in the account.

The 2-of-3 setup has good security properties. Consider this: if you secure each of the three keys in different locations, even if one of the keys is compromised, it alone cannot be used to steal funds. If you know that one of the keys has been compromised, you can use the other two keys to move the funds to another account, thus moving your funds to safety. The same applies in the (possibly more likely) scenario where one of the keys is lost: you can use the other two keys to transfer the funds to a new account, thereby invalidating the lost key.

Requiring multiple people to sign a transaction provides an opportunity to implement a governance process around the movement of funds.  For these reasons, multisig accounts are very often used to provide an additional layer of security for funds in an account.

Consider the previous 2-of-3 multisig example where, instead of three different locations, you give each of the three keys to a different person for safekeeping. A single person cannot decide to spend the funds in the multisig account — they will need at least a second user to agree before any funds can be spent. In a corporate setting, dividing authority in this way and requiring a quorum can be a useful tool for governance processes.

Choosing the correct values for M and N will be specific to the scenario, and multiple factors need to be considered. Examples of factors include: the amount of funds or assets in the account, the number of people involved in managing the funds in the account, and the frequency with which funds need to be accessed or moved. Very common configurations include 2-of-3 and 3-of-5, but higher numbers such as 7-of-10 are not uncommon.

As with many security-related things, there is generally a tradeoff between security and convenience. A single-key account is the most convenient but the least secure. A 7-of-10 multisig account, where you put each of the 10 keys on a Cryptosteel in 10 different bank safety deposit boxes in 10 different cities, has a lot more security. But assembling 7 of those 10 keys to send a transaction is a lot more work than the single-key setup. You have to decide the right place to be on the convenience-vs-security spectrum for any particular use case.

Always Keep Secrets Offline

The spending keys associated with a multisig account are best kept totally offline for storage, and only used on an air-gapped machine.  This machine should not have an internet connection and should never have been on the internet. Any time a computer is on the internet, there is a risk that data on that machine has been compromised, so the risks of an online attack are greatly reduced by never having had an internet connection at all.

One approach to storing keys offline is to have a dedicated computer or laptop whose only purpose is to keep the account secrets and to perform signing actions.  Theoretically, this would be the best and safest setup. However, this may not be practical or cost-effective for individuals who only have one computer, or don’t want to dedicate a computer to this task.

A less expensive (but still good) option is to use a bootable USB device. An Ubuntu 18.04 LTS bootable USB can serve well as an air-gapped machine. If you boot your computer from an Ubuntu bootable USB drive, you will have a full yet ephemeral Ubuntu installation that has no networking and has never been on the internet.

Learn how to create your own multisig account using this step-by-step tutorial I created.


Advantages of Buying Blockchain Infrastructure-as-a-Service vs. BUIDLing It Yourself

In everyday life, we are faced with decisions to either buy readymade solutions or build something from scratch. Whether it’s a large purchase like buying a home, or something considerably smaller like choosing between two couches, there are pros and cons to each side. Do you want to stay up until 2:30 in the morning putting together a couch? For the right price, a lot of folks would say, “Absolutely,” while others would say, “No shot!”

With the rise of blockchain as a viable platform, the business community seems to be posed with this question at all levels. Cost, effort, risk, focus, and quality all factor into every decision a company makes, including whether to build the infrastructure that runs these applications and platforms, or to pursue a third-party vendor that offers blockchain infrastructure-as-a-service.

Risk vs. Reward

The allure of blockchain is real: the technology as a whole promises dramatic cost savings (up to 70%!) to banks and financial institutions. Since up to two-thirds of those costs are attributable to infrastructure, it’s imperative to pursue an infrastructure strategy that captures as much of that cost savings as possible.

But blockchain projects can be deceivingly costly. A recent report of government-sponsored blockchain projects revealed that the median project cost is $10-13 million.

At first glance, building infrastructure in-house seems like the most cost-effective way to approach the blockchain: there are no licensing fees, and your company is in complete control.

Of course, there are always trade-offs: an in-house infrastructure project is very taxing on your organization’s resources.


Your team must have the time and operational skillset to build out a secure infrastructure that is scalable enough to support your blockchain network of choice. Those skills are hard to come by: blockchain skills are among the most sought-after, and the rate for a freelance blockchain developer hovers around $81-100 per hour in the US, sometimes going as high as $140+ per hour.

It’s easy to underestimate how much time the build will take, and whether your team really has the skills and ability to create the offering.

In addition to infrastructure, you’ll also need to build or secure vendors that can address storage needs, network speeds, encryption, smart contract development, UX/UI, and more. Each of those initiatives is going to require additional dedicated budget.

The question then becomes: what kind of advantage does this create? Much of this will depend upon the number of partially or fully decentralized applications (DApps) that you plan to run on it, and how many of them can and will share the same underlying infrastructure.

The ‘aaS’ Revolution

When evaluating your options for a new project, the project planning stage is always tricky. You’ll need to do a full scoping of the project, allocate responsibilities, and create a vendor vetting process.

In the past it was easy: you went out, bought some software and hardware, and got to building. But once the internet made it easier for companies to provide ‘as-a-service’ offerings, it added a layer of complexity for IT and engineering teams as to what options made sense for their project or organization. Salesforce began to displace massive Oracle on-premises implementations. Broadsoft started to displace PBXs and made phone closets a thing of the past. The list goes on and on with applications that replaced their on-prem brethren from years prior because the ongoing maintenance and upkeep was a headache for IT teams to manage. Why keep all of the infrastructure under your management when you could push all that work onto an infrastructure-as-a-service company’s plate, since their primary focus is supporting that exact technology?

This is great from an IT perspective, but what about for engineers and developers? Don’t they need to be able to store their code and applications locally? Don’t they need to own all of the pieces that tie in to their application? Oh and security, THE SECURITY!

Sorry for being dramatic, but the answer is no. These are all valid concerns; however, many of them can be addressed by working with the right service provider for your needs.

Uber is a great example of leveraging third-party service platforms to create an application. Did they need to go out and create a maps platform? Nope, they used Google for routing and tracking. Did they need to go out and spin up their own messaging and voice servers? Nope, they use Twilio for their communication services. They took a buy-centric approach which enabled them to focus on their core application and remove the need to focus on things outside of their core skill set.

How We Apply This to Blockchain

How difficult is it to build? How costly is it to manage? Do we have the skillset to support it? These are all questions that companies ask themselves when looking at making an investment for any kind of infrastructure.

On top of the infrastructure, it only takes a few minutes to realize that DevOps is really hard to do well. Making sure that the investments you’re making align with your team’s skill set is critical for your success. So if you’re looking around, saying “We need to bring in DevOps engineers for our Algorand project,” then HARD STOP! Check out below.

PureStake was created with this exact use case in mind. We provide secure and scalable blockchain infrastructure-as-a-service to help everyone from investors to developers better interact with the Algorand network. We’ve recently launched an API service that will provide an onramp to Algorand for any application looking to build on their pure proof of stake network. We offer a variety of subscriptions so that, regardless of size or budget (we have free, and free is good), you’ll be able to utilize our service and start interacting with the Algorand network within minutes.

 


Getting Started with the Algorand REST API and the PureStake API Service

Since Algorand’s MainNet launch, PureStake has focused on building and delivering highly performant infrastructure to support early adopters of the Algorand network. Earlier this year, we launched an Algorand infrastructure service offering that delivers relay and participation nodes as a service targeted at early supporters and customers that want to be active participants in the network. However, we found that these services are not ideally matched to the needs of developers building applications on top of the Algorand network. So we’ve released a new API service that is specifically designed to help developers get started with Algorand REST APIs quickly and easily.

The Need for an Algorand API Service

PureStake’s Algorand infrastructure services are centered around managing the lifecycle of Algorand relay and participation nodes in an automated and secure way. Managed relay and participation nodes make sense for customers that want to — or have an obligation to — support running the network, but don’t fulfill the needs of developers who are writing applications that interact with the Algorand blockchain.

For DApp developers, the nodes are a means to an end — a way of reading data from or sending transactions to the network. They need a simpler alternative to running their own nodes, which can be costly and time-consuming.

The PureStake API service simplifies interactions with the Algorand network by hiding the complexity of running and managing nodes from the user.

Why Running Algorand Nodes Can Be Challenging

Developers always have the option of downloading and running their own nodes. However, running Algorand nodes requires both significant infrastructure investment and the right operational skills.

For example, most development use cases dictate running full archival transaction indexer nodes to achieve the best possible performance for querying transactions. The storage and sync time requirements for this type of node quickly increase as the block height increases. In the case of the Algorand MainNet, which has been live for about two months and has a block height of 1.4M blocks (as of August 2019), a transaction indexer node requires at least 20GB of storage. However, since the index database grows as a function of the number of transactions, we can expect the storage growth rate to significantly increase as transaction volume on the MainNet increases, further expanding the storage requirements.

The PureStake API Service Simplifies Interactions with the Algorand REST APIs

The API service is a natural extension of PureStake’s Algorand infrastructure platform that we built to support Algorand relay and participation nodes. Our platform uses an infrastructure-as-code approach to deploy security, networking, cloud configurations, compute, storage, and other elements into cloud data centers in an automated fashion.

The API service, which was built on top of this platform, is spread across multiple cloud data centers and features an API network layer, an API management layer, a caching layer, a load balancing layer, and a node backend layer. Each of these layers is fully redundant and managed/monitored 24×7.


How the API Service Works

The API network access layer is supported by a worldwide edge network with many peering points for request ingress, where requests are then privately routed to one of our POPs. In the POP, the API management layer handles authentication, authorization, accounting, and further routing of API requests. It checks received requests for a valid API key header, verifies the request against the account’s request limits, and performs other security checks. It then logs the request, which is used to power end user-facing features such as endpoint utilization charts in the API dashboard. The management layer then routes API requests to backend services that can handle them.

Some queries will be handled by a high-performance cache. Other queries will be routed to the load balancer layer, which has awareness of the node resources available and routes requests on to an available Algorand node. The node layer has pools of Algorand transaction indexer nodes that can be swapped out and maintained without any downtime. These nodes are patched and updated with the latest Algorand node software as new versions are available.

What is the Difference Between the PureStake APIs and the Algorand REST APIs?

The PureStake API Service supports the official Algorand node API in the same form as exposed by the Algorand node software, which adds consistency and makes it easy to move off our service and back to self-managed nodes if needed. This design choice was intentional, since many proprietary APIs create vendor lock-in for their users. The only differences between our API service and the official Algorand REST APIs are the addition of the X-API-Key header that we require to secure access to our service, and the removal of the API that provides metrics about the nodes themselves. Through this approach, our users have the freedom to move between API services and self-managed nodes as needed.

Currently, our API service supports the Algod API, but not the KMD API. The Algod API can be used for querying historical information from the blockchain, getting information about blocks and transactions, and sending transactions. The KMD API, by contrast, is used for wallet management, key management, and signing transactions with private keys. We have intentionally chosen not to expose the KMD API, as we do not want any customer secrets or keys on our servers. However, customers can manage secrets and sign transactions within their applications, and post signed transactions to our API.
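As a sketch of that last pattern, a signed transaction file (like the transaction.tx.signed produced in the multisig tutorial above) can be posted to the Algod v1 transactions endpoint with curl. The base URL below is a placeholder; use the endpoint shown in your PureStake dashboard:

# Post a signed transaction through the PureStake service (base URL is a placeholder)
curl -X POST \
  -H "X-API-Key: YOUR_API_KEY" \
  -H "Content-Type: application/x-binary" \
  --data-binary @transaction.tx.signed \
  https://<purestake-endpoint>/v1/transactions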

How the PureStake API Service Impacts the Decentralization of the Algorand Network

An essential property of the Algorand network is its decentralization. PureStake is a centralized company providing a centralized service to access that decentralized network. At first glance, it may seem like a centralized service could threaten the decentralized nature of the network (particularly if all or most of the access to the Algorand network happens through the service). Similar concerns have been raised in the Ethereum community in relation to the large number of applications relying on the Infura service to access the Ethereum network. While it may seem counterintuitive, centralized services can actually serve to support and promote the best interests of decentralized networks such as Algorand.

The first thing to point out is that this decentralization risk is not a design or protocol-level risk. No one is forced to use the service. Anyone using the service can leave and run their own nodes at any time.

In fact, there is no reason decentralization can't proceed normally with lighter-weight nodes. Algorand is going to great lengths to make sure nodes supporting the consensus mechanism do not have large storage or other infrastructure requirements. So, if someone just wants to get current account balances, submit transactions, and support consensus, they can do this with non-archival participation nodes that have much lower requirements and may not require a service provider. In addition, the upcoming vault improvements to Algorand will greatly reduce the sync time for participation nodes as well. The developer use case, by contrast, specifically lends itself to larger infrastructure and a service provider approach.

Secondly, Algorand needs developers, applications, and utilization of the network to be successful in the long term. The PureStake API service makes the on-ramp for developers substantially easier and will help grow the utilization and traction of the network. While there may be a hypothetical form of centralization risk in the future if the service is wildly successful, this possible future risk is far outweighed by the direct benefits to the Algorand community of traction that drives transaction volume and network utilization today. In a future with more developers, applications, and network utilization, we expect competitive developer-oriented services to enter the market, which will spread this access across many providers.

Future Expansion and Long-Term Vision for the API Service

Support for the base Algorand node API is the first step for our API service. In the future, potential enhancements could include:

Additional Query-Optimized Data Stores: Taking Algorand block and transaction data and loading it into relational or NoSQL datastores opens possibilities for much more performant queries across the historical data set. These optimized data stores could be used to improve the performance of node API requests or to power net-new APIs.

Eventing Infrastructure: The idea would be to provide support for subscriptions for certain types of events, and to receive callbacks whenever they occur. DApp developers frequently implement these backend infrastructural features to improve the performance of their applications.

Getting Started with the PureStake API Service

Users can register for a free account at https://developer.purestake.io/.

Once logged into the API Services application, users will have access to an API key that is unique to their user account in their dashboard. This API key needs to be added as the value of the X-API-Key request header for any API requests being made to the PureStake service.

Examples of how to do this and how to use the API are available once you have logged in at https://developer.purestake.io/code-samples, and also in our GitHub repo: https://github.com/PureStake/api-examples.
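For instance, with the v1-era py-algorand-sdk the client can be pointed at the service by passing the key as a header. A minimal sketch; the URL and key are placeholders, and constructor details may differ between SDK versions:

from algosdk import algod

# The empty algod_token is deliberate: the service reads the key from the
# X-API-Key header instead.
client = algod.AlgodClient(
    algod_token="",
    algod_address="https://mainnet-algorand.api.purestake.io/ps1",
    headers={"X-API-Key": "your-purestake-api-key"},
)
print(client.status())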

Do you have a question or an idea for a useful enhancement to the API service? Feel free to reach out to us!

 

DevOps Practices for Crypto Infrastructure, Part II: Authentication, Authorization, Networking, Monitoring, and Logging

Picking Up Where We Left Off

In part one of this two-part series, I discussed core DevOps principles that helped guide our crypto infrastructure here at PureStake, and discussed some unique considerations around version control, full stack automation, and secrets management. If you didn’t have a chance to read the first post yet, you can find it here.

In this second post, I will continue calling out principles and examining different areas that are important to consider when setting up and running secure and reliable crypto infrastructure.

Authentication and Authorization

One of the most important aspects of ensuring the security of your infrastructure is having the right authentication systems in place.

For logging into infrastructure and servers, I favor centralized/federated authentication directories over local ones. It is very important that DevOps staff have unique user accounts for logging into infrastructure, rather than using shared accounts.  Unique accounts provide a record of who logged into what, which is essential to understanding what is happening in your environment. Shared accounts, including direct use of the Administrator or root accounts on servers, become very challenging when you have turnover in your staff or, in the worst case, if there has been an incident.  It’s much cleaner to revoke access, assign rights, review past history, and understand what is happening with a centralized directory.

For the scope of authentication, I recommend a full separation of the corporate IT environment and the production infrastructure environments.  Fully separate directories are the best approach, even if your directory supports different groups and roles. This greatly reduces the risk of human error resulting in too much, or incorrect, access.

However, that doesn’t mean that you shouldn’t use groups. Grouping users who need access to the infrastructure and assigning the appropriate roles to them is critical to being able to manage access in a reasonable way in crypto and other environments.  It is too complicated and too easy to make a mistake when assigning rights to individual users. Even for a small team, having at least a few roles will be appropriate, such as a role with full access for select senior DevOps staff, a role with limited access for junior DevOps staff, and perhaps a monitoring only role for managers and other technical staff.  The principle to keep in mind is that of least privilege, which states that users and groups should have as few rights as possible to do their job with a mechanism/process to escalate that can be logged/monitored. This also supports a closely-related concept of blast radius minimization. Having users with roles that employ the concept of least privilege will minimize the blast radius associated with an incident where user credentials or accounts have been compromised.
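To make the idea concrete, here is a minimal sketch of what a monitoring-only role might look like as an AWS IAM policy; the statement name and action list are illustrative placeholders, not a recommendation for any particular environment:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "MonitoringReadOnly",
      "Effect": "Allow",
      "Action": [
        "cloudwatch:Get*",
        "cloudwatch:List*",
        "cloudwatch:Describe*"
      ],
      "Resource": "*"
    }
  ]
}

A user in this role can read metrics and alarms but cannot modify infrastructure, which keeps the blast radius small if that user's credentials are ever compromised.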

Using traditional passwords as a way to log into crypto infrastructure is not a good security practice.  Where passwords must be used, I recommend a password manager such as Dashlane, which can be set up in a corporate configuration with shared groups and role-based access, so that a unique, strong password can be used for each system.  But crypto environments require more security than passwords alone: at a minimum, all accounts must require two-factor authentication, where the first factor can be a traditional strong password and an authenticator app is the second factor. A better setup replaces the authenticator app with a physical hardware device.

For identity management in Windows environments, Active Directory is the logical choice.  For Linux environments, OpenLDAP and Kerberos serve a similar function. Each cloud vendor has its own identity management scheme, including AWS IAM, Azure AD, and Google Cloud IAM, each with its own nuances.  Google Authenticator works very well as a second factor in 2FA setups. For a physical second factor, a YubiKey is an inexpensive option that plugs into a USB port on your computer. Requiring the YubiKey as one of the authentication factors means that the device must physically be in the possession of the user at the time of login.

Logging

A well-run infrastructure has good mechanisms in place to manage server, application, and as-a-service logs.  Logs are not only useful for troubleshooting infrastructure issues, but also provide the basis for audit control and intrusion detection.  You need reliable logs to understand what has happened in your environment.

The most important practice is to ship logs off the servers, containers, and other infrastructure elements to isolated, tamper-proof locations.  Authorization roles should be employed to isolate these log collection points to make them as tamper-proof as possible. Then the logs can be loaded into query optimized data stores to facilitate visibility, troubleshooting, and monitoring scenarios.

In particular, when running crypto nodes, sometimes logs are the only way to understand what is happening on the nodes. Critical error messages and log entries related to the crypto network protocol can be the only way you can understand that a node is running well or poorly.

In a Windows environment, events can be forwarded to an event collector.  In a Linux environment, rsyslog works well for forwarding syslog to regional and ultimately centralized data stores.  For log-based searching, troubleshooting, and time series analysis, Splunk is a Cadillac solution: tons of functionality, but at a very high price.  An alternative to Splunk is the open source ELK stack (Elasticsearch, Logstash, Kibana) which has gotten a lot better over time and offers a much less expensive way to search and troubleshoot infrastructure based on log data.
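To give a sense of how little configuration log forwarding requires, here is a minimal rsyslog sketch that ships all syslog messages off-host to a central collector; the file path and hostname are placeholders:

# /etc/rsyslog.d/50-forward.conf (illustrative path)
# "@@" forwards over TCP; a single "@" would forward over UDP
*.* @@logs-central.example.com:514

From the collector, logs can then be indexed into Splunk or an ELK stack for searching and time series analysis.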

Monitoring and Alerting

If you want to run reliable crypto infrastructure, you have to know when services are not running well.  The principle here is that everything fails — but early detection allows infrastructure element failures to be remedied quickly.

With good redundancy in design, individual element failures ideally have little-to-no end user impact. For cases where there are failures that lead to end user service impact, strong automation will minimize the time to restore services.

Focusing on early detection, the best way to accomplish it is through extensive monitoring at the different layers of the stack and from different locations. If you are in a colo environment and managing hardware, the monitoring of that hardware will likely require vendor-specific tools and possibly the collection of SNMP traps.  For cloud environments, the providers offer native monitoring that is integrated with their service offerings. As an example, AWS offers CloudWatch for monitoring AWS-based services.

There are a lot of elements in a crypto infrastructure that need to be monitored.  It’s important to choose a platform which will serve as the place where monitoring data is sent, where alerting thresholds are set and where alerts are managed.  As different monitoring checks are added over time, they can feed into that system. It is extremely difficult to manage alerts, maintenance downtime, and inventory completeness if you have multiple places to go to manage these items.

At the lower end of the stack, you will want to put basic checks in place for OS-level resources like CPU, memory, and disk.  Basic network checks would include ping/ICMP, TCP port exhaustion, and TCP service checks. Security events such as those that come off IDS and IPS systems could be fed in here as well.  Application-level checks can include HTTPS checks that hit a URL and look for status or error codes and messages.

For crypto-specific infrastructure checks, consider that the base crypto infrastructure consists of nodes.  Crypto nodes often expose status and query interfaces via a REST API, so querying that API on a regular basis to look for status and error codes is a good start, but be careful not to expose that API to the wider internet.  Other checks specific to crypto nodes include looking at the block height on a node and making sure it has an expected value. Nodes should be producing blocks on a regular cadence and, depending on the node role, may be helping to support the consensus mechanism of the network.  Using monitoring to look for deviations from normal block production or consensus participation behavior is a good early warning indicator of trouble.
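As a concrete example of a block-height check, here is a minimal Python sketch that polls a local Algorand node's v1 status endpoint and flags a stalled node; the port, token path, and 60-second interval are illustrative defaults that may vary by installation:

import time
import requests

NODE = "http://localhost:8080"  # algod REST endpoint; 8080 is a common default
# The node writes its API token into its data directory; the path may vary.
TOKEN = open("/var/lib/algorand/algod.token").read().strip()

def last_round():
    r = requests.get(NODE + "/v1/status", headers={"X-Algo-API-Token": TOKEN})
    r.raise_for_status()
    return r.json()["lastRound"]

# Alert if the node makes no progress between two samples.
before = last_round()
time.sleep(60)
after = last_round()
if after <= before:
    print("ALERT: block height stuck at %d" % after)

In practice a check like this would be wired into Nagios or Prometheus rather than run as a one-off script, so that alerting, maintenance windows, and history are managed in one place.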

Once you have a view from inside of your environment, it is equally important to get a point of view from outside of your environment.  This means taking the perspective of your customers and seeing if your services are performing well from their viewpoint. I've experienced situations where all services are green from the internal point of view, but a WAN or internet issue means that certain customers are not able to use the service. A common cause is a physical line cut that creates bad network paths to your service until traffic is rerouted. Using a cloud provider with multiple external points of presence can help provide this outside-in view of your services.

From an open source monitoring tools perspective, the old workhorse is Nagios and Nagios variants such as Checkmk, which I have used for years to monitor production environments.  These tools are starting to show their age, but they are battle-tested and reliable. A newer option getting good traction is Prometheus with its more modern-looking Grafana-based visualizations.  For a greenfield environment, Prometheus is a good choice.

Nagios/Prometheus work in a poll model, where servers provide data on a port and a centralized service routinely collects the data and makes it available.  DataDog is an example of an alternate model where the data is streamed from the server itself with an agent to a centralized location. For alerting operational staff when there are critical alarms, I have always found PagerDuty to be a good choice, but OpsGenie or VictorOps will provide similar functionality.  For external cloud based availability monitoring, ThousandEyes is a good choice, and something like Pingdom will get you basic external coverage for a low-cost entry point.

Concluding Thoughts

The two posts in this series have only scratched the surface of crypto DevOps practices.  Other areas that may be the subject of future posts include networking and VPCs, blue/green deployments, Docker vs VMs, load balancing and failover strategies, IDS/IPS, storage management for blockchain nodes, crypto key management strategies, and personal security best practices for DevOps staff, among other topics.  Employing good practices across all of these areas is an important part of what it takes to provide secure and reliable crypto infrastructure.

Looking for further information about infrastructure for crypto based applications? Contact us today.

DevOps Practices for Crypto Infrastructure, Part I: Version Control, Full Stack Automation, and Secrets Management

When standing up services that will have cryptographic interactions with a blockchain, the DevOps infrastructure and practices you employ will dictate a lot about the security and reliability of those services. In this two-part series of posts, I will introduce core DevOps principles that will help guide crypto infrastructure creation. I'll also share different DevOps infrastructure aspects that have worked well for me and could be helpful to other teams looking to stand up crypto as-a-service offerings.

Cloud vs Roll-Your-Own

For many infrastructure elements, you must choose whether to go with a cloud provider such as AWS, Azure, or Google, or to roll-your-own in a colocated data center with self-managed software. In crypto and blockchain, there are some specific requirements, particularly relating to key security, which may factor into requirements around hardware security modules (HSMs), physical servers, and tiers of colocated data centers (more on this later).

But in general, all other things being equal, if there is a choice between an as-a-service offering from one of the three major cloud providers and rolling your own with purchased hardware and self-managed software, there are a lot of reasons to go for the cloud option.

In my experience, it is very easy to underestimate the investment and labor required to self-manage infrastructure elements in a high-quality way over time. Especially when the software is open source, the temptation is always to just pull down the software and start running it, with attention going to the cost of the hardware rather than to everything the cloud service's price includes. The DevOps staffing required to manage, upgrade, performance-tune, patch, and evolve this infrastructure over time is almost always underestimated by startup teams and becomes baggage as the team and the company grow. For any piece of infrastructure, you really have to ask yourself if this is the best use of your team's time.

In most cases, you will want to focus your energy on things that only you can do, and purchase services where possible from a reputable cloud provider. The three major cloud providers (AWS, Azure, Google) all have large and highly specialized teams surrounding each of their as-a-service offerings. For smaller companies, there is no way you are going to do a better job with management and security than these cloud provider teams for base/commodity offerings.

My take: go with a cloud provider or (better yet) more than one cloud provider, so you can focus on building and running things that you can't purchase as a service and that are unique to your offering.

Version Control

In recent years, the idea of infrastructure-as-code has become a leading principle in DevOps. This is part of a larger evolution of DevOps that continues to shift the discipline towards looking more and more like a software development practice. A core part of any software development practice is storing all your software artifacts in a version control repository. Artifacts can include source code, configuration files, data files, and in general any of the inputs needed to build your software and your infrastructure environment. It seems like a given, but I have seen operational environments where not all of the artifacts necessary to build the environments were stored in source control.

The benefit of storing everything under version control is that you have a unique version for any given state of the artifacts used to build your environments. This allows for the repeatable build of environments, the implementation of processes around change to these artifacts, and the ability to roll back to any previous known-good state in case there are issues. High-quality and cost-effective cloud-based services such as GitHub make this an easy choice to serve as a foundation for DevOps activity.

Full Stack Automation

One of the best things about using the cloud for your infrastructure is the programmability and APIs that the cloud vendors provide. These APIs can be used to automate the entire application stack from base layer network, DNS, storage, compute, up to operating systems and serverless functions, and all the way through to the custom code in your application. Taking an infrastructure-as-code approach means having software artifacts in your source code repository and a build process that can create an entire application environment in a fully automated way. This automation can be used to drive the initial build and incremental change to development, test, and production environments.

There are good tooling options these days to achieve this kind of infrastructure automation. At the base infrastructure level, there are solutions native to cloud provider environments such as AWS CloudFormation or Google Cloud Deployment Manager. We are fans of Terraform, as it allows for the management of infrastructure in AWS, Azure, and Google from the same codebase with provider-specific modules and extensions. Once the base-level infrastructure has been provisioned, Packer images combined with configuration management tools like Ansible, Chef, or Puppet can be used to configure host-based services.
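As a flavor of what this looks like, here is a minimal Terraform sketch that provisions a single node server in AWS; the region, AMI id, and instance type are placeholders, not a recommended configuration:

# Illustrative Terraform sketch; all values are placeholders
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "algorand_node" {
  ami           = "ami-0123456789abcdef0"  # placeholder AMI id
  instance_type = "m5.large"

  tags = {
    Role = "algorand-participation-node"
  }
}

Because this definition lives in version control, the same file can stamp out identical dev, test, and prod instances, and the environment can be rebuilt from scratch on demand.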

There are a lot of benefits to be had from automating the full application stack. Automation eliminates the chance of manual errors and allows for a repeatable process. It also can drive the same stack into dev, test, and prod, thus minimizing the chances of environmental differences leading to surprises. Automation can also be used to support blue/green production deploys in which an entire new environment is built with updated code and then traffic is cut over from the existing to the new environment in a controlled fashion. In addition, it is easy to roll back in this model if there is a problem with the new environment.

Full stack automation also lends itself to the switch from thinking about servers as unique elements with individual character to managing servers as interchangeable elements. It becomes a straightforward proposition to rip and replace troublesome infrastructure and to use tightly-focused servers rather than sprawling snowflakes that acquire dozens of responsibilities and take on a life of their own.

Secrets Management

When you have an automated environment it is very important that the secrets that are part of your application are managed carefully. Secrets could include service passwords, API tokens, database passwords, and cryptographic keys. The management of crypto keys is particularly critical for crypto infrastructure where private keys are present, such as exchange infrastructure and validators on proof of stake networks. Read my recent blog to learn more about crypto key management using multisig accounts and offline keys.

However, a lot of the same principles apply to infrastructure, application, and crypto secrets. You want to make sure that these secrets are not in your source code repo, but rather that they are obtained at build or, better yet, at runtime in the different environments in which your application is running.

Software and platform-native tools that help protect secrets in production environments include AWS KMS/CloudHSM and Azure Key Vault, or HashiCorp Vault if you are looking for something cross-platform. Some very sensitive secrets, such as crypto private keys, can benefit from hardware key management systems such as the YubiHSM2 or Azure Dedicated HSM (based on SafeNet Luna hardware). The downside is that hardware solutions are generally less cloud-friendly than software ones and, while they may improve key security, some aspects of security are worsened by taking a hardware approach over a more automatable cloud-native software approach. The infrastructure costs and surface area that needs to be managed can also be far higher when taking a hardware-centric approach.
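As a sketch of the runtime-retrieval pattern using HashiCorp Vault's Python client (hvac), where the Vault address, token, and secret path are all placeholders:

import os
import hvac  # HashiCorp Vault client library for Python

# The address and token come from the environment at runtime,
# never from the source code repo.
client = hvac.Client(
    url=os.environ["VAULT_ADDR"],
    token=os.environ["VAULT_TOKEN"],
)

# Read a secret from the KV v2 engine at startup instead of baking it
# into the build.
secret = client.secrets.kv.v2.read_secret_version(path="myapp/prod/db")
db_password = secret["data"]["data"]["password"]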

Intel SGX is a promising hardware technology that allows processes to run in secure enclaves.  A process running in a secure enclave is totally isolated from the host operating system: even with root privileges on the guest operating system, you cannot read the memory of a process running in an SGX enclave.  I am excited by the use of SGX enclaves combined with tools like HashiCorp Vault to improve the security of software and cloud-native secrets management. SGX is available today via Azure Trusted Compute, but has the downside of requiring coding to the SGX APIs. We also eagerly await further developments of the AWS Nitro architecture, which we believe will greatly improve the security of software and cloud-native secrets management. Nitro is the AWS approach to providing hardware support for isolating customer workloads on shared infrastructure.

Topics to Cover in Part II

There are many aspects to consider when thinking about secure and reliable infrastructure for crypto based applications.  We’ve only touched on a handful of areas in this article. Here are some additional areas I cover in part II:

  • Authentication
  • Authorization and Roles
  • Networking
  • Monitoring
  • Logging

Looking for further information about infrastructure for crypto-based applications? Contact us today.

Participation Keys in Algorand

What Are Algorand Participation Keys?

In Algorand, there are 2 types of nodes: relay nodes and participation nodes. Relay nodes serve as network hubs in Algorand, relaying protocol messages very quickly and efficiently between participation nodes. Participation nodes support the consensus mechanism in Algorand by proposing and validating new blocks. Participation keys live on participation nodes and are used to sign consensus protocol messages.

A participation key in Algorand is distinct and totally separate from a spending key. When you have an account in Algorand, there is an associated spending key (or multiple keys in the case of a multi-sig account). The spending key is needed to spend funds in the account. A participation key, on the other hand, is associated with an account and is used to bring stake online on the network. Importantly, participation keys cannot be used to spend funds in the associated account; they can only be used to help support the consensus protocol.

Participation Keys Are Good

Having distinct keys for spending the Algo in an account and for staking the Algo in an account results in several key security improvements.

In any crypto network, protecting the spending keys is of the utmost importance. Situations that require having spending keys on an internet connected computer are inherently dangerous and always contain the risk of loss of funds.

In Algorand, the spending key never has to be online. The spending key can be kept on an airgapped computer or other offline setup and only used for signing transactions offline. The participation key, in contrast, lives on the participation node and signs protocol messages, but the participation key cannot spend any funds in the account.

This separation of duties across two different keys improves the security of Algorand infrastructure substantially. Spending keys can always be kept totally offline, and an attacker who manages to compromise an internet-connected participation node cannot spend or steal any of the funds in the associated account.

Of course, this doesn’t mean that participation keys shouldn’t be highly protected and secured. If an attacker does compromise a participation key, they can stand up a second participation node with the same participation key. This will result in protocol messages being double-signed, which the network will see as malicious behavior and will treat the node / associated stake as offline.

There is no bonding or slashing in Algorand, and staking rewards are still coming in the future, but regardless: being forced offline due to double signing is undesirable and means that the stake in question will no longer be supporting the consensus mechanism.

Participation Key Mechanics

My examples assume Algorand Node v1 software is installed and running in a participation node configuration on the Algorand MainNet. The software is installed using the Debian package on Ubuntu 18.04, with a standard non-multi-sig Algorand account with some Algo in it, and a separate offline computer with the spending key for the account.

To create a participation key, you will need to use the "goal account addpartkey" command and specify the account that you want to create the part key for and a validity range:

goal account addpartkey -a WHNXGKYOVIQADYS4VTYBG6SGWFIG6235C5LMXM76J3LHE475QJLIHUC5KY --roundFirstValid 789014 --roundLastValid 4283414

A few things to note: the account specified in the -a flag in the command above (WHNXGKYOVIQADYS4VTYBG6SGWFIG6235C5LMXM76J3LHE475QJLIHUC5KY) is made up, and you would need to replace it with your own account. Do not use this account, as it, and the associated spending key, are not real. Any funds sent to this address will be permanently lost.

The validity range is specified in rounds. Rounds are equivalent to blocks in Algorand. So if you, for example, want to have a key that is valid from now until a point in the future, you need to find the current block height for the roundFirstValid and a future block height for the roundLastValid flag corresponding to the validity range you want.

To find the current block height you can use the “goal node status” command:

derek@algo-node:~$ goal node status
Last committed block: 789014
Time since last block: 2.4s
Sync Time: 0.0s
Last consensus protocol: https://github.com/algorandfoundation/specs/tree/5615adc36bad610c7f165fa2967f4ecfa75125f0
Next consensus protocol: https://github.com/algorandfoundation/specs/tree/5615adc36bad610c7f165fa2967f4ecfa75125f0
Round for next consensus protocol: 789015
Next consensus protocol supported: true
Genesis ID: mainnet-v1.0
Genesis hash: wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=

The last committed block, which is the same as the current block height, is reported as 789014, so we use that for our roundFirstValid. Figuring out the right value for the roundLastValid is a little more involved.

First, you have to determine what time range you want. It is a good practice to rotate participation keys and not to create a key with a really long validity range. In our example, we will use a time range of 6 months. What round corresponds to 6 months from now?

To figure that out, we have to do a little math. 6 months is approximately 182 days. So 182 days x 24 hours / day x 60 min / hour x 60 sec / min = 15724800 seconds. At the time of writing, each round in Algorand takes about 4.5 sec. So 15724800 seconds / 4.5 seconds per block = 3494400 blocks. Now we add 3494400 to the current block height to get the height 6 months from now: 3494400 + 789014 = 4283414. This is where the 4283414 in the command above comes from for the roundLastValid.

As the network grows, the 4.5 second block time may not be a safe assumption, which may make the actual validity range somewhat different from 6 months. You need to monitor for key validity and make sure to put a new key in place before the old one expires.
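If you want to script this calculation, here is a minimal Python sketch; the current round and the 4.5-second block time are the example values from above, not live data:

SECONDS_PER_BLOCK = 4.5                  # approximate at the time of writing
current_round = 789014                   # from "goal node status"
validity_seconds = 182 * 24 * 60 * 60    # ~6 months = 15724800 seconds
blocks = int(validity_seconds / SECONDS_PER_BLOCK)  # 3494400 blocks
round_last_valid = current_round + blocks           # 4283414
print(round_last_valid)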

Once the addpartkey command has executed, you can find the participation key at:

/var/lib/algorand/mainnet-v1.0/WHNXGKYOVIQADYS4VTYBG6SGWFIG6235C5LMXM76J3LHE475QJLIHUC5KY.789014.4283414.partkey

It's beyond the scope of this article, but this file is actually a SQLite database containing many keys, which are rotated through automatically during the validity window. This is an additional security measure that is part of Algorand, where the keys used to sign protocol messages are rotated as rounds progress.

With the participation key created, the next step is to bring the account online. An account being online in Algorand means that the Algo in the account is supporting the consensus mechanism. We bring an account online by using the "goal account changeonlinestatus" command. Note that this action requires a small amount of Algo in the account to pay for the transaction. If you have the spending key for the account directly on the participation node, you can simply run this command:

goal account changeonlinestatus -a WHNXGKYOVIQADYS4VTYBG6SGWFIG6235C5LMXM76J3LHE475QJLIHUC5KY -o=1

However, having the spending key on the participation node is not recommended and kind of defeats the whole purpose of having participation keys in the first place. It is much better to have an airgapped and totally offline computer that has the spending key on it. The process is a little more involved with this setup, but it is much more secure. With this setup you would issue the following command instead:

goal account changeonlinestatus -a WHNXGKYOVIQADYS4VTYBG6SGWFIG6235C5LMXM76J3LHE475QJLIHUC5KY -o=1 -t online.tx

This will produce a transaction file called online.tx in the current directory which has an unsigned transaction to bring the account online. This transaction file then needs to be securely moved to the airgapped computer with the spending key on it. Once on the airgapped computer you can use the algokey utility to sign the transaction file. The command would be:

algokey sign -k spendingkeyfile -t online.tx -o online.tx.signed

Note that algokey is standalone and does not need a running Algorand node. Also, the spendingkeyfile is the file that has the spending key for the account. This file can be created by algokey when you first set up your account.

There is also an option to specify the spending key mnemonic instead of a file, but I find this option worse as it leaves the mnemonic in the shell history, etc. The result of this command is that online.tx.signed will be created in the current directory. This file contains the signed online transaction and it needs to be securely moved back to the running participation node.

Once you have online.tx.signed back on the participation node you can send it to the network with the following command:

goal clerk rawsend -f online.tx.signed

Wait a little bit for the transaction to be processed, and your account should now be online. The creation of a transaction file, movement to the airgapped machine to sign the transaction, movement of the signed transaction back to the online node, and then sending the signed transaction to the network is a general pattern for sending transactions in Algorand without ever putting your spending key online.

Final Thoughts on Participation Keys in Algorand

The design of Algorand using separate keys for spending funds and for participating in network consensus improves the security of nodes running on the Algorand network substantially by protecting spending keys and removing the need for them to ever be online. I think this was a good design choice and wouldn’t be surprised if other protocols adopt this approach.

 

Why We Started PureStake

Many of us at PureStake were just starting our careers in the mid-to-late 90s, during the first internet wave. Since then, we have spent the last 20-plus years building infrastructure, software, and cloud companies based on the possibilities opened up by the internet. I recall the atmosphere and feeling of those early internet days, and in the intervening years I hadn't experienced that feeling again until I started getting involved with crypto.

The crypto genie is out of the bottle, and it has unleashed forces which cannot be stopped or contained. We believe that using blockchains to move value in an open, low friction, low-cost way will have as large an impact on all of us as the internet has had in moving information in an open, low friction, low-cost way. We are only at the beginning of a historical shift where crypto networks and applications will disintermediate many existing companies, structures, and practices, replacing them with code.

While the strategic direction of this shift is clear, the particulars of how this shift will play out are harder to call. That said, we have several beliefs that we stand behind:

  1. The future will be a multi-chain future vs one-chain-to-rule-them-all. In this future, bitcoin will continue to have a foundational place in the ecosystem, but there will also be many other blockchains, each of them good at different things.
  2. Public and permissionless blockchains will lead the way in terms of innovation and interesting applications vs private and permissioned ones.
  3. Proof of Stake consensus protocols are a more scalable, more efficient, and ultimately more secure consensus mechanism versus more traditional Proof of Work consensus protocols.

As decentralized currencies, networks, and applications continue to mature and get traction, we believe there is a large opportunity to provide infrastructure as a service to support participation in and development on these decentralized networks.

We are taking all of our experience building and running cloud services and applying it to crypto infrastructure. Given that this infrastructure will be directly handling value, the security and reliability of our services must come first (and features will sometimes have to come second).

We use a software-first approach to solving problems. Treating our infrastructure as code and using software engineering best practices to deliver change to our infrastructure is one example of this. We aim to hide infrastructural complexity from our users and customers. We want to provide them with services that are simple to consume, freeing them to focus on the reasons they want to interact with the blockchain vs the details and mechanics of how to interact with the blockchain.

We will engage closely with a select number of networks that we believe in. We want to focus our energy on fewer vs more networks to be able to go deep on them to understand how they work, their nuances, their APIs, and their infrastructure needs. As we build expertise on specific networks we will be giving back to those networks in the form of services, tools, and information that help the community. Our goal is to provide secure and reliable blockchain infrastructure that participants can depend on and that developers can build upon.

The first network we are focused on is Algorand. Algorand is currently in TestNet and will be launching its MainNet soon.

Why Algorand? We personally know many of the people on the Algorand team. They have extremely talented engineering, research, and business operations teams. We believe in Silvio Micali, Steve Kokinos, and the team they have assembled. We think they can execute on a complicated and difficult roadmap in a way that has historically challenged other projects.

Our experience with the Algorand software and network has been similarly positive. The quality of the code, the security and design innovations, and the rich set of financial primitives have all made a big impression. We believe the performance we have seen on the TestNet, achieved without significant sacrifices to security or decentralization, will move the needle among public blockchains and for blockchain design in general.

We are excited to be one of the companies helping to support the upcoming Algorand MainNet network launch and look forward to engaging with participants and developers in the Algorand community.

Stay tuned for updates on our journey by signing up for our newsletter, or feel free to contact us if you are developing an Algorand application or need help with blockchain infrastructure.