Blockchain networks are dramatically changing the financial technology world as we know it, and new chains are launching all the time.

Read articles from PureStake infrastructure experts about existing and emerging blockchain networks.


Wheels on Luggage: Crypto Mass Adoption is Not a Technology Problem

Why isn’t broader crypto mass adoption happening?

At some point, everyone working in the crypto space has asked themselves why mass adoption hasn’t happened. Some common assumptions for why this is the case are that the technology is not mature enough, that it isn’t scalable enough, and that the dev tools are so much worse than web 2.0 tools.

All these things are true, but I would argue that technology is not the only reason for a lack of mass adoption.

Technology is a necessary, but not a sufficient, condition for larger scale adoption. When I hear arguments that crypto mass adoption is primarily a technology problem, I think of a story about suitcases that I once heard Robert Shiller of Yale tell in a lecture.

Adding Wheels to Luggage

The story goes like this: the idea of general vacation travel has existed since the mid-19th century, when the advent of larger-scale rail and steamship travel made it accessible to a broader audience. People have needed luggage for their travels ever since.

At first, luggage consisted of trunks. For most of the 20th century, travelers used suitcases that looked something like this:

A traditional suitcase

Image courtesy of Michael Kammerer, Wikipedia

Throughout all the decades of hauling around luggage, it wasn’t until 1972 that Bernard Sadow was granted a patent for putting wheels on a suitcase.

Bernard Sadow's Patent Image of Wheels and a Strap on a Suitcase

Image sourced from the patent listing

An early ad for Bernard Sadow's wheeled suitcase design

Image sourced from this blog

Putting wheels on suitcases seems ridiculously obvious and useful, but adoption for the new rolling suitcase in the 70s was challenging. They were seen as “wimpy” and not masculine enough for men to adopt. And the design had challenges — it tended to fishtail behind you as you dragged it forward, making it hard to maneuver.

These design problems weren’t solved in a satisfactory way until Robert Plath, a Northwest Airlines pilot, had the idea of putting two wheels on the edge of the case with a long telescoping handle. He was granted a patent for the Rollaboard in 1991:

Robert Plath's patented design for a wheeled suitcase with telescoping handle

Image sourced from the patent listing

1991! Think about that for a minute. Isn’t it incredible that people labored to carry around luggage by hand for over 100 years without thinking to put wheels on it?

The key to the Rollaboard’s mass adoption was that Plath specifically targeted flight staff as early adopters. Other travelers would see pilots and flight attendants wheeling their luggage through airports and onto airplanes. This had a two-fold effect: it provided a highly visible showcase for the utility of the technology, but it also helped overcome the stigma by associating it with “professional” travelers.

There are important lessons in this story as we think about mass adoption of crypto, or of any technology for that matter. There is no arguing that the technology (the wheel) was well developed and understood by literally everyone. The obvious questions: why did it take 100 years for someone to come up with the idea of putting wheels directly on luggage? Once it was discovered, why were there still adoption challenges? And what, ultimately, was the key to mass adoption?

Is Crypto Heading Toward 100 Years Without Wheels?

This story clearly shows that technology can exist for very long periods of time before anyone thinks to apply it to a particular problem. This application doesn’t happen automatically or inevitably. It can be right in front of you and still not be obvious. It is only obvious in retrospect.

Given this fact, the argument that “we just need to keep building and scaling crypto infrastructure and technology, and that adoption will follow” seems quite wrong. By that logic, we could be waiting 100 years for broader crypto adoption to happen.

Technology is a necessary, but not a sufficient, condition for broader mass adoption of a particular product or solution. For crypto, I would argue that we have already met basic functional product requirements, and that the scalability of the technology, while obviously important and in need of further work, is not the main challenge we are facing.

What is also needed are changing conditions that make the solution more relevant. In the case of wheeled luggage, this was the expansion of air travel and of airports where people had to transport their own luggage over longer and longer distances. For crypto, I think this is the increasing digitization of everything we do (software “eating the world”), with value-bearing systems being some of the last systems to get open and natively internet-centric versions. It feels like those conditions are increasingly being met.

So what is preventing the mass adoption of crypto that we are all waiting for? I think it boils down to needing a catalyst: something that will serve the same purpose for crypto that flight crews with their Rollaboards did for wheeled luggage. A scenario in which relatable people use crypto to solve a problem better and more efficiently than other methods.

I’ve spent time thinking about what this scenario would be for crypto, and who the key early adopters could be. I don’t know what the answers are. But I don’t think hodling Bitcoins as a store of value is the catalyst. It has to be something that regular people can relate to, and that can change their existing mental models about how things are supposed to work. My best guess at this point is that Millennials on the other side of an increasing wealth gap will drive this change. Maybe DeFi and the returns it can generate over and above existing investments can act as a catalyst. But whatever it ends up being, I’m entirely confident that it will happen.

How to Become a Polkadot Validator Blog Banner Image

Here Are the 4 Factors That Convinced Us to Become a Polkadot Validator

We have spent the last several months researching existing and soon-to-be-launched public blockchains, and we have come to the conclusion that Polkadot is an extremely ambitious and interesting project. Many teams are already building projects to be ready by the time the MainNet launches.

Given PureStake’s infrastructure and DevOps expertise as a company, the obvious way for us to engage is as a validator helping to secure the network. From there, we may expand to additional services within the Polkadot ecosystem.

This post will go into some of the rationale that led us to this decision. One of the most important points that influenced us is that the Polkadot vision aligns well with our vision of a multichain future. We also think that developer adoption is key to the success of next generation chains, and Polkadot is well positioned in this regard.

How Polkadot is Different

Too many chains are trying to do everything and be good at everything. The idea of blockchains that can talk to each other opens the door for specialization. Much like the Unix philosophy, individual Polkadot parachains can focus on doing one thing and doing it well, and larger effects can be achieved through the composability of different components.

Polkadot has the ability to accelerate innovation by significantly reducing the barriers to blockchain development, providing rich ground for experimentation. The Polkadot MainNet launch is fast approaching, and we are excited to provide secure and reliable validation services for the network. What follows are the reasons we chose Polkadot over all the other networks out there.

Top 4 Reasons PureStake is Validating on Polkadot

Reason #1: Formidable Ecosystem & Leadership

Even though Polkadot hasn’t launched yet, they have already amassed an impressive ecosystem of notable developers, validators, partners, and projects, not least of which is Parity itself.

In addition to leading the development of Polkadot, Parity has a strong history and track record of delivering crypto infrastructure projects at production-grade performance and quality levels. The Parity Ethereum client is currently supporting a large part of the production Ethereum MainNet, and thus already supporting billions of dollars of crypto value. The Parity team, including Gavin Wood, are very close to Ethereum and familiar with all of its shortcomings; as a result, they are well-positioned to address Ethereum’s critical scalability challenges with Polkadot.

While Polkadot obviously has not yet built a community the size of Ethereum’s, it has already generated plenty of information, documentation, chat group activity, and videos that make it relatively easy for newcomers to get up to speed.

At the launch of its MainNet, Polkadot will have substantial scalability and programmability advantages over Ethereum 1.0. Until Ethereum 2.0 becomes a reality, Polkadot seems well-positioned to gain developer traction, including by stealing away some market share from Ethereum.

Reason #2: Flexible Underlying Framework (Substrate)

Polkadot is built on Substrate, an impressive developer framework that can be used to build a Polkadot-compatible blockchain. Substrate is a very powerful framework for developing blockchain applications, and it provides a lot of choices to developers who are looking to build decentralized applications.

If you want full control over your blockchain, you can use Substrate Core to build an application-specific blockchain that won’t even be part of the Polkadot network. Developing a blockchain this way will be much faster than building one entirely from scratch, as Substrate handles many of the low-level subsystems you will need out of the box.

Substrate gives you flexibility, though. Rather than using Substrate Core, you could pull from the SRML (Substrate Runtime Module Library) to plug in already-developed functionality for things like accounts and balances, fungible tokens, consensus mechanisms, and smart contracts. Alternatively, you could opt for the highest level of abstraction, Substrate Node, to get up and running with a custom blockchain very quickly and efficiently.

The quality and functionality of Substrate will almost certainly help draw developers in and spur adoption of Polkadot.

Reason #3: Scalable Design

Polkadot implements a Proof of Stake-based consensus mechanism on its main relay chain that uses a scheme called Nominated Proof of Stake. Proof of Stake-based consensus mechanisms offer several significant advantages over Proof of Work and other consensus algorithms, including scalability.

Right now, on the Polkadot Alexander TestNet, blocks are being produced roughly every 6 seconds. This is significantly faster than Ethereum (which is currently producing blocks every 13-13.5 seconds) and provides a scalable foundation for the rest of the system.

Another way that Polkadot achieves scalability is by parallelizing execution using parachains. Each parachain can have its own blockchain, and each of these parachains connects back to the main relay chain. By parallelizing execution into many parachains, Polkadot will inherently be much more scalable than a single chain network like Ethereum — at least until Ethereum 2.0 and its very similar concept of shards is realized.

Parachains will allow for more transaction throughput through parallelization, but separating transactions into different parachains can also provide economic scalability. Developers of applications occupying parachain slots have control over the economics of transactions: they can make certain classes of transactions less expensive, or perhaps even free, as opposed to the single economic model in use on more traditional single-blockchain systems. This will allow developers to optimize their applications on Polkadot to achieve cost scalability when deployed.

Reason #4: Solid Security Posture

There are many parts of the Polkadot design that provide compelling security advantages, but there are two examples that stand out.

Stash Accounts and Controller Accounts

In our experience running crypto infrastructure at PureStake, a lot of time is spent worrying about key security, particularly keys that need to be warm or hot and online, versus cold and offline.

For a Polkadot validator, there are three different types of accounts and keys involved in the setup: a stash account, a controller account, and a session account. The stash account can be totally cold and offline — where you keep your funds. The controller account is warm, but needs to hold only a very minimal set of funds to perform certain specific transactions. And the session account is hot, but has no funds in it.

This design is much more secure than that of almost any other crypto network, since it allows you to keep essentially all of your funds cold and offline.

Shared Validators

The shared validator security model in Polkadot provides security-as-a-service for all of the parachains.

This is quite different from Cosmos (the other major next-gen network enabling parallelized application-specific blockchains). In Cosmos, each zone is on its own to recruit validators for security.

There are a lot of next-gen blockchains launching in 2019 and 2020 with some form of Delegated Proof of Stake that need professional validators to help secure their networks, and there simply aren’t enough professional validators to go around. By having a shared security model, Polkadot has removed a big barrier to launching a parachain, which should speed up adoption of the network.

What’s Next for PureStake as an Early Polkadot Validator

To date, PureStake has been providing node, API, and other infrastructure services for blockchain networks, including supporting the Algorand MainNet launch in June.

Now we are expanding our services to become a validator on Polkadot and the Kusama BetaNet (in preparation for the Polkadot MainNet launch). We will leverage a lot of what we’ve already built to support the Algorand network — the skills on the team, existing infrastructure, and code — to deliver highly reliable and secure validator services for Polkadot stakers. That includes:

  • Base compute infrastructure
  • Base storage infrastructure including blockchain snapshot / restore
  • Base network, VPC, VPN infrastructure
  • Authentication and authorization services
  • DevOps automation stack
  • Multi-cloud approach across AWS, Azure, and Google
  • Elastic load-balancer and firewall infrastructure
  • IDS, IPS, vulnerability management services
  • OS patch management and automation
  • Key and secrets management infrastructure
  • Monitoring and alerting infrastructure
  • Log collection and analysis infrastructure
  • DevOps and SecOps processes and reporting

Since only minimal work is needed to port the elements above, we can focus our energy on elements that need more adaptation to support Polkadot validation. Some of these areas include:

  • Validator infrastructure design: create our version of the standard sentry / validator design to support the validation requirements in Polkadot’s NPoS design
  • Extend our cloud automation: support the VPC, VPN, and other networking elements that are part of the validator design
  • Update our DevOps automation at the node and blockchain storage levels to support Polkadot-specific requirements
  • Enhance our monitoring, alerting, and logging / log analysis for Polkadot
  • Add support for Polkadot keys and secrets so they can be managed securely
  • Train our DevOps team on all things Polkadot so they can effectively manage and troubleshoot the services

We expect the PureStake validator infrastructure to be ready in time for the Kusama BetaNet switch to Proof of Stake. If you’d like to learn more about our Polkadot validator or other services we are planning, drop us a line.

Buy vs. Build Infrastructure-as-a-Service Blog Featured Image

Advantages of Buying Blockchain Infrastructure-as-a-Service vs. BUIDLing It Yourself

In everyday life, we are faced with decisions to either buy readymade solutions or to build something from scratch.

Whether it’s a large purchase like buying a home, or something considerably smaller like choosing between two couches, there are pros and cons to each side. Do you want to stay up until 2:30 in the morning putting together a couch? For the right price, a lot of folks would say, “Absolutely,” while others would say, “No shot!”

With the rise of blockchain as a viable platform, the business community seems to be posed with this question at all levels. Cost, effort, risk, focus, and quality all factor into every decision a company makes, including whether to build the infrastructure that runs these applications and platforms, or to pursue a third-party vendor that offers blockchain infrastructure-as-a-service.

Risk vs. Reward

The allure of blockchain is real: the technology as a whole promises dramatic cost savings (up to 70%!) to banks and financial institutions. Since up to two-thirds of those costs are attributable to infrastructure, it’s imperative to pursue an infrastructure strategy that captures as much of that cost savings as possible.

But blockchain projects can be deceivingly costly. A recent report on government-sponsored blockchain projects revealed that the median project cost is $10-13 million.

At first glance, building infrastructure in-house seems like the most cost-effective way to approach the blockchain: there are no licensing fees, and your company is in complete control.

Of course, there are always trade-offs: an in-house infrastructure project is very taxing on your organization’s resources.


Your team must have the time and operational skillset to build out a secure infrastructure that is scalable enough to support your blockchain network of choice. Those skills are hard to come by: blockchain skills are among the most sought-after, and the rate for a freelance blockchain developer hovers around $81-100 per hour in the US, sometimes going as high as $140+ per hour.

It’s easy to underestimate how much time it will take to create, and whether your team really has the skills and ability to create the offering.

In addition to infrastructure, you’ll also need to build solutions, or secure vendors, to address storage needs, network speeds, encryption, smart contract development, UX/UI, and more. Each of those initiatives is going to require additional dedicated budget.

The question then becomes: what kind of advantage does this create? Much of this will depend upon the number of partially or fully decentralized applications (DApps) that you plan to run on it, and how many of them can and will share the same underlying infrastructure.

The ‘aaS’ Revolution

When evaluating your options for a new project, the project planning stage is always tricky. You’ll need to do a full scoping of the project, allocate responsibilities, and create a vendor vetting process.

In the past it was easy: you went out, bought some software and hardware, and got to building. But once the internet made it easier for companies to provide ‘as-a-service’ offerings, it added a layer of complexity for IT and engineering teams as to what options made sense for their project or organization. Salesforce began to displace massive Oracle on-premises implementations. Broadsoft started to displace PBXs and made phone closets a thing of the past. The list goes on and on with applications that replaced their on-prem brethren from years prior because the ongoing maintenance and upkeep was a headache for IT teams to manage. Why keep all of the infrastructure under your management when you could push all that work onto an infrastructure-as-a-service company’s plate, since their primary focus was supporting that exact technology?

This is great from an IT perspective, but what about for engineers and developers? Don’t they need to be able to store their code and applications locally? Don’t they need to own all of the pieces that tie in to their application? Oh and security, THE SECURITY!

Sorry for being dramatic, but the answer is no. These are all valid concerns; however, many of them can be addressed by working with the right service provider for your needs.

Uber is a great example of leveraging third-party service platforms to create an application. Did they need to go out and create a maps platform? Nope, they used Google for routing and tracking. Did they need to go out and spin up their own messaging and voice servers? Nope, they used Twilio for their communication services. They took a buy-centric approach, which enabled them to focus on their core application and removed the need to work on things outside of their core skill set.

How We Apply This to Blockchain

How difficult is it to build? How costly is it to manage? Do we have the skillset to support it? These are all questions that companies ask themselves when looking at making an investment for any kind of infrastructure.

On top of the infrastructure, it only takes a few minutes to realize that DevOps is really hard to do well. Making sure that the investments you’re making align with your team’s skill set is critical for your success. So if you’re looking around, saying “We need to bring in DevOps engineers for our Algorand project,” then HARD STOP! Read on.

PureStake was created with this exact use case in mind. We provide secure and scalable blockchain infrastructure-as-a-service to help everyone from investors to developers better interact with the Algorand network. We’ve recently launched an API service that provides an on-ramp for any application looking to build on Algorand’s pure proof-of-stake network. We offer a variety of subscriptions so that, regardless of size or budget (we have free, and free is good), you’ll be able to utilize our service and start interacting with the Algorand network within minutes.


Participation Keys in Algorand Blog Banner Image

Participation Keys in Algorand

What Are Algorand Participation Keys?

In Algorand, there are two types of nodes: relay nodes and participation nodes. Relay nodes serve as network hubs in Algorand, relaying protocol messages very quickly and efficiently between participation nodes. Participation nodes support the consensus mechanism in Algorand by proposing and validating new blocks. Participation keys live on participation nodes and are used to sign consensus protocol messages.

A participation key in Algorand is distinct and totally separate from a spending key. When you have an account in Algorand, there is an associated spending key (or multiple keys in the case of a multi-sig account). The spending key is needed to spend funds in the account. A participation key, on the other hand, is associated with an account and is used to bring stake online on the network. Importantly, participation keys cannot be used to spend funds in the associated account; they can only be used to help support the consensus protocol.

Participation Keys Are Good

Having distinct keys for spending the Algo in an account, and staking the Algo in an account, results in several key security improvements.

In any crypto network, protecting the spending keys is of the utmost importance. Situations that require having spending keys on an internet connected computer are inherently dangerous and always contain the risk of loss of funds.

In Algorand, the spending key never has to be online. The spending key can be kept on an airgapped computer or other offline setup and only used for signing transactions offline. The participation key, in contrast, lives on the participation node and signs protocol messages, but the participation key cannot spend any funds in the account.

This separation of duties across two different keys substantially improves the security of Algorand infrastructure. Spending keys can always be kept totally offline, and an attacker who compromises an internet-connected participation node cannot spend or steal any of the funds in the associated account.

Of course, this doesn’t mean that participation keys shouldn’t be highly protected and secured. If an attacker does compromise a participation key, they can stand up a second participation node with the same participation key. This will result in protocol messages being double-signed, which the network will see as malicious behavior and will treat the node / associated stake as offline.

There is no bonding or slashing in Algorand, and staking rewards are still coming in the future, but regardless: being forced offline due to double signing is undesirable and means that the stake in question will no longer be supporting the consensus mechanism.

Participation Key Mechanics

My examples assume Algorand Node v1 software is installed and running in a participation node configuration on the Algorand MainNet. The software is installed using the Debian package on Ubuntu 18.04, with a standard non-multi-sig Algorand account with some Algo in it, and a separate offline computer with the spending key for the account.

To create a participation key, you will need to use the “goal account addpartkey” command and specify the account that you want to create the participation key for, along with a validity range:

goal account addpartkey -a WHNXGKYOVIQADYS4VTYBG6SGWFIG6235C5LMXM76J3LHE475QJLIHUC5KY --roundFirstValid 789014 --roundLastValid 4283414

A few things to note. The account specified with the -a flag in the command above (WHNXGKYOVIQADYS4VTYBG6SGWFIG6235C5LMXM76J3LHE475QJLIHUC5KY) is made up, and you would need to replace it with your own account address. Do not use this account, as it and the associated spending key are not real. Any funds sent to this address will be permanently lost.

The validity range is specified in rounds. Rounds are equivalent to blocks in Algorand. So if you, for example, want to have a key that is valid from now until a point in the future, you need to find the current block height for the roundFirstValid and a future block height for the roundLastValid flag corresponding to the validity range you want.

To find the current block height you can use the “goal node status” command:

derek@algo-node:~$ goal node status
Last committed block: 789014
Time since last block: 2.4s
Sync Time: 0.0s
Last consensus protocol:
Next consensus protocol:
Round for next consensus protocol: 789015
Next consensus protocol supported: true
Genesis ID: mainnet-v1.0
Genesis hash: wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=

The last committed block, which is the same as the current block height, is reported as 789014, so we use that for our roundFirstValid. Figuring out the right value for the roundLastValid is a little more involved.

First, you have to determine what time range you want. It is a good practice to rotate participation keys and not to create a key with a really long validity range. In our example, we will use a time range of 6 months. What round corresponds to 6 months from now?

To figure that out, we have to do a little math. 6 months is approximately 182 days. So 182 days x 24 hours / day x 60 min / hour x 60 sec / min = 15724800 seconds. At the time of writing, each round in Algorand takes about 4.5 seconds. So 15724800 seconds / 4.5 seconds per block = 3494400 blocks. Now we add 3494400 to the current block height to get the height 6 months from now: 3494400 + 789014 = 4283414. This is where the 4283414 in the command above comes from for the roundLastValid.
As the network grows, the 4.5 second block time may not be a safe assumption. This may make the validity range slightly different than 6 months. You need to monitor for key validity and make sure to put a new key in place before the old one expires.
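The arithmetic above is easy to redo with a fresh current round if you script it. Here is a minimal sketch in shell, assuming the example’s current round (789014) and the ~4.5-second average block time; the variable names are my own:

```shell
# Sketch of the roundLastValid calculation for a ~6 month validity window.
# Assumes the example's current round (789014) and an ~4.5s average block
# time, which can drift as the network changes.
CURRENT_ROUND=789014
SECONDS_SIX_MONTHS=$((182 * 24 * 60 * 60))   # 15724800 seconds in ~6 months
# Shell arithmetic is integer-only, so divide by 4.5 as (x * 10) / 45
BLOCKS=$((SECONDS_SIX_MONTHS * 10 / 45))     # 3494400 blocks
ROUND_LAST_VALID=$((CURRENT_ROUND + BLOCKS))
echo "roundLastValid: $ROUND_LAST_VALID"     # roundLastValid: 4283414
```

In practice you would substitute the “Last committed block” value from goal node status for CURRENT_ROUND and re-measure the average block time before trusting the result.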

Once the addpartkey command has executed, you can find the participation key at:


It’s beyond the scope of this article, but this file is actually a SQLite database containing a number of keys that are rotated through internally and automatically during the validity window. This is an additional security measure in Algorand, where the keys used to sign protocol messages are rotated as rounds progress.

With the participation key created, the next step is to bring the account online. An account being online in Algorand means that the Algo in the account is supporting the consensus mechanism. We bring an account online by using the “goal account changeonlinestatus” command. Note that this action requires a small amount of Algo in the account to pay for the transaction. If you have the spending key for the account directly on the participation node, you can simply run this command:

goal account changeonlinestatus -a WHNXGKYOVIQADYS4VTYBG6SGWFIG6235C5LMXM76J3LHE475QJLIHUC5KY -o=1

However, having the spending key on the participation node is not recommended and kind of defeats the whole purpose of having participation keys in the first place. It is much better to have an airgapped and totally offline computer that has the spending key on it. The process is a little more involved with this setup, but it is much more secure. With this setup you would issue the following command instead:

goal account changeonlinestatus -a WHNXGKYOVIQADYS4VTYBG6SGWFIG6235C5LMXM76J3LHE475QJLIHUC5KY -o=1 -t online.tx

This will produce a transaction file called online.tx in the current directory, containing an unsigned transaction to bring the account online. This transaction file then needs to be securely moved to the airgapped computer holding the spending key. Once on the airgapped computer, you can use the algokey utility to sign the transaction file. The command would be:

algokey sign -k spendingkeyfile -t online.tx -o online.tx.signed

Note that algokey is standalone and does not need a running Algorand node. Also, the spendingkeyfile is the file that has the spending key for the account. This file can be created by algokey when you first set up your account.

There is also an option to specify the spending key mnemonic instead of a file, but I find this option worse as it leaves the mnemonic in the shell history, etc. The result of this command is that online.tx.signed will be created in the current directory. This file contains the signed online transaction and it needs to be securely moved back to the running participation node.

Once you have online.tx.signed back on the participation node you can send it to the network with the following command:

goal clerk rawsend -f online.tx.signed

Wait a little bit for the transaction to be processed, and your account should now be online. Creating a transaction file, moving it to the airgapped machine for signing, moving the signed transaction back to the online node, and then sending it to the network is the general pattern for sending transactions in Algorand without ever putting your spending key online.

Final Thoughts on Participation Keys in Algorand

Algorand’s design of using separate keys for spending funds and for participating in network consensus substantially improves the security of nodes running on the Algorand network by protecting spending keys and removing the need for them ever to be online. I think this was a good design choice, and I wouldn’t be surprised if other protocols adopt this approach.