

Here Are the 4 Factors That Convinced Us to Become a Polkadot Validator

We have spent the last several months researching existing and soon-to-be-launched public blockchains, and we have concluded that Polkadot is an extremely ambitious and interesting project. Many teams are already building projects so they will be ready by the time the MainNet launches.

Given PureStake’s infrastructure and DevOps expertise as a company, the obvious way for us to engage is as a validator helping to secure the network. From there, we may expand to additional services within the Polkadot ecosystem.

This post will go into some of the rationale that led us to this decision. One of the most important factors is that the Polkadot vision aligns well with our own vision of a multichain future. We also think that developer adoption is key to the success of next-generation chains, and Polkadot is well-positioned in this regard.

How Polkadot is Different

Too many chains are trying to do everything and be good at everything. The idea of blockchains that can talk to each other opens the door for specialization. Much like the Unix philosophy, individual Polkadot parachains can focus on doing one thing and doing it well, and larger effects can be achieved by composing different components.

Polkadot has the ability to accelerate innovation by significantly lowering the barriers to blockchain development and providing rich ground for experimentation. The Polkadot MainNet launch is fast approaching, and we are excited to provide secure and reliable validation services for the network. What follows are the reasons we chose Polkadot over all the other networks out there.

Top 4 Reasons PureStake is Validating on Polkadot

Reason #1: Formidable Ecosystem & Leadership

Even though Polkadot hasn’t launched yet, it has already amassed an impressive ecosystem of notable developers, validators, partners, and projects, not least of which is Parity itself.

In addition to leading the development of Polkadot, Parity has a strong history and track record of delivering crypto infrastructure projects at production-grade performance and quality levels. The Parity Ethereum client currently supports a large part of the production Ethereum MainNet, and thus already secures billions of dollars of crypto value. The Parity team, including Gavin Wood, is very close to Ethereum and familiar with its shortcomings; as a result, the team is well-positioned to address Ethereum’s critical scalability challenges with Polkadot.

While Polkadot obviously has not yet built a community the size of Ethereum’s, it has already generated tons of information, documentation, chat group activity, and videos that make it relatively easy for newcomers to get up to speed.

At the launch of the Polkadot MainNet, it will have substantial scalability and programmability advantages over Ethereum 1.0. Until Ethereum 2.0 becomes a reality, Polkadot seems well-positioned to gain developer traction, including by stealing away some market share from Ethereum.

Reason #2: Flexible Underlying Framework (Substrate)

Polkadot is built on Substrate, an impressive developer framework that can be used to build Polkadot-compatible blockchains. Substrate is a very powerful framework for developing blockchain applications, and it provides a lot of choices to developers looking to build decentralized applications.

If you want full control over your blockchain, you can use Substrate Core to build an application-specific blockchain that won’t even be part of the Polkadot network. Developing a blockchain this way is much faster than rolling your own from scratch, as Substrate handles, out of the box, many of the low-level subsystems you will need.

Substrate gives you flexibility, though. Rather than using Substrate Core, you could pull from the SRML (Substrate Runtime Module Library) to plug in already-developed modules for things like accounts and balances, fungible tokens, consensus mechanisms, and smart contract functionality. Alternatively, you could opt for the highest level of abstraction, Substrate Node, to get up and running with a custom blockchain very quickly and efficiently.

The quality and functionality of Substrate will almost certainly help draw developers in and spur adoption of Polkadot.

Reason #3: Scalable Design

Polkadot implements a Proof of Stake-based consensus mechanism on its main relay chain that uses a scheme called Nominated Proof of Stake (NPoS). Proof of Stake-based consensus mechanisms offer several significant advantages over Proof of Work and other consensus algorithms, including scalability.

Right now, on the Polkadot Alexander TestNet, blocks are being produced roughly every 6 seconds. This is significantly faster than Ethereum (which is currently producing blocks every 13-13.5 seconds) and provides a scalable foundation for the rest of the system.

Another way that Polkadot achieves scalability is by parallelizing execution using parachains. Each parachain is its own blockchain, and each connects back to the main relay chain. By parallelizing execution across many parachains, Polkadot will inherently be much more scalable than a single-chain network like Ethereum, at least until Ethereum 2.0 and its very similar concept of shards becomes a reality.

Parachains will allow for more transaction throughput through parallelization, but separating transactions into different parachains can also provide economic scalability. Developers of applications occupying parachain slots have control over the economics of transactions: they can make certain classes of transactions less expensive, or perhaps even free, rather than being locked into the single economic model used by more traditional single-blockchain systems. This will allow developers to optimize their applications on Polkadot for cost scalability when deployed.

Reason #4: Solid Security Posture

There are many parts of the Polkadot design that provide compelling security advantages, but there are two examples that stand out.

Stash Accounts and Controller Accounts

In our experience running crypto infrastructure at PureStake, a lot of time is spent worrying about key security, particularly keys that need to be warm or hot and online, versus cold and offline.

For a Polkadot validator, there are three different types of accounts and keys involved in the setup: a stash account, a controller account, and a session account. The stash account, where you keep your funds, can be totally cold and offline. The controller account is warm, but needs to hold only a very minimal amount of funds to perform certain specific transactions. And the session account is hot, but has no funds in it.

This design is much more secure than that of almost any other crypto network, since it allows you to keep essentially all of your funds cold and offline.

Shared Validators

The shared validator security model in Polkadot provides security-as-a-service for all of the parachains.

This is quite different from Cosmos, the other major next-gen network enabling parallelized, application-specific blockchains. In Cosmos, each zone is on its own to recruit validators for security.

There are a lot of next-gen blockchains launching in 2019 and 2020 with some form of Delegated Proof of Stake that need professional validators to help secure their networks. There simply aren’t enough professional validators to go around and secure all of these networks. By having a shared security model, Polkadot has removed a big barrier to launching a parachain, which should speed up adoption of the network.

What’s Next for PureStake as an Early Polkadot Validator

To date, PureStake has been providing node, API, and other infrastructure services for blockchain networks, including supporting the Algorand MainNet launch in June.

Now we are expanding our services to become a validator on Polkadot and the Kusama BetaNet (in preparation for the Polkadot MainNet launch). We will leverage a lot of what we’ve already built to support the Algorand network — the skills on the team, existing infrastructure, and code — to deliver highly reliable and secure validator services for Polkadot stakers. That includes:

  • Base compute infrastructure
  • Base storage infrastructure including blockchain snapshot / restore
  • Base network, VPC, VPN infrastructure
  • Authentication and authorization services
  • DevOps automation stack
  • Multi-cloud approach across AWS, Azure, and Google
  • Elastic load-balancer and firewall infrastructure
  • IDS, IPS, vulnerability management services
  • OS patch management and automation
  • Key and secrets management infrastructure
  • Monitoring and alerting infrastructure
  • Log collection and analysis infrastructure
  • DevOps and SecOps processes and reporting

Since only minimal work is needed to port the elements above, we can focus our energy on elements that need more adaptation to support Polkadot validation. Some of these areas include:

  • Validator infrastructure design: create our version of the standard sentry / validator architecture to support the validation requirements of Polkadot’s NPoS scheme
  • Extend our cloud automation: support the VPC, VPN, and other networking elements that are part of the validator design
  • Update our DevOps automation at the node and blockchain storage levels to support Polkadot-specific requirements
  • Enhance our monitoring, alerting, and logging / log analysis for Polkadot
  • Add support for Polkadot keys and secrets so they can be managed securely
  • Train our DevOps team on all things Polkadot so they can effectively manage and troubleshoot the services

We expect the PureStake validator infrastructure to be ready in time for the Kusama BetaNet switch to Proof of Stake. If you’d like to learn more about our Polkadot validator or other services we are planning, drop us a line.


Advantages of Buying Blockchain Infrastructure-as-a-Service vs. BUIDLing It Yourself

In everyday life, we are faced with decisions to either buy readymade solutions or to build something from scratch.

Whether it’s a large purchase like buying a home, or something considerably smaller like choosing between two couches, there are pros and cons to each side. Do you want to stay up until 2:30 in the morning putting together a couch? For the right price, a lot of folks would say, “Absolutely,” while others would say, “No shot!”

With the rise of blockchain as a viable platform, the business community seems to be posed with this question at all levels. Cost, effort, risk, focus, and quality all factor into every decision a company makes, including whether to build the infrastructure that runs these applications and platforms, or to pursue a third-party vendor that offers blockchain infrastructure-as-a-service.

Risk vs. Reward

The allure of blockchain is real: the technology as a whole promises dramatic cost savings (up to 70%!) to banks and financial institutions. Since up to two-thirds of those costs are attributable to infrastructure, it’s imperative to pursue an infrastructure strategy that captures as much of that cost savings as possible.

But blockchain projects can be deceivingly costly. A recent report on government-sponsored blockchain projects revealed that the median project cost is $10-13 million.

At first glance, building infrastructure in-house seems like the most cost-effective way to approach blockchain: there are no licensing fees, and your company is in complete control.

Of course, there are always trade-offs: an in-house infrastructure project is very taxing on your organization’s resources.


Your team must have the time and operational skillset to build out a secure infrastructure that is scalable enough to support your blockchain network of choice. Those skills are hard to come by: blockchain expertise is among the most sought-after in the industry. The rate for a freelance blockchain developer hovers around $81-100 per hour in the US, sometimes going as high as $140+ per hour.

It’s easy to underestimate how much time the build will take, and to assume your team has the skills and ability to deliver the offering when it may not.

In addition to infrastructure, you’ll also need to build solutions or secure vendors to address storage needs, network speeds, encryption, smart contract development, UX/UI, and more. Each of those initiatives is going to require additional dedicated budget.

The question then becomes: what kind of advantage does this create? Much of this will depend upon the number of partially or fully decentralized applications (DApps) that you plan to run on it, and how many of them can and will share the same underlying infrastructure.

The ‘aaS’ Revolution

When evaluating your options for a new project, the planning stage is always tricky. You’ll need to do a full scoping of the project, allocate responsibilities, and create a vendor vetting process.

In the past it was easy: you went out, bought some software and hardware, and got to building. But once the internet made it easier for companies to provide ‘as-a-service’ offerings, it added a layer of complexity for IT and engineering teams as to what options made sense for their project or organization. Salesforce began to displace massive Oracle on-premises implementations. BroadSoft started to displace PBXs and made phone closets a thing of the past. The list goes on and on with applications that replaced their on-prem brethren from years prior because the ongoing maintenance and upkeep was a headache for IT teams to manage. Why keep all of the infrastructure under your management when you could push all that work onto an infrastructure-as-a-service company’s plate, since their primary focus is supporting that exact technology?

This is great from an IT perspective, but what about for engineers and developers? Don’t they need to be able to store their code and applications locally? Don’t they need to own all of the pieces that tie in to their application? Oh and security, THE SECURITY!

Sorry for being dramatic, but the answer is no. These are all valid concerns; however, many of them can be addressed by working with the right service provider for your needs.

Uber is a great example of leveraging third-party service platforms to create an application. Did they need to go out and create a maps platform? Nope, they used Google for routing and tracking. Did they need to go out and spin up their own messaging and voice servers? Nope, they used Twilio for their communication services. They took a buy-centric approach, which enabled them to focus on their core application and removed the need to worry about things outside of their core skill set.

How We Apply This to Blockchain

How difficult is it to build? How costly is it to manage? Do we have the skillset to support it? These are all questions that companies ask themselves when looking at making an investment for any kind of infrastructure.

On top of the infrastructure, it only takes a few minutes to realize that DevOps is really hard to do well. Making sure that the investments you’re making align with your team’s skill set is critical for your success. So if you’re looking around, saying “We need to bring in DevOps engineers for our Algorand project,” then HARD STOP! Read on.

PureStake was created with this exact use case in mind. We provide secure and scalable blockchain infrastructure-as-a-service to help everyone from investors to developers better interact with the Algorand network. We’ve recently launched an API service that provides an on-ramp to Algorand for any application looking to build on its pure proof-of-stake network. We offer a variety of subscriptions so that, regardless of size or budget (we have free, and free is good), you’ll be able to use our service and start interacting with the Algorand network within minutes.

 


Participation Keys in Algorand

What Are Algorand Participation Keys?

In Algorand, there are 2 types of nodes: relay nodes and participation nodes. Relay nodes serve as network hubs in Algorand, relaying protocol messages very quickly and efficiently between participation nodes. Participation nodes support the consensus mechanism in Algorand by proposing and validating new blocks. Participation keys live on participation nodes and are used to sign consensus protocol messages.

A participation key in Algorand is distinct and totally separate from a spending key. When you have an account in Algorand, there is an associated spending key (or multiple keys in the case of a multi-sig account). The spending key is needed to spend funds in the account. A participation key, on the other hand, is associated with an account and is used to bring its stake online on the network. Importantly, participation keys cannot be used to spend funds in the associated account; they can only be used to help support the consensus protocol.

Participation Keys Are Good

Having distinct keys for spending the Algo in an account and for staking that Algo results in several key security improvements.

In any crypto network, protecting the spending keys is of the utmost importance. Situations that require having spending keys on an internet-connected computer are inherently dangerous and always carry the risk of loss of funds.

In Algorand, the spending key never has to be online. The spending key can be kept on an airgapped computer or other offline setup and only used for signing transactions offline. The participation key, in contrast, lives on the participation node and signs protocol messages, but the participation key cannot spend any funds in the account.

This separation of duties across two different keys substantially improves the security of Algorand infrastructure. Spending keys can always be kept totally offline, and an attacker who manages to compromise an internet-connected participation node cannot spend or steal any of the funds in the associated account.

Of course, this doesn’t mean that participation keys shouldn’t be highly protected and secured. If an attacker does compromise a participation key, they can stand up a second participation node with the same participation key. This will result in protocol messages being double-signed, which the network will see as malicious behavior, and it will treat the node and its associated stake as offline.

There is no bonding or slashing in Algorand, and staking rewards are still coming in the future, but regardless: being forced offline due to double signing is undesirable and means that the stake in question will no longer be supporting the consensus mechanism.

Participation Key Mechanics

My examples assume Algorand Node v1 software is installed and running in a participation node configuration on the Algorand MainNet. The software is installed using the Debian package on Ubuntu 18.04, with a standard non-multi-sig Algorand account with some Algo in it, and a separate offline computer with the spending key for the account.

To create a participation key, you will need to use the “goal account addpartkey” command and specify the account that you want to create the participation key for, along with a validity range:

goal account addpartkey -a WHNXGKYOVIQADYS4VTYBG6SGWFIG6235C5LMXM76J3LHE475QJLIHUC5KY --roundFirstValid 789014 --roundLastValid 4283414

A few things to note: the account specified with the -a flag in the command above (WHNXGKYOVIQADYS4VTYBG6SGWFIG6235C5LMXM76J3LHE475QJLIHUC5KY) is made up, and you would need to replace it with your own account. Do not use this account, as it and the associated spending key are not real. Any funds sent to this address will be permanently lost.

The validity range is specified in rounds. Rounds are equivalent to blocks in Algorand. So if you, for example, want to have a key that is valid from now until a point in the future, you need to find the current block height for the roundFirstValid and a future block height for the roundLastValid flag corresponding to the validity range you want.

To find the current block height you can use the “goal node status” command:

derek@algo-node:~$ goal node status
Last committed block: 789014
Time since last block: 2.4s
Sync Time: 0.0s
Last consensus protocol: https://github.com/algorandfoundation/specs/tree/5615adc36bad610c7f165fa2967f4ecfa75125f0
Next consensus protocol: https://github.com/algorandfoundation/specs/tree/5615adc36bad610c7f165fa2967f4ecfa75125f0
Round for next consensus protocol: 789015
Next consensus protocol supported: true
Genesis ID: mainnet-v1.0
Genesis hash: wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=

The last committed block, which is the same as the current block height, is reported as 789014, so we use that for our roundFirstValid. Figuring out the right value for the roundLastValid is a little more involved.

First, you have to determine what time range you want. It is a good practice to rotate participation keys and not to create a key with a really long validity range. In our example, we will use a time range of 6 months. What round corresponds to 6 months from now?

To figure that out, we have to do a little math. 6 months is approximately 182 days. So 182 days x 24 hours/day x 60 min/hour x 60 sec/min = 15724800 seconds. At the time of writing, each round in Algorand takes about 4.5 seconds. So 15724800 seconds / 4.5 seconds per block = 3494400 blocks. Now we add 3494400 to the current block height to get the height 6 months from now: 789014 + 3494400 = 4283414. This is where the 4283414 for the roundLastValid in the command above comes from.
As the network grows, the 4.5-second block time may not be a safe assumption, which could make the actual validity range somewhat shorter or longer than 6 months. You need to monitor key validity and make sure to put a new key in place before the old one expires.
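If you prefer to script this rather than work it out by hand, here is a minimal bash sketch of the same arithmetic. It assumes the current round reported by “goal node status” above and the approximate 4.5-second block time (written as 9 seconds per 2 rounds so the math stays in integers); treat the block time as an assumption to re-check, not a constant.

CURRENT_ROUND=789014    # last committed block from "goal node status"
DAYS=182                # roughly 6 months
# blocks = seconds / 4.5 = seconds * 2 / 9, using integer math only
LAST_VALID=$(( CURRENT_ROUND + DAYS * 24 * 60 * 60 * 2 / 9 ))
echo "$LAST_VALID"      # prints 4283414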

Once the addpartkey command has executed, you can find the participation key at:

/var/lib/algorand/mainnet-v1.0/WHNXGKYOVIQADYS4VTYBG6SGWFIG6235C5LMXM76J3LHE475QJLA.789014.4283414.partkey

It’s beyond the scope of this article, but this file is actually a sqlite database containing a number of keys that are rotated through automatically during the validity window. This is an additional security measure that is part of Algorand, where the keys used to sign protocol messages are rotated as rounds progress.
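If you are curious, you can see this for yourself with the standard sqlite3 command-line tool. This is purely illustrative: work on a copy of the file rather than the live one, and keep in mind that the internal schema is an Algorand implementation detail that may change between releases.

sudo cp /var/lib/algorand/mainnet-v1.0/<your .partkey file> /tmp/partkey-copy
sqlite3 /tmp/partkey-copy ".tables"    # lists the tables inside the participation key database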

With the participation key created, the next step is to bring the account online. An account being online in Algorand means that the Algo in the account is supporting the consensus mechanism. We bring an account online using the “goal account changeonlinestatus” command. Note that this action requires a small amount of Algo in the account to pay the transaction fee. If you have the spending key for the account directly on the participation node, you can simply run this command:

goal account changeonlinestatus -a WHNXGKYOVIQADYS4VTYBG6SGWFIG6235C5LMXM76J3LHE475QJLA -o=1

However, having the spending key on the participation node is not recommended and kind of defeats the whole purpose of having participation keys in the first place. It is much better to have an airgapped and totally offline computer that has the spending key on it. The process is a little more involved, but it is much more secure. With that setup, you would issue the following command instead:

goal account changeonlinestatus -a WHNXGKYOVIQADYS4VTYBG6SGWFIG6235C5LMXM76J3LHE475QJLA -o=1 -t online.tx

This will produce a transaction file called online.tx in the current directory which has an unsigned transaction to bring the account online. This transaction file then needs to be securely moved to the airgapped computer with the spending key on it. Once on the airgapped computer you can use the algokey utility to sign the transaction file. The command would be:

algokey sign -k spendingkeyfile -t online.tx -o online.tx.signed

Note that algokey is standalone and does not need a running Algorand node. Also, the spendingkeyfile is the file that has the spending key for the account. This file can be created by algokey when you first set up your account.

There is also an option to specify the spending key mnemonic instead of a file, but I find this option worse as it leaves the mnemonic in the shell history, etc. The result of this command is that online.tx.signed will be created in the current directory. This file contains the signed online transaction and it needs to be securely moved back to the running participation node.

Once you have online.tx.signed back on the participation node you can send it to the network with the following command:

goal clerk rawsend -f online.tx.signed

Wait a little bit for the transaction to be processed, and your account should now be online. Creating a transaction file, moving it to the airgapped machine for signing, moving the signed transaction back to the online node, and then sending it to the network is the general pattern for sending transactions in Algorand without ever putting your spending key online.
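To recap, here is the full offline-signing flow assembled from the commands above (the account address is the same placeholder used earlier; replace it with your own):

# on the participation node: build the unsigned "go online" transaction
goal account changeonlinestatus -a WHNXGKYOVIQADYS4VTYBG6SGWFIG6235C5LMXM76J3LHE475QJLA -o=1 -t online.tx
# move online.tx to the airgapped machine and sign it there with the spending key
algokey sign -k spendingkeyfile -t online.tx -o online.tx.signed
# move online.tx.signed back to the participation node and broadcast it to the network
goal clerk rawsend -f online.tx.signed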

Final Thoughts on Participation Keys in Algorand

The design of Algorand using separate keys for spending funds and for participating in network consensus improves the security of nodes running on the Algorand network substantially by protecting spending keys and removing the need for them to ever be online. I think this was a good design choice and wouldn’t be surprised if other protocols adopt this approach.