5 Cloud Deployment Best Practices You Should Know About

Moving an entire stack into a cloud environment is challenging and undoubtedly requires a collaborative effort. Even seasoned IT professionals seem to miss a thing or two when they try to deploy their resources into a cloud infrastructure.

The cloud is generally designed in a way that allows almost anyone to set up the infrastructure. However, the deployment of an application is not a single-step process.

Depending on the number of modules, the complexity of the project can exponentially increase. But on the bright side, years of experience with data centre operations or hardware are no longer a prerequisite.

To help you move your application to the cloud seamlessly, we’ve compiled some of the best practices for cloud deployment that you can use to optimize the deployment process. This should help you avoid a fair number of problems that other developers and enterprises have had to go through.


1. Use a Deployment and Operational Checklist

While creating a cloud infrastructure, a company must have a well-defined purpose or goal, as well as a set of procedures for deployment.

Neither you nor your company wants to end up in a situation where you buy a service only to find out later that some of the details were overlooked.

Some basic pointers on what to include in your checklist:

Cloud Security – Who can access what?

Implementing and maintaining an authentication platform is a security best practice and a critical initial step, irrespective of which cloud delivery model you choose to go with.

Amazon, for instance, offers a service called Identity and Access Management (IAM). You can assign policies to the members of your team to control who can access what.

For example, if you have a front-end developer, limit their access to only the cloud resources they require rather than granting Administrator Access. Believe it or not, handing out blanket administrator access is common practice, and one you should avoid; a minimal sketch of scoped access follows below.
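
As a rough illustration, here is a minimal boto3 sketch (Python, assuming boto3 is installed and AWS credentials are configured) that creates a least-privilege policy and attaches it to a single user. The bucket name, policy name, and user name are hypothetical placeholders:

import json
import boto3

iam = boto3.client('iam')

# Hypothetical least-privilege policy: read-only access to one S3 bucket
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-frontend-assets",    # placeholder bucket
            "arn:aws:s3:::example-frontend-assets/*"
        ]
    }]
}

# Create the policy, then attach it to one user instead of
# granting blanket administrator access
response = iam.create_policy(
    PolicyName='frontend-dev-s3-readonly',          # placeholder name
    PolicyDocument=json.dumps(policy_document)
)
iam.attach_user_policy(
    UserName='frontend-dev',                        # placeholder user
    PolicyArn=response['Policy']['Arn']
)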

Place Your Secrets in an Environment Variable

A medium-sized application will usually have a number of secret keys and API tokens. Do not hardcode them into your settings. Instead, consider loading them from an environment variable:

import os
# Raises a KeyError at startup if SECRET_KEY is not set
SECRET_KEY = os.environ['SECRET_KEY']

Or read it from a separate file that is kept out of version control:

# The file should be readable only by the application user
with open('/etc/secret_key.txt') as f:
    SECRET_KEY = f.read().strip()

Create a Pricing Checklist

The cost of running your resources on the cloud should stay within your budget. You might run into unexpected charges if the cost factor is not taken into consideration beforehand. To avoid surprises, you can use cloud pricing calculators to estimate your costs upfront.

Azure, for instance, has an official calculator that’s integrated into its services. Apart from that, third-party vendors also offer price calculators for Azure; popular options include the NetApp ONTAP calculator and the Petri pricing calculator.

You can also use programmatic techniques to automatically check prices and calculate your monthly costs; a short sketch follows the sample response below.

Here is an example of how the AWS Price List API works (https://aws.amazon.com/pricing/ gives a general overview of AWS pricing). Requesting the bulk API index at https://pricing.us-east-1.amazonaws.com/offers/v1.0/aws/index.json returns a JSON response that looks like this:

{
  "formatVersion" : "v1.0",
  "publicationDate" : "2015-11-19T02:10:02Z",
  "offers" : {
    "AmazonS3" : {
      "offerCode" : "AmazonS3",
      "currentVersionUrl" : "/offers/v1.0/aws/AmazonS3/current/index.json"
    },
    "AmazonRedshift" : {
      "offerCode" : "AmazonRedshift",
      "currentVersionUrl" : "/offers/v1.0/aws/AmazonRedshift/current/index.json"
    },
    "AmazonEC2" : {
      "offerCode" : "AmazonEC2",
      "currentVersionUrl" : "/offers/v1.0/aws/AmazonEC2/current/index.json"
    },
    "AmazonCloudWatch" : {
      "offerCode" : "AmazonCloudWatch",
      "currentVersionUrl" : "/offers/v1.0/aws/AmazonCloudWatch/current/index.json"
    }
  }
}
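
As a minimal sketch of that programmatic approach, the snippet below fetches the index with Python’s requests library (assumed to be installed) and lists the available offer codes:

import requests

# AWS Price List bulk API index (publicly readable, no authentication)
INDEX_URL = 'https://pricing.us-east-1.amazonaws.com/offers/v1.0/aws/index.json'

index = requests.get(INDEX_URL, timeout=10).json()

# Each offer entry points to the full price file for that service
for name, offer in index['offers'].items():
    print(name, '->', offer['currentVersionUrl'])

From there, you can download the per-service price file and total up the rates for the resources you actually plan to run.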

2. Automate Deployments

The usual production mantra is “Automate Everything”. Automation ensures that procedures are applied consistently and reduces the chance of human error.


There are many strategies for automating deployments, and what’s right for you will depend on your stack. A good approach is to use a continuous integration service like Travis CI or CircleCI. Here’s an excerpt from the official Travis CI docs that describes how it works:

When you run a build, Travis CI clones your GitHub repository into a brand new virtual environment and carries out a series of tasks to build and test your code.

If one or more of those tasks fails, the build is considered broken. If none of the tasks fails, the build is passed, and Travis CI can deploy your code to a web server, or application host.

For a JavaScript application, you might have to take the following steps:

  1. Execute the test npm script.
  2. Cache node_modules for faster build times.
  3. Build the project and perform any minification.
  4. Deploy, or run the deployment script, if the commit was tagged.

Another important aspect to consider when automating your deployments is automating your performance testing as well. This helps ensure both good performance and efficient use of resources.

If you’re deploying a Node.js application, this is roughly what your .travis.yml will look like:

sudo: false
language: node_js
node_js:
  - lts/*
before_install:
  - curl -o- -L https://yarnpkg.com/install.sh | bash -s -- --version 1.9.4
  - export PATH=$HOME/.yarn/bin:$PATH
  - wget https://github.com/gohugoio/hugo/releases/download/v0.47.1/hugo_0.47.1_Linux-64bit.deb
  - sudo dpkg -i hugo*.deb
cache: yarn
before_script: yarn
script: yarn test
before_deploy: yarn build
deploy:
  skip_cleanup: true
  provider: firebase
  on:
    tags: true
  token:
    secure: token

3. Frequently Monitor and Log the Status of the Cloud

For on-premises deployments, it is common practice to list out in advance what needs to be monitored. Cloud monitoring, however, is a little different from traditional monitoring techniques.

If you’re going to stick with the default configuration, you might end up with large quantities of unwanted data.

Configure your cloud monitoring and debugging tools to capture only the data that actually needs to be captured. When you lay out the design for monitoring cloud deployments, you can filter out information that is not related to your operations or network. This way, you can adapt the monitoring to your company’s requirements.

While cloud monitors and logs can differ to a large extent in their functions and features, they provide most of the tools needed to create robust policies that help govern and monitor how the resources and services are being leveraged.

For example, Amazon’s CloudWatch and Azure Monitor are closely integrated into their respective cloud ecosystems. You can filter for the specific parameters you need and create policies on top of them, as in the sketch below.
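
To make “creating a policy on top of a parameter” concrete, here is a small boto3 sketch that defines a CloudWatch alarm on EC2 CPU utilization. The alarm name, instance ID, and threshold are hypothetical placeholders:

import boto3

cloudwatch = boto3.client('cloudwatch')

# Watch one metric that matters instead of collecting everything
cloudwatch.put_metric_alarm(
    AlarmName='high-cpu-production-web',      # placeholder name
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Dimensions=[{'Name': 'InstanceId',
                 'Value': 'i-0123456789abcdef0'}],  # placeholder instance
    Statistic='Average',
    Period=300,                               # 5-minute aggregation window
    EvaluationPeriods=2,                      # two consecutive breaches
    Threshold=80.0,                           # hypothetical CPU threshold
    ComparisonOperator='GreaterThanThreshold',
    AlarmDescription='CPU above 80% for 10 minutes'
)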

You will also need to keep track of the resources in use and their purpose. Cloud service providers like Azure come with tagging options that help record this – which service a machine was created for and whether it is used for backup, load balancing, production, or testing.
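
On AWS, the same idea looks roughly like the boto3 sketch below; the instance ID and tag values are hypothetical placeholders:

import boto3

ec2 = boto3.client('ec2')

# Tag an instance with its purpose so usage can be tracked later
ec2.create_tags(
    Resources=['i-0123456789abcdef0'],        # placeholder instance ID
    Tags=[
        {'Key': 'environment', 'Value': 'production'},
        {'Key': 'purpose', 'Value': 'load-balancing'},
        {'Key': 'owner', 'Value': 'web-team'},
    ]
)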

4. Blue-Green Deployment Strategy to Minimize Downtime

Blue-green deployment is a strategy that minimizes both downtime and risk by running two identical production environments – Blue and Green.

At any given point in time, only one of the two environments is live. The live environment services all ongoing production traffic. The basic principle of this technique is to have two environments that are easily interchangeable.

Understanding the Blue-Green Deployment Process:

Let’s walk through the whole process. Blue is currently Live and Green is currently Idle. The new, unstable release is pushed into Green, whereas Blue continues to be in production.

After you have completely deployed and tested the new version of your software in Green, you can switch your router. Now all requests will be serviced by Green instead of Blue. At this stage, Green is Live and Blue is Idle.

As mentioned earlier, the major benefits of this deployment strategy are that it minimizes application downtime during deployment and reduces the risk quotient in case something unexpected occurs.

If this happens, you can immediately roll back the router to the original environment until you troubleshoot the problem in the second environment.
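
To make the router switch concrete, here is a minimal boto3 sketch that repoints an AWS Application Load Balancer listener from the Blue target group to the Green one. The listener and target group ARNs are hypothetical placeholders, and rolling back is the same call with the ARNs swapped:

import boto3

elbv2 = boto3.client('elbv2')

# Hypothetical ARNs for the shared listener and the Green environment
LISTENER_ARN = 'arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/example/abc/def'
GREEN_TG_ARN = 'arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/green/123'

# Switch all incoming traffic from Blue to Green in one step
elbv2.modify_listener(
    ListenerArn=LISTENER_ARN,
    DefaultActions=[{'Type': 'forward', 'TargetGroupArn': GREEN_TG_ARN}]
)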

5. Keep Security In Mind While Designing Your Network

It is a known fact that a poorly designed network is almost impossible to secure. This problem usually arises when businesses allow their network to grow organically rather than planning that growth well in advance.

Understanding the potential damage this can lead to, developers have begun to employ the practice of dividing their network into a services network and a management network.

The services network contains all components that are needed to ensure ongoing services to users and customers, while the management network contains those aspects that are needed internally to assist in managing or administering the network.

The key benefit of separating the two networks is that it allows for greater assurance against unauthorized users gaining access to a company’s systems.
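
As a rough illustration of this separation on AWS, the boto3 sketch below creates a services security group that exposes only HTTPS publicly, and a management group that accepts SSH solely from an internal admin range. The VPC ID and CIDR ranges are hypothetical placeholders:

import boto3

ec2 = boto3.client('ec2')
VPC_ID = 'vpc-0123456789abcdef0'   # placeholder VPC ID

# Services network: customer-facing traffic only
services_sg = ec2.create_security_group(
    GroupName='services-sg', Description='Customer-facing services', VpcId=VPC_ID)
ec2.authorize_security_group_ingress(
    GroupId=services_sg['GroupId'],
    IpPermissions=[{'IpProtocol': 'tcp', 'FromPort': 443, 'ToPort': 443,
                    'IpRanges': [{'CidrIp': '0.0.0.0/0'}]}])

# Management network: SSH restricted to an internal admin range
mgmt_sg = ec2.create_security_group(
    GroupName='management-sg', Description='Internal administration', VpcId=VPC_ID)
ec2.authorize_security_group_ingress(
    GroupId=mgmt_sg['GroupId'],
    IpPermissions=[{'IpProtocol': 'tcp', 'FromPort': 22, 'ToPort': 22,
                    'IpRanges': [{'CidrIp': '10.0.0.0/16'}]}])  # placeholder CIDR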

Another important part of network security is key management. All too often, keys are shared widely or guarded by a root password that anyone can access. A number of hackers routinely scan GitHub for keys that developers have accidentally checked in. The larger the number of keys, the harder it becomes to ensure that none of them is exposed; a simple pre-push check like the sketch below can catch the most obvious leaks.
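
As a simple illustration, the script below scans a working tree for strings matching the well-known AWS access key ID pattern before code is pushed. Real secret scanners cover many more patterns, so treat this as a sketch only:

import re
from pathlib import Path

# AWS access key IDs follow a well-known pattern: 'AKIA' + 16 characters
KEY_PATTERN = re.compile(r'AKIA[0-9A-Z]{16}')

def scan_tree(root='.'):
    """Report files that appear to contain an AWS access key ID."""
    for path in Path(root).rglob('*'):
        if path.is_file() and '.git' not in path.parts:
            try:
                text = path.read_text(errors='ignore')
            except OSError:
                continue
            if KEY_PATTERN.search(text):
                print(f'Possible leaked key in: {path}')

if __name__ == '__main__':
    scan_tree()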

Summary

The key aspect to remember is that there is no absolute right or wrong way to start your journey of adopting a cloud network. However, whichever way you may choose, following some of these basic tips and best practices should definitely make the journey smoother and lead to favourable results. 

