Infrastructure as code: Creating a multi-regional blog with Terraform on the OVH Public Cloud – Part 2
Welcome to the second post in our series of articles covering IaC with Terraform. In our first post, we laid out the groundwork for our project. It’s now time to get started configuring an instance, and see how we can use it to put a blog online!
In a field where DevOps teams hold most of the expertise, it makes sense to patch an infrastructure the same way you would correct lines of code, with similar options in terms of automation, reproducibility and collaboration. This is exactly what IaC offers: the ability to manage resources the same way you would work on your application's code.
The aim of this article series is to illustrate the advantages of this approach. Rather than getting lost in theoretical considerations, we preferred to work on a concrete scenario, with various steps we can use to walk through the main principles of IaC. We chose Terraform, which responds particularly well to issues surrounding infrastructure abstraction in hybrid cloud environments. The snippets in these articles are extracts from a detailed guide, which you can download from GitHub.
Our first article covered the basics of how Terraform works, defined key terms such as provider, resource and variable, and provided information on how to get started creating your first deployment environment. We will start by booting an instance, and configuring our website. We will then work on securing the whole infrastructure, and setting up a first level of load balancing between two regions.
We’re not really looking to focus on considerations involving the use of a Content Management System (CMS) in this article, so we will create our blog using Hugo, a great static website generator which is very easy to set up.
Preparing the instance
To define your instance, open the file ‘main.tf’. Start by declaring the provider (OpenStack, in our example) and pinning the version you use (1.5). This prevents your infrastructure from breaking as a result of a major provider update, a tip we strongly recommend keeping in mind, as it could save you a lot of hassle. You will also need to declare the region as a variable, in order to prepare for the use of more than one OpenStack region.
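The provider declaration might look like the following sketch. The region name, variable name and alias are illustrative placeholders, not values from the guide:

```hcl
# Primary region, passed in as a variable so a second region can be
# added later with the same pattern.
variable "region_a" {
  description = "Primary OpenStack region"
  default     = "GRA5"
}

provider "openstack" {
  alias   = "region_a"
  version = "~> 1.5"          # pin to the 1.5.x series of the provider
  region  = "${var.region_a}"
}
```

Pinning with `~> 1.5` allows patch releases while blocking a jump to a new major version.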
Now we’re moving on to the instance itself. Several elements are required for you to launch it. Terraform resolves dependencies, and defines a corresponding action plan.
If your code is more or less similar to our example, you will see that Terraform supports variable interpolation. This is why the ‘var.count’ parameter appears next to the ‘count’ property on our instance.
Available in both JSON and HCL (HashiCorp Configuration Language), the interpolation syntax lets us enter the number of instances that need to be booted; Terraform will then iterate as many times as the value entered in the count property. Interpolation can also be used for other things, such as logical operations (e.g. checking that a variable meets a condition before ordering the creation of a resource).
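A minimal sketch of an instance using count and interpolation could look like this. The image name, flavour and keypair resource are assumptions for illustration, not values taken from the guide:

```hcl
variable "count" {
  description = "Number of instances to boot"
  default     = 1
}

resource "openstack_compute_instance_v2" "blog" {
  count       = "${var.count}"           # Terraform iterates this many times
  name        = "blog-${count.index}"    # each iteration gets a unique name
  image_name  = "Debian 9"
  flavor_name = "s1-2"
  key_pair    = "${openstack_compute_keypair_v2.keypair.name}"
}
```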
Configuring the system after startup
The machine’s configuration follows the same logic: the user data (user_data) is entered into a template that defines the scripts to be run, and the files to load when the instance is created. These templates are rendered by Terraform, following the resource dependency tree. For example, at this level, you can set Apache to receive a virtual host file prepared in advance.
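One way to wire this up is with a `template_file` data source that renders a startup script and injects the prepared virtual host file. The file names here are hypothetical:

```hcl
# Render a startup script, embedding the Apache vhost prepared in advance.
data "template_file" "setup" {
  template = "${file("${path.module}/setup.sh.tpl")}"

  vars {
    vhost = "${file("${path.module}/blog.conf")}"
  }
}

resource "openstack_compute_instance_v2" "blog" {
  # ... image, flavour, network as before ...
  user_data = "${data.template_file.setup.rendered}"
}
```

Because the instance references `data.template_file.setup.rendered`, Terraform knows to render the template before creating the instance.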
You can also determine automatic provisioning parameters for each resource, in order to optimise efficiency or simply handle situations where data volume is too high. In this example, we’re requesting the contents of our blog to be copied onto each instance booted via SCP.
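Copying the blog contents over SCP can be expressed with Terraform's `file` provisioner, which uses SSH/SCP under the hood. The SSH user and paths below are assumptions:

```hcl
resource "openstack_compute_instance_v2" "blog" {
  # ... configuration as before ...

  # Copy Hugo's generated site onto each instance at creation time.
  provisioner "file" {
    source      = "public/"        # Hugo's default output directory
    destination = "/var/www/blog"

    connection {
      type = "ssh"
      user = "debian"
      host = "${self.access_ip_v4}"
    }
  }
}
```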
We are now able to create our first instance. However, to maximise its security, we strongly recommend setting up a TLS certificate and a first version of redundancy.
Generating a TLS certificate
Obviously, we won’t be deploying a Let’s Encrypt certificate on our Apache server manually. Instead, you can schedule automatic certificate generation in Terraform, for when the instance is created.
This feature does not exist natively in Terraform, but you can get it by installing the ACME provider plugin. Terraform will then create the account and certificate using dedicated resources, before requesting validation.
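With the ACME provider, the account and certificate are each their own resource. The email address and domain below are placeholders; the DNS challenge is delegated to OVH so no manual validation is needed:

```hcl
provider "acme" {
  server_url = "https://acme-v02.api.letsencrypt.org/directory"
}

# Private key identifying the ACME account.
resource "tls_private_key" "account_key" {
  algorithm = "RSA"
}

# Register the account with Let's Encrypt.
resource "acme_registration" "reg" {
  account_key_pem = "${tls_private_key.account_key.private_key_pem}"
  email_address   = "admin@example.com"
}

# Request the certificate, validated via an OVH DNS challenge.
resource "acme_certificate" "cert" {
  account_key_pem = "${acme_registration.reg.account_key_pem}"
  common_name     = "blog.example.com"

  dns_challenge {
    provider = "ovh"
  }
}
```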
Now that you have a TLS certificate, why not secure your infrastructure even further with a second instance, hosted in another region? In this article, we’ll cover how to create a basic load balancing system using the Round Robin DNS, but later on, we’ll demonstrate just how much further you can go in terms of high availability with Terraform.
Configuring the Round Robin DNS
To deploy your instance in a new region, simply add the corresponding OpenStack provider (region B) in the ‘main.tf’ file, then repeat the configuration elements for this new instance (network port B, instance B, etc.). All that remains at this stage is to link the two via the OVH provider, which exposes all the resources required to use our APIs. We can use the ‘ovh_domain_zone_record’ resource to specify our DNS zone, and all of its associated records.
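Round-robin DNS then amounts to publishing one A record per instance under the same subdomain. The zone and subdomain here are placeholders:

```hcl
# One A record per region; clients are spread across both IPs.
resource "ovh_domain_zone_record" "blog_a" {
  zone      = "example.com"
  subdomain = "blog"
  fieldtype = "A"
  ttl       = 60
  target    = "${openstack_compute_instance_v2.blog_a.access_ip_v4}"
}

resource "ovh_domain_zone_record" "blog_b" {
  zone      = "example.com"
  subdomain = "blog"
  fieldtype = "A"
  ttl       = 60
  target    = "${openstack_compute_instance_v2.blog_b.access_ip_v4}"
}
```

A low TTL keeps the records fresh, so a change in either region propagates quickly.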
Once you have completed all of these steps, you will have a fully operational blog, with a first version of load balancing between two instances situated in different regions. Throughout this process, we have gained a better understanding of how an instance works on Terraform, and how we can use these different properties. But you can do much more than this with Terraform — and we’ll cover that in the next article in this series!