Deploying an Azure App Service from scratch, including DNS and TLS

As many of you have probably gathered, over the past few weeks, I’ve been working on building a process for deploying an Azure App Service from scratch, including DNS and TLS in a single Terraform module.

Today, I write this post with success in my heart, and at the bottom, I provide copies of the necessary files for your own usage.

One of the biggest hurdles I faced was trying to integrate Cloudflare’s CDN services with Azure’s Custom Domain verification. Typically, I’ll rely on the options available in the GUI as the definitive list of “things I can do,” so up until now, if we wanted to stand up a multi-region App Service, we had to do the following:

  1. Build and deploy the App Service in each region (R1 and R2), using each region’s default hostname for HTTPS
  2. Create the CNAME record for the service at Cloudflare pointing at R1, turning off proxying (orange cloud off)
  3. Add the Custom Domain on R1, using the CNAME verification method
  4. Once the hostname is verified, go back to Cloudflare and update the CNAME record for the service to point to R2
  5. Add the Custom Domain on R2, using the CNAME verification method
  6. Once the hostname is verified, go back to Cloudflare and update the CNAME record for the service to point to the Traffic Manager, and also turn on proxying (orange cloud on)

While this eventually accomplishes the task, it introduces an awkward failure mode: if you ever want to add a third (or fourth, or fifth…) region, you not only have to momentarily direct all traffic to your brand-new single instance to verify the domain, but you also have to turn off proxying, exposing the fact that you are using Azure (bad OPSEC).

After doing some digging, however, I came across a Microsoft document explaining that there is a way to add a TXT record which you can use to verify ownership of the domain, without a bunch of messing around with the original record you’re dealing with.

This is great because we can just add new awverify records for each region and Azure will trust that we own them. Terraform introduces a new wrinkle, though: it creates the record at Cloudflare so quickly that Cloudflare’s infrastructure often hasn’t replicated the new entry across their fleet before the verification is attempted, which means the lookup fails and Terraform dies.
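As a sketch, a per-region verification record looks something like the following. The resource labels, variable name, and record name here are hypothetical stand-ins, not the actual module code, and the exact record layout Azure expects can differ by App Service and provider version:

```hcl
# Hypothetical sketch: labels, the zone variable, and the record name are
# assumptions. One of these records would exist per region.
resource "cloudflare_record" "awverify_r1" {
  zone_id = var.cloudflare_zone_id
  name    = "awverify.myapp"
  type    = "TXT"
  value   = azurerm_app_service.r1.default_site_hostname
  ttl     = 1 # 1 means "automatic" in Cloudflare's API
}
```

Because this record never carries traffic, it can be created for a new region without touching the CNAME that your users actually resolve.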

To get around this, we added a null_resource that just executes a 30-second sleep, allowing time for the record to propagate through Cloudflare’s network before the lookup is attempted.
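A minimal sketch of that null_resource, assuming a hypothetical cloudflare_record.awverify_r1 label for the verification record:

```hcl
# Minimal sketch; "cloudflare_record.awverify_r1" is a hypothetical label
# for whichever verification record the module creates.
resource "null_resource" "wait_for_cloudflare" {
  depends_on = [cloudflare_record.awverify_r1]

  # Give Cloudflare's fleet time to replicate the new record before
  # Azure attempts the verification lookup.
  provisioner "local-exec" {
    command = "sleep 30"
  }
}
```

Any resource that performs the verification (such as the Custom Domain binding) can then depend on this null_resource rather than on the record directly.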

I’ve put together a copy of our Terraform modules for your perusal and usage:

Using this module will allow you to easily deploy all of your micro-services in a Highly Available configuration by utilizing multiple regions.

Generate Terraform files for existing resources

You may find yourself in a position where a resource already exists in your cloud environment but was created in the provider’s GUI rather than in Terraform. This can feel a bit overwhelming at first, but there are a few ways to generate Terraform files for existing resources, and we’re going to walk through them today. This is not an exhaustive list; if you have any other suggestions, please leave a comment and I’ll be sure to update this post.

Method 1 – Manual

Be warned, the manual method takes a little more time, but is not restricted to certain resource types. I prefer this method because it means that you’ll be able to see every setting that is already set on your resource with your own two eyes, which is good for sanity checking.

First, you’re going to want to create a .tf file with just the outline of the resource type you’re trying to import or generate.

For example, if I wanted to create the Terraform for a resource group called example-resource-group that had several tags attached to it, I would do:

resource "azurerm_resource_group" "example-resource-group" {
}

and then save it.

Next, I would go to the Azure GUI, find and open the resource group, and then open the ‘Properties’ section from the blade.

I would look for the Resource ID, for example /subscriptions/54ba8d50-7332-4f23-88fe-f88221f75bb3/resourceGroups/example-resource-group and copy it.

I would then open up a command prompt / terminal and import the state by running: terraform import azurerm_resource_group.example-resource-group /subscriptions/54ba8d50-7332-4f23-88fe-f88221f75bb3/resourceGroups/example-resource-group

Finally, and this is the crucial part, I would immediately run terraform plan. There may be required fields that you’ll need to fill in before this command works, but in general, it will compare the existing state you just imported against the blank resource in the .tf file and show you all of the differences, which you can then copy into your new Terraform file, confident that you have imported all of the settings.


# azurerm_resource_group.example-resource-group will be updated in-place
   ~ resource "azurerm_resource_group" "example-resource-group" {
         id       = "/subscriptions/54ba8d50-7332-4f23-88fe-f88221f75bb3/resourceGroups/example-resource-group"
         location = "centralus"
         name     = "example-resource-group"
       ~ tags     = {
           ~ "environment" = "dev" -> null
           ~ "owner"       = "example.person" -> null
           ~ "product"     = "internal" -> null
         }
     }
A shortcut I’ve found is to copy the entire resource section, replace all of the tildes (~) with spaces, and then find and remove all instances of -> null.
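Applied to the plan output above, the finished resource ends up looking like this (the computed id attribute is dropped, since Terraform tracks it in state rather than in your configuration):

```hcl
resource "azurerm_resource_group" "example-resource-group" {
  location = "centralus"
  name     = "example-resource-group"

  tags = {
    "environment" = "dev"
    "owner"       = "example.person"
    "product"     = "internal"
  }
}
```

Running terraform plan again at this point should report no changes, which confirms the import is complete.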

Method 2 – Az2tf (Azure only)

Andy Thomas (Microsoft employee) put together a tool called Az2tf which iterates over your entire subscription, and generates .tf files for most of the common types of resources, and he’s adding more all the time. Requesting a specific resource type is as simple as opening an issue and explaining which resource is missing. In my experience, he’s responded within a few hours with a solution.

Method 3 – Terraforming (AWS only)

Daisuke Fujita put together a tool called Terraforming which, with a little bit of scripting, can generate Terraform files for all of your AWS resources.

Method 4 – cf-terraforming (Cloudflare only)

Cloudflare put together a fantastic tool called cf-terraforming which rips through your Cloudflare tenant and generates .tf files for everything Cloudflare related. The great thing about cf-terraforming is that because it’s written by the vendor of the original product, they treat it as a first class citizen and keep it very up-to-date with any new resources they themselves add to their product. I wish all vendors would do this.

To sum things up, there are plenty of ways to generate Terraform files for existing resources. Some are more time consuming than others, but they all have the goal of making your environment less brittle and your processes more repeatable, which will save time, money, and most importantly stress, when an inevitable incident takes place.

Do you know of any other tools for these or other providers that can assist in bringing previously unmanaged resources under Terraform management? Leave a comment and we’ll add them to this page as soon as possible!

Using Terraform workspaces for fun and profit – Part 2

In the first part of this series, we built a Terraform system that uses a single set of files to maintain 3 separate environments. The next step is to automate as much of this process as possible.

To do that, we’re going to leverage Azure DevOps to create a build pipeline for each environment, which will kick off when we merge code into the respective environment’s git branch.

Step 1: Create a git repo to store the .tf files

First you’ll need to sign into Azure DevOps. Once signed in, you’ll want to either select an existing organization, or create a new one, and then select an existing project, or create a new one.

Once you’ve got a project open, click the + symbol at the top of the left-hand panel and select ‘New repository’. Create a repository using the naming convention of your company (or if you don’t have a particular convention, I prefer Terraform.DevOps.IT-CompanyName)

Finally, if you have existing .tf files, create a branch called dev (git checkout -b dev), and then commit and push the files up to the repo.

Step 2: Use Azure Blob Storage as the backend provider to store your state files

Since the alternative is hosting a state file in a single location (like your laptop), which may die, be stolen, or meet some other bad end, we want to tell Terraform to keep its state files in the cloud for easy, shared access.

Open up the Azure portal and browse to the Storage Accounts section. Create a new Storage Account and use the default options. Unless your Terraform environment is absolutely massive (e.g. greater than 1 gigabyte of .tf files), this account will use next to no storage, certainly less than the threshold to be charged $0.01 per month. Once the Storage Account is created, you’ll want to add the appropriate backend type to your Terraform library as defined here, and then run terraform init, which will initialize the Storage Account to host your state file.
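The backend block itself is small. As a sketch, assuming hypothetical names for the resource group, Storage Account, and container:

```hcl
# Hypothetical names throughout; substitute the ones you created above.
terraform {
  backend "azurerm" {
    resource_group_name  = "terraform-rg"
    storage_account_name = "companyterraform"
    container_name       = "tfstate"
    key                  = "terraform.tfstate" # name of the state blob
  }
}
```

After adding this block, terraform init will offer to migrate any existing local state into the Storage Account.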

Step 3: Begin creating the Build pipeline

Now that we have our state stored in Azure Blob Storage, we can begin working on the actual pipeline. This is where the automation actually happens and you’ll see how much easier it makes things.

  1. Go back to Azure DevOps, under the Pipelines section on the left-hand panel, select ‘Builds’, and then click the ‘New pipeline’ button.
  2. When the window opens, choose the Classic editor link near the bottom.
  3. Select the correct project, repository, and branch (dev)
  4. Choose to start with an Empty job

Now that you’ve got a skeleton of a build, here’s where we actually start adding the steps to create a fully functional Terraform setup.

Next to ‘Agent job 1’, click the + sign. In the search field, enter Terraform, select the entry called ‘Terraform Build & Release Tasks’ by Charles Zipp, and then select ‘Get it free’. Follow the prompts and install it on DevOps (it doesn’t cost anything). Once it’s installed, just refresh the build page and click the + sign and search for Terraform again.

For each pipeline we build, we’ll first need to install the Terraform tools, so add a ‘Terraform Installer’ task. Select the new task and ensure that the version of Terraform being installed is exactly the same as the version you’re using on your local machine, so edit the ‘Version’ field and replace it with the correct value.

Add another task, so search for Terraform again, but this time select ‘Terraform CLI’. It will probably default to ‘terraform validate’ so select it so that we can update a few things. Since the first thing we need to do is initialize the environment by downloading all of the necessary providers, change the Display Name to something more descriptive like terraform init and change the Command dropdown to ‘init’. Leave everything else the same.

Add another Terraform CLI task, this time for the validation step. This makes sure our .tf files are valid and their syntax is correct before we try to do anything with them. Since validate is the default option for the Terraform CLI task, we’re done with this one.

Add another task, but this time, since the Terraform add-on doesn’t support the workspace command (as of writing), we need to add a shellexec task, so search for that string and add it. Change the Display Name and Code to both read terraform workspace select dev.

You guessed it, add another Terraform CLI task for the plan. Update the Display Name to read terraform plan and then select plan from the Command dropdown. Here’s where we get to define the environment, so in the ‘Command Options’ text field, enter -var-file=environments/dev.tfvars -out=TFPlan -input=False. This tells Terraform to use the variables that you’ve specified for the DEV environment, write the output to a file called TFPlan, and not to expect any input.
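The environments/dev.tfvars file referenced here is just the per-environment set of variable values. A hypothetical sketch (the variable names are examples only, not your module’s actual variables):

```hcl
# Hypothetical example values; your module's variable names will differ.
environment     = "dev"
location        = "centralus"
app_service_sku = "B1"
```

Each environment (dev, staging, prod) gets its own .tfvars file in that directory, and each pipeline passes the matching one.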

Almost there.

Add another task, this time searching for copy and selecting Microsoft’s Copy Files task. For the Source Folder, enter $(Build.SourcesDirectory) and for the Target Folder, enter $(Build.ArtifactStagingDirectory)

For the final task, search for publish and select Microsoft’s Publish Build Artifacts. The defaults here are fine.

All you have to do to finish up is click the dropdown arrow next to ‘Save and queue’ and select ‘Save’, accept the defaults, and click ‘Save’ again.

You now have a working DevOps Build Pipeline!

In the next part, we’ll take the artifacts published from this Build pipeline and use them to create a Release pipeline that will actually tell Terraform to make your changes.
