Generate Terraform files for existing resources

You may find yourself in a position where a resource already exists in your cloud environment but was created in the provider’s GUI rather than in Terraform. It can feel a bit overwhelming at first, but there are a few ways to generate Terraform files for existing resources, and today we’ll walk through several of them. This is not an exhaustive list; if you have any other suggestions, please leave a comment and I’ll be sure to update this post.

Method 1 – Manual

Be warned, the manual method takes a little more time, but is not restricted to certain resource types. I prefer this method because it means that you’ll be able to see every setting that is already set on your resource with your own two eyes, which is good for sanity checking.

First, you’re going to want to create a .tf file with just the outline of the resource type you’re trying to import or generate.

For example, if I wanted to create the Terraform for a resource group called example-resource-group that had several tags attached to it, I would do:

resource "azurerm_resource_group" "example-resource-group" {
}

and then save it.

Next, I would go to the Azure GUI, find and open the resource group, and then open the ‘Properties’ section from the blade.

I would look for the Resource ID, for example /subscriptions/54ba8d50-7332-4f23-88fe-f88221f75bb3/resourceGroups/example-resource-group and copy it.

I would then open up a command prompt / terminal and import the state by running:

terraform import azurerm_resource_group.example-resource-group /subscriptions/54ba8d50-7332-4f23-88fe-f88221f75bb3/resourceGroups/example-resource-group

Finally, and this is the crucial part, I would immediately run terraform plan. There may be required fields (for a resource group, name and location) that you need to fill out before this command will work, but in general it will compare the existing state you just imported against the blank resource in the .tf file and show you all of the differences, which you can then copy into your new Terraform file, confident that you have captured every setting.

Example:

# azurerm_resource_group.example-resource-group will be updated in-place
   ~ resource "azurerm_resource_group" "example-resource-group" {
         id       = "/subscriptions/54ba8d50-7332-4f23-88fe-f88221f75bb3/resourceGroups/example-resource-group"
         location = "centralus"
         name     = "example-resource-group"
       ~ tags     = {
           ~ "environment" = "dev" -> null
           ~ "owner"       = "example.person" -> null
           ~ "product"     = "internal" -> null
         }
     }

A shortcut I’ve found is to copy the entire resource section from the plan output, replace all of the tildes (~) with spaces, and then find and remove all instances of -> null.
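
If you'd rather script that cleanup, here's a minimal sketch, assuming you've pasted the copied resource section into a file; the name imported.tf is just an example:

# Turn plan markup tildes into spaces and drop the "-> null" suffixes.
# The -i.bak form works with both GNU and BSD/macOS sed.
sed -i.bak -e 's/~/ /g' -e 's/ -> null//g' imported.tf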

Method 2 – Az2tf (Azure only)

Andy Thomas (Microsoft employee) put together a tool called Az2tf which iterates over your entire subscription, and generates .tf files for most of the common types of resources, and he’s adding more all the time. Requesting a specific resource type is as simple as opening an issue and explaining which resource is missing. In my experience, he’s responded within a few hours with a solution.
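
If memory serves, running it is a single script invocation against your subscription; a rough sketch (the flag syntax may have changed, so check the Az2tf README for current usage):

./az2tf.sh -s <your-subscription-id>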

Method 3 – Terraforming (AWS only)

Daisuke Fujita put together a tool called Terraforming that, with a little bit of scripting, can generate Terraform files for all of your AWS resources.
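
The scripting can be as simple as dumping each resource type you care about to its own file; a minimal sketch, assuming Terraforming is installed (gem install terraforming) and your AWS credentials are exported in the environment:

# Each resource type is its own subcommand; terraforming's help lists the full set.
terraforming s3 > s3.tf
terraforming ec2 > ec2.tf
terraforming iamu > iam_users.tf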

Method 4 – cf-terraforming (Cloudflare only)

Cloudflare put together a fantastic tool called cf-terraforming which rips through your Cloudflare tenant and generates .tf files for everything Cloudflare-related. The great thing about cf-terraforming is that because it’s written by the vendor of the original product, they treat it as a first-class citizen and keep it up to date with any new resources they add to their own product. I wish all vendors would do this.
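
For a flavour of what that looks like, recent versions use a generate subcommand; a rough sketch (flag names have varied between releases, so double-check against the current cf-terraforming README):

# Assumes CLOUDFLARE_API_TOKEN is already exported in the environment.
cf-terraforming generate --resource-type cloudflare_record --zone <your-zone-id> > dns_records.tf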

To sum things up, there are plenty of ways to generate Terraform files for existing resources. Some are more time-consuming than others, but they all share the goal of making your environment less brittle and your processes more repeatable, which will save time, money, and, most importantly, stress when the inevitable incident takes place.

Do you know of any other tools for these or other providers that can assist in bringing previously unmanaged resources under Terraform management? Leave a comment and we’ll add them to this page as soon as possible!

Store a private key in Azure Key Vault for use in a Logic App

Today, I found myself in need of an automated SFTP connection that would reach out to one of our partners, download a file, and then drop it into a Data Lake for further processing. This meant that I would need to store a private key in Azure Key Vault for use in a Logic App. While this was mostly a straightforward process, there was one small hiccup that we encountered and wanted to pass along.

First, we went ahead and generated a public/private key pair using:

ssh-keygen -t rsa -b 4096

where rsa is the algorithm and 4096 is the length of the key in bits. We avoided the ed25519 and ECDSA algorithms because our partner does not support elliptic-curve cryptography. As this command was run on a Mac laptop which already has its own ~/.ssh/id_rsa[.pub] key pair, we chose a new filename and location, /tmp/sftp, to temporarily store this new pair.
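
Putting that together, the invocation looks like this (-f sets the output path; the -C comment is optional, and the one shown here is purely illustrative):

ssh-keygen -t rsa -b 4096 -f /tmp/sftp -C "partner-sftp"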

The problem arose when we tried to insert the private key data into Key Vault as a secret: the Azure portal does not support multi-line secret entry, resulting in a non-standard and ultimately broken key entry.

The solution was to use the Azure CLI to upload the contents of the private key by doing:

az keyvault secret set --vault-name sftp-keyvault -n private-key -f '/tmp/sftp'

This uploaded the file correctly to the secret titled private-key, which means we can now add a Key Vault action in our Logic App to pull the secret (without ever leaving the key in plain view) and use it as the data source for the private key field in the SFTP - Copy File action.
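
If you want to sanity-check what actually landed in the vault, you can read the secret back out with the CLI (this prints the key to your terminal, so be mindful of where you run it):

az keyvault secret show --vault-name sftp-keyvault -n private-key --query value -o tsv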

As an aside, we also created a new secret called public-key and uploaded a copy of sftp.pub, so that six months from now, if we need to send a copy of it to another partner, it’s there for us to grab.
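
That upload is the same one-liner as before, just pointed at the public half of the pair:

az keyvault secret set --vault-name sftp-keyvault -n public-key -f '/tmp/sftp.pub'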

Integrating Azure Kubernetes Service and Azure Container Registry

At our office, we’ve been using Docker containers deployed to Azure App Service for Containers for all of our microservices. After a few incidents in the past couple of weeks, we decided to look into managing our own Kubernetes cluster, and we quickly found out that integrating Azure Kubernetes Service and Azure Container Registry takes a little bit of tweaking.

The main issue is that having AKS authenticate against ACR requires setting up a service principal and then instructing AKS to use it. It’s not hard to set up, but there are a few steps, and I wanted to bring them all together in one place to make things easier for you in the future.

I’m going to describe this process from the perspective of someone who already has a container registry and a Kubernetes cluster stood up and just needs to tie the two together. There are plenty of tutorials on standing up each of those services on its own, so I’ll leave getting to this point as an exercise for the reader.

You should also be aware that the AKS cluster and the ACR must be in the same subscription for this process to work.
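
If you’re not sure, a quick way to check is to compare the subscription GUID embedded in each resource ID against your CLI’s active subscription (the registry, resource group, and cluster names below are placeholders):

az account show --query id -o tsv
az acr show --name <your-acr-name> --query id -o tsv
az aks show --resource-group <your-rg> --name <your-cluster> --query id -o tsv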

Step 1: Create the service principal using the following BASH script

#!/bin/bash

echo -n "Ensure you are logged into Azure CLI before continuing and then press [ENTER]"
read UNUSEDVARIABLE
echo -n ""
echo -n "Enter the name of the container registry (*without* the .azurecr.io) and press [ENTER]: "
read ACR_NAME
echo -n "Enter the name of the service prinicpal you would like to create (e.g. test-sp-dev) and press [ENTER]: "
read SERVICE_PRINCIPAL_NAME

# Populate the ACR login server and resource id.

ACR_LOGIN_SERVER=$(az acr show --name $ACR_NAME --query loginServer --output tsv)
ACR_REGISTRY_ID=$(az acr show --name $ACR_NAME --query id --output tsv)

# Create acrpull role assignment with a scope of the ACR resource.

SP_PASSWD=$(az ad sp create-for-rbac --name http://$SERVICE_PRINCIPAL_NAME --role acrpull --scopes $ACR_REGISTRY_ID --query password --output tsv)

# Get the service principal client id.

CLIENT_ID=$(az ad sp show --id http://$SERVICE_PRINCIPAL_NAME --query appId --output tsv)

# Output used when creating Kubernetes secret.

echo "Service principal ID: $CLIENT_ID"
echo "Service principal password: $SP_PASSWD"

You will want to make note of this ID and password combination for future troubleshooting, as well as for any case where you cannot complete step 4 of this post in the same terminal session.

Step 2: Install the Kubernetes CLI (kubectl) from the Azure CLI

az aks install-cli

NOTE: You may need to run this command using sudo if you are on a Linux/macOS/BSD machine.

Step 3: Authenticate to your AKS cluster

Let’s assume that I have a Kubernetes cluster called nexxai-k8s-dev in a resource group also called nexxai-k8s-dev. I would run:

az aks get-credentials --resource-group nexxai-k8s-dev --name nexxai-k8s-dev
kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard

NOTE: The kubectl command just solves a permissions issue that appears if you try to access the Kubernetes dashboard using az aks browse --resource-group nexxai-k8s-dev --name nexxai-k8s-dev. If you plan on doing things only through the CLI, I’m not sure if this step is required and you may be able to skip it.
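
Either way, a quick way to confirm the credentials merged correctly is to list the cluster’s nodes:

kubectl get nodes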

Step 4: Tell Kubernetes to use that Service Principal

Let’s assume that I have a container registry called nexxai-acr-dev and I want to name the secret acr-auth. I would run:

kubectl create secret docker-registry acr-auth --docker-server nexxai-acr-dev.azurecr.io --docker-username $CLIENT_ID --docker-password $SP_PASSWD --docker-email [email protected]

This command assumes you are working in the same terminal session as when you executed the BASH script in step 1, and so reuses its variables. If you are not, simply substitute $CLIENT_ID and $SP_PASSWD with their actual values.
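
You can confirm the secret exists (without echoing the password anywhere) with:

kubectl get secret acr-auth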

Step 5: Profit!

At this point your new Kubernetes cluster is ready to talk to your container registry!

Now all you need to remember is that when you go to create a deployment, you need to provide the entire ACR address (including the .azurecr.io part of the domain) for the image setting, and you’ll need to define the imagePullSecrets block and set its name property to acr-auth in your YAML file like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app-dev-deploy
  labels:
    app: demo-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: nexxai-acr-dev.azurecr.io/demo-app:latest
          ports:
            - containerPort: 4000
      imagePullSecrets:
        - name: acr-auth

And finally, by exposing the deployment using the command below, you’ll have a fully functional application that lives in Azure Kubernetes Service and pulls its container images from Azure Container Registry.

kubectl expose deployment demo-app-dev-deploy --type=LoadBalancer --name=demo-app-dev
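
Azure can take a minute or two to provision the public load balancer; you can watch for the EXTERNAL-IP column to populate with:

kubectl get service demo-app-dev --watch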

Congratulations!
