How to import a publicly-issued certificate into Azure Key Vault

Today, after spending several hours swearing and researching how to import a publicly-issued certificate into Azure Key Vault, I thought I’d share the entire process of how we did it from start to finish so that you can save yourself a bunch of time and get back to working on fun stuff, like spamming your co-workers with Cat Facts. We learned a bunch about the different encoding formats of certificates and some of their restrictions, both within Azure Key Vault as well as with the certificate types themselves. Let’s get started!

Initially, we created an elliptic curve (EC) private key using the prime256v1 curve, and a CSR, by doing:

openssl ecparam -out privatekey.key -name prime256v1 -genkey
openssl req -new -key privatekey.key -out request.csr -sha256

making sure not to include an email address or a challenge password. I am not actually clear on what the technical reasoning behind this is, but I saw it noted on several sites.
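If you want to double-check the CSR before submitting it, openssl can print it in human-readable form; the things to eyeball are the Subject line and the absence of a challenge password attribute:

openssl req -in request.csr -noout -text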

We submitted the CSR to our certificate authority (CA) and shortly thereafter got back a signed PEM file.

We next needed to create a single PFX/PKCS12-formatted, password-protected certificate, so we grabbed our signed certificate (ServerCertificate.crt) and our CA’s intermediate certificate chain (Chain.crt) and then did:

openssl pkcs12 -export -inkey privatekey.key -in ServerCertificate.crt -certfile Chain.crt -out Certificate.pfx
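Before uploading, you can verify the bundle was assembled correctly by dumping its structure (openssl will prompt for the export password you just set):

openssl pkcs12 -info -in Certificate.pfx -noout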

But when we went to import it into the Key Vault with the correct password, it threw a general “We don’t like this certificate” error. The first thing we did was check out the provided link and saw that we could import PEM-formatted certificates directly. I didn’t remember this being the case in the past, so maybe this is a new feature?

No problem. We concatenated the certificate and key files into a single text file (cat ServerCertificate.crt privatekey.key > concat.crt; note that this needs cat, not echo, which would only write the literal filenames). The resulting concat.crt consists of the

-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----

section from the ServerCertificate.crt file as well as the

-----BEGIN EC PARAMETERS-----
-----END EC PARAMETERS-----

and

-----BEGIN EC PRIVATE KEY-----
-----END EC PRIVATE KEY-----

sections from the privatekey.key file.
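A quick sanity check that all three blocks actually made it into the file:

grep -n "BEGIN" concat.crt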

We went to upload concat.crt to the Key Vault and were again given the same error as before. However, after re-reading the document, we were disappointed to see this quote:

We currently don’t support EC keys in PEM format.

Section: Formats of Import we support

It surprises me that Microsoft does not support elliptic curve-based keys in PEM format. I am not aware of any technical limitation on the certificate side, so this seems very much like a Microsoft-specific thing; if anyone is able to provide insight into this, I’d love to hear it.

OK, we’ll generate a 2048-bit RSA key and CSR, and then try again.

openssl genrsa -des3 -out rsaprivate.key 2048
openssl req -new -key rsaprivate.key -out RSA.csr
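Since the -des3 flag encrypts the key with a passphrase, it’s worth confirming you can still read it back before submitting anything to the CA (openssl will prompt for the passphrase):

openssl rsa -in rsaprivate.key -check -noout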

We uploaded the CSR to the CA as a re-key request, and waited.

When the certificate was finally issued (as cert.pem), we could take the final steps to prepare it for upload to the Key Vault. We concatenated the key and certificate together (cat rsaprivate.key cert.pem > rsacert.crt) and went to upload it to the Key Vault.

And yet again, it failed. After a bunch of research on security blogs and StackOverflow, it turns out that openssl’s default output format for an RSA private key is PKCS#1, while Key Vault expects it to be in PKCS#8 format. So now, time to convert it.

openssl pkcs8 -topk8 -inform PEM -outform PEM -nocrypt -in rsaprivate.key -out rsaprivate8.key
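You can tell the two formats apart from the first line of each file; the PEM header makes the difference obvious:

head -1 rsaprivate.key     # -----BEGIN RSA PRIVATE KEY----- (PKCS#1)
head -1 rsaprivate8.key    # -----BEGIN PRIVATE KEY----- (PKCS#8)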

Finally, we re-concatenated the rsaprivate8.key and cert.pem files into a single rsacert8.crt file (cat rsaprivate8.key cert.pem > rsacert8.crt), which we could import into Key Vault.

It worked!
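For anyone who prefers the CLI to the portal, the equivalent import should look roughly like this (the vault and certificate names are placeholders; since we converted the key with -nocrypt, no --password is needed):

az keyvault certificate import --vault-name my-vault --name my-cert --file rsacert8.crt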

We now have our SSL certificate in our HSM-backed Azure Key Vault that we can apply to our various web properties without having to store the actual certificate files anywhere, which makes our auditors very happy.

Terraform: "Error: insufficient items for attribute 'sku'; must have at least 1"

Last week, we were attempting to deploy a new Terraform-owned resource, but every time we ran terraform plan or terraform apply, we got the error Error: insufficient items for attribute "sku"; must have at least 1. We keep our Terraform code in an Azure DevOps project, with approvals required for any new commits even into our dev environment, so we were flummoxed.

Our first thought was the provider upgrade: we had recently moved the Terraform azurerm provider from 1.28.0 to 1.32.0, and we knew for a fact that the azurerm_key_vault resource had changed from accepting a sku {} block to simply requiring a sku_name property. We tried every combination of having either, both, and neither of them defined, and we still received the error. We even tried downgrading back to 1.28.0 as a fallback, but it made no difference. At this point we were relatively confident that it wasn’t the provider.

The next thing we looked for was any other resource that had a sku {} block defined. This included our azurerm_app_service_plans, our azurerm_virtual_machines, and our azurerm_vpn_gateway. We searched for and commented out all of the respective declarations in our .tf files, but still we received the error.
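For the record, a quick way to hunt down every sku declaration across a Terraform codebase:

grep -rn 'sku' --include='*.tf' .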

Now we were starting to get nervous. Nothing we tried would solve the problem, and we were starting to get a backlog of requests for new resources that we couldn’t deploy because no matter what we did, whether adding or removing potentially broken code, we couldn’t deploy any new changes. To say the tension on our team was palpable would be the understatement of the year.

At this point we needed to take a step back and analyze the problem logically, so we all took a break from Terraform to clear our minds and de-stress a bit. We started to suspect something in the state file was causing the problem, but we weren’t really sure what. We decided to take the sledgehammer approach: using terraform state rm, we removed every instance of those commented-out resources we found above.
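The process looked roughly like this (the resource address in the second command is made up for illustration; terraform state rm only removes the entry from state and does not touch the actual Azure resource):

terraform state list | grep -E 'app_service_plan|virtual_machine|vpn_gateway'
terraform state rm azurerm_app_service_plan.example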

This worked. Now we could run terraform plan and terraform apply without issue, but we still weren’t sure why. That didn’t bode well if the problem recurred; we couldn’t just keep taking a sledgehammer to the environment, as it’s far too disruptive. We needed to figure out the root cause.

We opened an issue on the provider’s GitHub page for further investigation, and after some digging by community members and HashiCorp employees, it turns out that Microsoft’s API returns a different response for a missing App Service Plan than it does for any other missing resource. The provider assumed the response would be the same for all resources, and that turned out to be a bad assumption.

This turned out to be the key for us. Someone had deleted several App Service Plans from the Azure portal (thinking they were not being used), so our theory is that when the provider checks the status of a deleted App Service Plan, the unusual response makes Terraform believe the plan still exists, even though the response contains no sku {} data, which is exactly the data Terraform then reports as missing.

Knowing the core problem, the error message Error: insufficient items for attribute "sku"; must have at least 1 kind of makes sense now: the sku attribute is missing at least one item. It just doesn’t make clear that the "insufficient items" are on the Azure side, not the Terraform / .tf side.

They’ve added a workaround in the provider until Microsoft updates the API to respond like all of the other resources.

Have you seen this error before? What did you do to solve it?

Add your AWS API key info in a Key Vault for Terraform

EDIT: Updated on July 10, 2019; modified second- and third-last paragraphs to show the correct process of retrieving the AWS_SECRET_ACCESS_KEY from the Key Vault and setting it as a protected environment variable

Our primary cloud is Azure, which makes building DevOps pipelines with automation scoped to a particular subscription very easy. But what happens when we want to deploy something in AWS, given that storing keys in source control is A Very Bad Idea™?

Simple, we use Azure Key Vault.

First, we created a Key Vault specifically for this purpose called company-terraform, which will be used to store the various secrets for Terraform-based deployments. When you tie an Azure DevOps service connection to an Azure subscription, it creates an “application” in the Azure Enterprise Applications list, so give that application Get and List secret permissions on this vault.
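If you’d rather script that permission grant than click through the portal, the Azure CLI equivalent should look something like this (the service principal’s application ID here is a placeholder):

az keyvault set-policy --name company-terraform --spn 00000000-0000-0000-0000-000000000000 --secret-permissions get list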

Next, we created a secret called AmazonAPISecretKey and set its value to the actual secret access key you are shown when you enable programmatic access for a user in the AWS IAM console.
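Again, this can be scripted; the value below is AWS’s documented example key, not a real one:

az keyvault secret set --vault-name company-terraform --name AmazonAPISecretKey --value "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"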

In our Azure DevOps Terraform build and release pipelines, we then added an Azure Key Vault step, selecting the appropriate subscription and Key Vault. Once selected, we added a Secrets filter of AmazonAPISecretKey, meaning the step will only ever fetch that one secret on each run; if you will be using multiple secrets in a particular pipeline, add them all to this filter list.
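Under the hood, that step is doing little more than the following (a rough CLI equivalent, shown only to illustrate what gets fetched and mapped to the $(AmazonAPISecretKey) variable):

az keyvault secret show --vault-name company-terraform --name AmazonAPISecretKey --query value -o tsv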

Finally, we can now use the string $(AmazonAPISecretKey) in any shellexec or other pipeline task to authenticate against AWS, while never having to commit the actual key to a viewable source.

Since one of the methods the Terraform AWS provider can use to authenticate is by using the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, we will set them up so that DevOps can use them in its various tasks.
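This is the same idea as running Terraform locally with the keys exported in your shell (the values here are AWS’s documented examples), except that DevOps will inject them for us:

export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
terraform plan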

First, open your Build or Release pipeline and select the Variables tab. Create a new variable called AWS_ACCESS_KEY_ID and set the value to your access key ID (a 20-character string starting with AKIA, like AKIAIOSFODNN7EXAMPLE). Then create a second variable called AWS_SECRET_ACCESS_KEY, which you can leave blank, but click the padlock icon next to it to tell DevOps that its contents are secret and shouldn’t be shared.

Now create a shellexec task and add the following command to it, which will set the AWS_SECRET_ACCESS_KEY environment variable to the contents of the Key Vault entry we created earlier:

echo "##vso[task.setvariable variable=AWS_SECRET_ACCESS_KEY;]$(AmazonAPISecretKey)"

And there you have it! You can now reference your AWS accounts from within your Terraform structure without ever actually exposing your keys to prying eyes!
