How to import a publicly-issued certificate into Azure Key Vault

Today, after spending several hours swearing and researching how to import a publicly-issued certificate into Azure Key Vault, I thought I’d share the entire process of how we did it from start to finish so that you can save yourself a bunch of time and get back to working on fun stuff, like spamming your co-workers with Cat Facts. We learned a bunch about the different encoding formats of certificates and some of their restrictions, both within Azure Key Vault as well as with the certificate types themselves. Let’s get started!

Initially, we created an elliptic curve (EC) private key using the prime256v1 curve, and a CSR, by doing:

openssl ecparam -out privatekey.key -name prime256v1 -genkey
openssl req -new -key privatekey.key -out request.csr -sha256

making sure not to include an email address or password. I am not actually clear on the technical reasoning behind this, but I saw it noted on several sites.
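
If you want to avoid the interactive prompts entirely (and make it harder to accidentally set an email address or challenge password), you can pass the subject on the command line instead. A minimal sketch, with hypothetical subject values you would swap for your own:

openssl req -new -key privatekey.key -out request.csr -sha256 -subj "/C=US/ST=SomeState/L=SomeCity/O=Example Corp/CN=www.example.com"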

We submitted the CSR to our certificate authority (CA) and shortly thereafter got back a signed PEM file.

We next needed to create a single PFX/PKCS12-formatted, password-protected certificate, so we grabbed our signed certificate (ServerCertificate.crt) and our CA’s intermediate certificate chain (Chain.crt) and then did:

openssl pkcs12 -export -inkey privatekey.key -in ServerCertificate.crt -certfile Chain.crt -out Certificate.pfx
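
Before moving on, one way to sanity-check the bundle (it will prompt for the export password you just set) is:

openssl pkcs12 -info -in Certificate.pfx -nokeys

This should print the server certificate and the intermediate chain, confirming everything made it into the PFX.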

But when we went to import it into the Key Vault with the correct password, it threw a generic “We don’t like this certificate” error. The first thing we did was check out the link provided in the error, where we saw that we could import PEM-formatted certificates directly. I didn’t remember this being the case in the past, so maybe it’s a new feature?

No problem. We concatenated the certificate and key files into a single text file (cat ServerCertificate.crt >> concat.crt ; cat privatekey.key >> concat.crt), creating a file called concat.crt consisting of the

-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----

section from the ServerCertificate.crt file as well as the

-----BEGIN EC PARAMETERS-----
-----END EC PARAMETERS-----

and

-----BEGIN EC PRIVATE KEY-----
-----END EC PRIVATE KEY-----

sections from the privatekey.key file.
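
A quick way to confirm the combined file contains everything it should is to check for the PEM headers:

grep -- "-----BEGIN" concat.crt

You should see one CERTIFICATE block plus the EC PARAMETERS and EC PRIVATE KEY blocks listed above.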

We went to upload concat.crt to the Key Vault and were again given the same error as before. However, after re-reading the document, we were disappointed to see this quote:

We currently don’t support EC keys in PEM format.

Section: Formats of Import we support

It surprises me that Microsoft does not support elliptic curve-based keys in PEM format. I am not aware of any technical limitation on the certificate side, so this seems very much like a Microsoft-specific thing; however, if anyone is able to provide insight into this, I’d love to hear it.

OK, we’ll generate a 2048-bit RSA key and CSR, and then try again.

openssl genrsa -des3 -out rsaprivate.key 2048
openssl req -new -key rsaprivate.key -out RSA.csr
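
If you want to double-check that the new CSR actually matches the new key before sending it off, comparing the modulus of each is a quick sanity test (the first command will prompt for the key’s passphrase):

openssl rsa -in rsaprivate.key -noout -modulus | openssl md5
openssl req -in RSA.csr -noout -modulus | openssl md5

If the two hashes match, the CSR was generated from that key.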

We uploaded the CSR to the CA as a re-key request, and waited.

When the certificate was finally issued (as cert.pem), we could take the final steps to prepare it for upload to the Key Vault. We concatenated the key and certificate together (cat rsaprivate.key >> rsacert.crt ; cat cert.pem >> rsacert.crt) and went to upload it to the Key Vault.

And yet again, it failed. After a bunch of research on security blogs and Stack Overflow, it turned out that the default output format of the private key is PKCS#1, while Key Vault expects it to be in PKCS#8 format. So now it was time to convert it.

openssl pkcs8 -topk8 -inform PEM -outform PEM -nocrypt -in rsaprivate.key -out rsaprivate8.key
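
You can tell the two formats apart just by looking at the first line of each key file:

head -n 1 rsaprivate.key
head -n 1 rsaprivate8.key

A traditional PKCS#1 key starts with -----BEGIN RSA PRIVATE KEY-----, while an unencrypted PKCS#8 key starts with -----BEGIN PRIVATE KEY----- (an encrypted PKCS#8 key would instead say -----BEGIN ENCRYPTED PRIVATE KEY-----).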

Finally, we re-concatenated the rsaprivate8.key and cert.pem files into a single rsacert8.crt file (cat rsaprivate8.key >> rsacert8.crt ; cat cert.pem >> rsacert8.crt), which we could import into Key Vault.
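
If you’d rather script the upload than click through the portal, the Azure CLI can do the same import. A minimal sketch, assuming a hypothetical vault named my-vault and a certificate name of my-cert:

az keyvault certificate import --vault-name my-vault --name my-cert --file rsacert8.crt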

It worked!

We now have our SSL certificate in our HSM-backed Azure Key Vault that we can apply to our various web properties without having to store the actual certificate files anywhere, which makes our auditors very happy.

Terraform: “Error: insufficient items for attribute “sku”; must have at least 1”

Last week, we were attempting to deploy a new Terraform-managed resource, but every time we ran terraform plan or terraform apply, we got the error Error: insufficient items for attribute "sku"; must have at least 1. We keep our Terraform code in an Azure DevOps project, with approvals required for any new commits, even into our dev environment, so we were flummoxed.

Our first thought was the recent upgrade of the Terraform azurerm provider from 1.28.0 to 1.32.0; we knew for a fact that the azurerm_key_vault resource had changed from accepting a sku {} block to simply requiring a sku_name property. We tried every combination of having either, both, and neither of them defined, and we still received the error. We even tried downgrading back to 1.28.0 as a fallback, but it made no difference. At this point we were relatively confident that it wasn’t the provider.

The next thing we looked for was any other resource that had a sku {} block defined. This included our azurerm_app_service_plans, our azurerm_virtual_machines, and our azurerm_vpn_gateway. We searched for and commented out all of the respective sku declarations in our .tf files, but we still received the error.
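
If you ever need to do the same hunt, a quick grep across the repository will list every sku declaration and where it lives:

grep -rn "sku" --include="*.tf" .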

Now we were starting to get nervous. Nothing we tried solved the problem, and a backlog of requests for new resources was building up, because no matter what we did, whether adding or removing potentially broken code, we couldn’t deploy any new changes. To say the tension on our team was palpable would be the understatement of the year.

At this point we needed to take a step back and analyze the problem logically, so we all took a break from Terraform to clear our minds and de-stress a bit. We started to suspect that something in the state file was causing the problem, but we weren’t really sure what. We decided to take the sledgehammer approach: using terraform state rm, we removed every instance of the commented-out resources we found above.
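
For anyone who ends up taking the same sledgehammer approach, the pattern looks roughly like this (the resource address below is hypothetical; terraform state list shows the real ones in your state):

terraform state list | grep azurerm_app_service_plan
terraform state rm azurerm_app_service_plan.example

Keep in mind that terraform state rm only removes the resource from the state file; it does not touch anything in Azure.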

This worked. Now we could run terraform plan and terraform apply without issue, but we still weren’t sure why. That didn’t bode well if the problem ever recurred; we couldn’t just keep taking a sledgehammer to the environment, as it’s far too disruptive. We needed to figure out the root cause.

We opened an issue on the provider’s GitHub repository for further investigation, and after some digging by other community members and Terraform employees themselves, it seems that Microsoft’s API returns a different response for a missing App Service Plan than it does for any other missing resource. The provider assumed the response would be the same for all resources, but that turned out to be a bad assumption.

This turned out to be the key for us. Someone had deleted several App Service Plans from the Azure portal (thinking they were not being used), so our theory is that when the provider checks the status of a missing App Service Plan, the unexpected response makes Terraform believe the resource still exists, even though there’s no sku {} data in it, which is what triggers the complaint about the missing attribute.

Knowing the core problem, the error message Error: insufficient items for attribute "sku"; must have at least 1 kind of makes sense now: the sku attribute really is missing at least one item; it just doesn’t make clear that the “insufficient items” are on the Azure side, not the Terraform / .tf side.

They’ve added a workaround in the provider until Microsoft updates the API to respond like all of the other resources.

Have you seen this error before? What did you do to solve it?

Postman collections

An open letter to API developers

Dear API developer,

Let me first thank you for making the decision to offer your data for me to consume. In a time where many companies are holding on to their data as tightly as they can, it is commendable that your company is forward-thinking enough to realize that the world is better when we can share information and grow together.

Depending on your industry, and whether the data being offered was the core offering of your organization or the by-product of another process, I realize that it can be quite the fight to convince senior leadership that the data being offered is more valuable when it is made (semi-)public. I’m sure you had to sit through many boring meetings while extolling the virtues of sharing vs. hoarding, time which could have been better spent doing almost anything else.

You have gone through plenty of testing, ensuring that the data from your API is being returned correctly, that it is formatted logically, and that it is (hopefully) highly available. You have fed it countless variations of request values and confirmed that the responses match what is expected. And you probably did at least some of that testing – if not a majority of it – with Postman.

Why, then, are you making your customers rebuild their own Postman collection instead of just sharing the one you and your team have built? Sharing it not only saves time on your consumers’ side, it ensures that as you update your API, they will immediately have the most up-to-date example of what a correctly formatted request looks like, rather than having to dig through esoteric documentation to discover that a request must now be submitted in all lowercase, or must have its JSON body fields in a specific order. Even if it’s not to the JSON API spec, I can quickly compare my request to yours and see my problem right away, saving me a call to you or your support team.

My personal suggestion is to fold the cost of your Postman Pro licenses and the maintenance of the collection into the monthly subscription fee your customers are already paying, but the financial decisions are ultimately up to you. All I ask is that you give me a way to immediately import your API definition into my Postman instance, and keep it up to date over the lifetime of our relationship.

If I can get up and running quickly, that is worth much, much more to me than having a few extra fields that I may or may not ever use. It will make me much more likely to obtain and retain your services, even if your offering is not otherwise as fully featured as your competitor’s.

With my jaw mostly unclenched,
nexxai
