How to import a publicly-issued certificate into Azure Key Vault

Today, after spending several hours swearing and researching how to import a publicly-issued certificate into Azure Key Vault, I thought I’d share the entire process of how we did it from start to finish so that you can save yourself a bunch of time and get back to working on fun stuff, like spamming your co-workers with Cat Facts. We learned a bunch about the different encoding formats of certificates and some of their restrictions, both within Azure Key Vault as well as with the certificate types themselves. Let’s get started!

Initially, we created an elliptic curve (EC) private key (using the prime256v1 curve) and a CSR:

openssl ecparam -out privatekey.key -name prime256v1 -genkey
openssl req -new -key privatekey.key -out request.csr -sha256

making sure not to include an email address or password. I’m not actually clear on the technical reasoning behind this, but I saw it noted on several sites.
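If you’d rather skip the interactive prompts entirely (including the optional email address and challenge password), you can supply the subject on the command line instead. The subject values here are just placeholders for your own organization’s details:

openssl req -new -key privatekey.key -out request.csr -sha256 -subj "/C=US/ST=Washington/L=Seattle/O=Example Corp/CN=www.example.com"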

We submitted the CSR to our certificate authority (CA) and shortly thereafter got back a signed PEM file.

We next needed to create a single PFX/PKCS12-formatted, password-protected certificate, so we grabbed our signed certificate (ServerCertificate.crt) and our CA’s intermediate certificate chain (Chain.crt) and then did:

openssl pkcs12 -export -inkey privatekey.key -in ServerCertificate.crt -certfile Chain.crt -out Certificate.pfx
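Before attempting the import, it’s worth sanity-checking the PFX. This prints a summary of its contents (it will prompt for the export password) without writing out the keys:

openssl pkcs12 -info -in Certificate.pfx -noout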

But when we went to import it into the Key Vault with the correct password, it threw a general “We don’t like this certificate” error. The first thing we did was check out the provided link and saw that we could import PEM-formatted certificates directly. I didn’t remember this being the case in the past, so maybe this is a new feature?

No problem. We concatenated the certificate and key files into a single text file (cat ServerCertificate.crt >> concat.crt ; cat privatekey.key >> concat.crt), creating a file called concat.crt which consists of the

-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----

section from the ServerCertificate.crt file as well as the

-----BEGIN EC PARAMETERS-----
-----END EC PARAMETERS-----

and

-----BEGIN EC PRIVATE KEY-----
-----END EC PRIVATE KEY-----

sections from the privatekey.key file.
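A quick way to confirm that all of the expected sections made it into the combined file is to list the PEM headers:

grep -- '-----BEGIN' concat.crt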

We went to upload concat.crt to the Key Vault and were again given the same error as before. However, after re-reading the documentation, we were disappointed to find this quote:

We currently don’t support EC keys in PEM format.

Section: Formats of Import we support

It surprises me that Microsoft does not support elliptic curve-based keys in PEM format. I am not aware of any technical limitation on the certificate side, so this seems very much like a Microsoft-specific thing; if anyone is able to provide insight into this, I’d love to hear it.

OK, we’ll generate a 2048-bit RSA key and CSR, and then try again.

openssl genrsa -des3 -out rsaprivate.key 2048
openssl req -new -key rsaprivate.key -out RSA.csr
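As an aside, the key and CSR can also be generated in a single step; this should be equivalent to the two commands above, and will likewise prompt for a passphrase:

openssl req -new -newkey rsa:2048 -keyout rsaprivate.key -out RSA.csr -sha256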

We uploaded the CSR to the CA as a re-key request, and waited.

When the certificate was finally issued (as cert.pem), we could take the final steps to prepare it for upload to the Key Vault. We concatenated the key and certificate together (cat rsaprivate.key >> rsacert.crt ; cat cert.pem >> rsacert.crt) and went to upload it to the Key Vault.

And yet again, it failed. After a bunch of research on security blogs and StackOverflow, it turns out that the default output format of the private key is PKCS1, and Key Vault expects it to be in PKCS8 format. So now, time to convert it.

openssl pkcs8 -topk8 -inform PEM -outform PEM -nocrypt -in rsaprivate.key -out rsaprivate8.key
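An easy way to tell the two formats apart is the PEM header: a PKCS1 RSA key begins with -----BEGIN RSA PRIVATE KEY-----, while an (unencrypted) PKCS8 key begins with -----BEGIN PRIVATE KEY-----. A quick check on the converted file:

head -1 rsaprivate8.key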

Finally, we re-concatenated the rsaprivate8.key and cert.pem files into a single rsacert8.crt file (cat rsaprivate8.key >> rsacert8.crt ; cat cert.pem >> rsacert8.crt), which we could import into Key Vault.
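Before uploading, it’s also worth confirming that the private key actually matches the issued certificate. The classic check is to compare a hash of the modulus from each file:

openssl x509 -noout -modulus -in cert.pem | openssl md5
openssl rsa -noout -modulus -in rsaprivate8.key | openssl md5

If the two hashes match, the key and certificate belong together.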

It worked!

We now have our SSL certificate in our HSM-backed Azure Key Vault that we can apply to our various web properties without having to store the actual certificate files anywhere, which makes our auditors very happy.
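As an aside, the same import can be scripted with the Azure CLI rather than clicking through the Portal. A minimal sketch, where my-vault and my-tls-cert are placeholders for your own vault and certificate names:

az keyvault certificate import --vault-name my-vault --name my-tls-cert --file rsacert8.crt

Since the key in the PEM file is unencrypted, no --password is needed.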

Azure Logic Apps and SQL Injection

Michael Howard of Microsoft put out a great post about how easy it is to inadvertently create massive security holes, in the form of SQL injection vulnerabilities, in your HTTP-accessible Azure Logic App by not using the ‘Execute a SQL Query’ action correctly. He also gives some simple examples of how to protect yourself in the process.

To summarize: if you are not using prepared statements or stored procedures, it is trivial for an attacker to construct a query that does anything from truncating or dropping tables, to changing data within the database, to getting full remote command execution using a command like SQL Server’s xp_cmdshell.
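To illustrate the shape of the problem (this is my own sketch with a hypothetical Users table and name field, not an example from Michael’s post): if your ‘Execute a SQL Query’ action builds its query text by interpolating a value straight out of the incoming HTTP request, like

SELECT * FROM Users WHERE Name = '@{triggerBody()?['name']}'

then a request with name set to something like '; DROP TABLE Users; -- becomes part of the SQL the server executes. Passing the value as a typed parameter to a stored procedure (for example, via the ‘Execute stored procedure’ action) keeps attacker input as data rather than code.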

Please be extremely careful when you’re building your Logic Apps – they may be simple to build but that also means it’s just as simple to make a glaring security mistake that could cost your business time and money.

How to diagnose per-instance issues in Azure App Service

We have several micro-services that we run as App Services within Azure. Multiple times in the past few weeks, we’ve experienced problems where a single instance has decided to go crazy, but not crazy enough for Azure to take it out of rotation. Being able to diagnose these per-instance issues is imperative to offering a functioning App Service.

The first thing we’ve done is set up Alerts to monitor each App Service not just as a whole, but also per-instance. (NOTE: these are services we have not yet migrated to Terraform, so we created these alerts manually, just as the services themselves were. The alert definitions have since been built into our Terraform stack and get deployed automatically as we move these micro-services over to the new Terraform-managed stack.) To get notified of broken single instances (a scriptable sketch of the same alert follows the list below):

  1. Open the App Service in the Portal
  2. Under the ‘Monitoring’ section of the blade, select ‘Alerts’
  3. Select ‘New Alert Rule’
  4. Under ‘Condition’, choose ‘Add’
  5. Choose the ‘Http Server Errors’ signal
  6. Place a check in the box next to the ‘Instance’ dimension
  7. Set your Threshold settings according to your preference (for the record, we are using Dynamic / Medium / 5 minutes / Every 5 Minutes)
  8. Click ‘Done’
  9. Set up your Action Group accordingly
  10. Save the alert
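For the Terraform-managed side, or just to avoid the clicking, roughly the same alert can be created with the Azure CLI. This is only a sketch mirroring the Dynamic / Medium / 5-minute settings from step 7: my-rg, myapp, and my-action-group are placeholders, and you should double-check the condition syntax against the current az monitor metrics alert documentation:

az monitor metrics alert create \
  --name "myapp-per-instance-http5xx" \
  --resource-group my-rg \
  --scopes "$(az webapp show --resource-group my-rg --name myapp --query id --output tsv)" \
  --condition "total Http5xx > dynamic medium 4 of 4 where Instance includes *" \
  --window-size 5m \
  --evaluation-frequency 5m \
  --action my-action-group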

Within about 10 minutes, the alert will begin monitoring each instance, which should give you better insight not just into the health of your application as a whole, but into how each of the pieces that comprise it is operating.

This is important because you may have 10 instances running a single App Service, and the failure of a single instance may not generate enough errors in aggregate to trip an alert, depending on your routing scheme and traffic levels.

I personally believe that more visibility is always a positive, and so being able to detect per-instance issues in your Azure App Service can result in shorter outages and ultimately happier customers.
