We ran into an issue this morning where we needed to renew our Azure DevOps Service Connection’s expired secret, but there is no officially supported way to do this. The error was AADSTS7000215 - invalid client id or secret. Thankfully, it’s not that difficult to solve.
Fake a change
Open your project in ADO (https://dev.azure.com/[GROUP]/[PROJECT]). At […]
Now, if you ever need to renew an Azure DevOps Service Connection’s expired secret, hopefully you can avoid wasting precious time trying to figure out how to do it manually and just trick the system into doing it for you.
Azure recently implemented a change to the API Management service whereby deleting an instance only puts it into a soft-deleted state rather than completely nuking it from orbit. This may be desirable for data recovery purposes, but it means that if you run a terraform destroy on an environment containing an APIM instance and then try to rebuild that environment, the rebuild will fail because the name you’re trying to use is still being held by the previously removed instance. And since neither Azure CLI nor Az PowerShell natively support purging, I’m going to show you how to manually purge a soft-deleted Azure API Management instance.
NOTE: The below script uses the basic Az PowerShell tools but with a little elbow grease could be adapted to bash/zsh (provided you have a way of retrieving your Azure access token using OAuth).
# Grab an ARM access token from your current Az PowerShell session
$token = Get-AzAccessToken

# Build the DELETE request against the deletedservices endpoint
$request = @{
    Method  = 'DELETE'
    Uri     = "https://management.azure.com/subscriptions/{subscriptionGuid}/providers/Microsoft.ApiManagement/locations/{region}/deletedservices/{apimName}?api-version=2020-06-01-preview"
    Headers = @{
        Authorization = "Bearer $($token.Token)"
    }
}

Invoke-RestMethod @request
The only values you’ll need to supply are the subscriptionGuid, region, and apimName in the Uri.
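If you’d rather do this from bash/zsh as mentioned above, here’s a rough sketch of the same call, using the Azure CLI to fetch the access token and curl to send the request (same placeholder values; treat it as untested):
# Grab an ARM access token from your current Azure CLI login
TOKEN=$(az account get-access-token --query accessToken --output tsv)

# Issue the same DELETE against the deletedservices endpoint
curl -X DELETE \
  -H "Authorization: Bearer ${TOKEN}" \
  "https://management.azure.com/subscriptions/{subscriptionGuid}/providers/Microsoft.ApiManagement/locations/{region}/deletedservices/{apimName}?api-version=2020-06-01-preview"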
Now the next time you’re stuck wondering why you can’t tear down and rebuild your environments with your IaC tool of choice, you’ll know how to purge a soft-deleted Azure API Management instance.
Source: Microsoft docs
Many SSL certificate authorities (CAs) do not natively issue certificates in .PFX format, which means that if you plan on installing one on something like an Azure App Service, you may encounter issues. Today, let’s figure out how to convert a CRT SSL certificate chain to PFX format.
First, let’s generate a private key and certificate signing request. Run the following command, and answer the questions as accurately as possible. The private key file (domain.key) should be kept secret and protected.
openssl req \
-newkey rsa:2048 -nodes -keyout domain.key \
-out domain.csr
Next, take the contents of domain.csr (it is just a plaintext file containing your answers and some other non-secret information, base64-encoded; it can be opened in any text editor) and request your certificate through your CA. This process varies per certificate authority, and so is out of scope for this article.
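If you’d like to sanity-check the CSR before submitting it (optional, but a good habit), openssl can decode and verify it locally:
openssl req -in domain.csr -noout -text -verify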
[Time passes]
Now, your CA provides you with a .ZIP file containing the following files:
your_domain_com.crt
AAACertificateServices.crt
DomainValidationSecureServerCA.crt
USERTrustRSAAAACA.crt
(where your_domain_com.crt is the actual certificate file and the other .CRT files represent the various certificates that allow a browser to chain up to the root; while the filenames and number of files will almost certainly differ for each certificate authority, the point here is that there will be some number of .CRT files and that they are all important)
Extract those files into the same folder as the domain.key file from earlier.
Finally, let’s combine our certificate with the rest of the chain into a single .PFX file by running the following command. Your site’s certificate is specified in the -in parameter, and each of the chain certificates gets its own -certfile entry.
openssl pkcs12 -export -out certificate.pfx \
-inkey domain.key \
-in your_domain_com.crt \
-certfile AAACertificateServices.crt \
-certfile DomainValidationSecureServerCA.crt \
-certfile USERTrustRSAAAACA.crt
NOTE: Azure App Services and Azure Key Vaults require a password-protected .PFX file, so ensure that you enter one when prompted. When you go to upload the certificate and you are required to select the .PFX file and a password, the password you created here is the one it’s referring to.
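If you want to confirm that the .PFX opens with the password you just set before uploading it anywhere, a quick optional check is:
openssl pkcs12 -info -in certificate.pfx -noout
It will prompt for the import password and fail loudly if the file or password is wrong.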
And you’re done! You now have a file in that folder (certificate.pfx) that you can upload/install to ensure your site is protected against MITM attacks.
The problem
We’ve recently begun attempting to scale our Azure App Services up and down for our test environments: scaling them up to match production performance levels (SKU: PremiumV2) during the day and then back down to minimal (SKU: Basic) at the end of the working day to save on costs. Just in the last month or two, however, we’ve started to get the error “Cannot use the SKU Basic with File Change Audit for site XXX-XXX-XXX-XXX”.
Initially, we thought it had to do with a Diagnostic setting that was tracking AppServiceFileAuditLogs, but even after removing that Diagnostic setting before attempting the scale-down, the issue persisted.
After banging our heads against the wall with no progress being made, we opened a low-severity ticket with Azure Support to understand what was going on. They suggested we remove a handful of App Configuration settings; again, these did not have any effect.
It was at this time that I was in the portal browsing around for something else and happened to notice the “JSON View” option at the top right of the App Service, so I checked it out and grepped for audit just to see what I’d find.
Bingo: fileChangeAuditEnabled
Not so bingo: fileChangeAuditEnabled: null
But seeing that setting got me thinking: what if there’s a bug in what JSON View is showing? The error we’re receiving clearly says the feature is enabled, but the portal is showing null; what if there’s some kind of type mismatch going on behind the portal that displays null but actually has a value set? What if we could use a different mechanism to test that theory?
Well, it just so happens that Azure PowerShell has a Get-AzResource function that can do just that, and this blog post shows us how to do it.
First, let’s get the resource:
$appServiceRG = "example-resource-group"
$appServiceName = "example-app-service-name"
$config = Get-AzResource -ResourceGroupName $appServiceRG `
-ResourceType "Microsoft.Web/sites/config" `
-ResourceName "$($appServiceName)/web" `
-apiVersion 2016-08-01
We now have an object in $config whose properties we can check by running:
$config.Properties
And there it is:
fileChangeAuditEnabled : True
Now all we need to do is set it to false (and also unset a property called ReservedInstanceCount, which is a duplicate of preWarmedInstanceCount but cannot be included when we try to reset the other setting, due to what I assume is Azure just keeping it around for legacy reasons):
$config.Properties.fileChangeAuditEnabled = "false"
$config.Properties.PSObject.Properties.Remove('ReservedInstanceCount')
Next, as per the suggestion from Parameswaran in the comments (thank you!), create a new array (since existing arrays are of fixed size and cannot be modified) while removing AppServiceFileAuditLogs from the list of azureMonitorLogCategories:
$newCategories = @()
ForEach ($entry in $config.Properties.azureMonitorLogCategories) {
If ($entry -ne "AppServiceFileAuditLogs") {
$newCategories += $entry
}
}
$config.Properties.azureMonitorLogCategories = $newCategories
And finally, let’s set the setting:
$config | Set-AzResource -Force
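As an aside, if you prefer the Azure CLI over Az PowerShell, the generic az resource command can flip the same property. This is an untested sketch using a placeholder resource ID, and it may bump into the same ReservedInstanceCount quirk described above:
# Set fileChangeAuditEnabled directly on the web config resource
az resource update \
  --ids "/subscriptions/{subscriptionGuid}/resourceGroups/example-resource-group/providers/Microsoft.Web/sites/example-app-service-name/config/web" \
  --set properties.fileChangeAuditEnabled=false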
Next, for any Deployment Slots you have on this resource, repeat these steps, but using the following resource retrieval code:
$config = Get-AzResource -ResourceGroupName $appServiceRG `
-ResourceType "Microsoft.Web/sites/slots" `
-ResourceName "$($appServiceName)" `
-apiVersion 2016-08-01
Now, when you try to scale down from a PremiumV2 SKU to a Basic SKU, you will no longer receive the error of “Cannot use the SKU Basic with File Change Audit for site XXX-XXX-XXX-XXX”.
As many of you have probably gathered, over the past few weeks, I’ve been working on building a process for deploying an Azure App Service from scratch, including DNS and TLS, in a single Terraform module.
Today, I write this post with success in my heart, and at the bottom, I provide copies of the necessary files for your own usage.
One of the biggest hurdles I faced was trying to integrate Cloudflare’s CDN services with Azure’s Custom Domain verification. Typically, I’ll rely on the options available in the GUI as the inclusive list of “things I can do”, so up until now, if we wanted to stand up a multi-region App Service, we had to do the following:
1. Stand up the App Service in each region, using the default azurewebsites.net hostname for HTTPS for each region (R1 and R2): example-app-eastus.azurewebsites.net (R1), example-app-westus.azurewebsites.net (R2)
2. Point example-app.domain.com -> example-app-eastus.azurewebsites.net (proxying off) and verify the custom domain on R1
3. Point example-app.domain.com -> example-app-westus.azurewebsites.net (proxying off) and verify the custom domain on R2
While this eventually accomplishes the task, the failure mode it introduces is that if you ever want to add a third (or fourth, or fifth…) region, you not only have to temporarily direct all traffic to your brand new single instance to verify the domain, you also have to turn off proxying, exposing the fact that you are using Azure (bad OPSEC).
After doing some digging however, I came across a Microsoft document that explains that there is a way to add a TXT record which you can use to verify ownership of the domain without a bunch of messing around with the original record you’re dealing with.
This is great because we can just add new awverify records for each region and Azure will trust that we own them. Terraform introduces a new wrinkle, though: it creates the record at Cloudflare so fast that Cloudflare’s infrastructure often doesn’t have time to replicate the new entry across their fleet before the verification is attempted, which means the lookup will fail and Terraform will die.
To get around this, we added a null_resource that just executes a 30 second sleep to allow time for the record to propagate through Cloudflare’s network before attempting the lookup.
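If you’d rather confirm the record is actually resolvable than trust a fixed sleep, a quick manual check from any terminal (hypothetical record name shown) is:
dig +short TXT awverify.example-app.domain.com
If that returns your verification value, Azure’s lookup should succeed as well.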
I’ve put together a copy of our Terraform modules for your perusal and usage. Using this module will allow you to easily deploy all of your microservices in a highly available configuration by utilizing multiple regions.
For the last two days, I’ve been trying to deploy some new microservices in an Azure App Service using a certificate stored in Key Vault. By now, you’ve probably figured out that we love them around here. I’ve also been slamming my head against the wall because of some not-well-documented functionality about granting permissions to the Key Vault.
As a quick primer, here’s the basics of what I was trying to do:
resource "azurerm_app_service" "centralus-app-service" { name = "${var.service-name}-centralus-app-service-${var.environment_name}" location = "${azurerm_resource_group.centralus-rg.location}" resource_group_name = "${azurerm_resource_group.centralus-rg.name}" app_service_plan_id = "${azurerm_app_service_plan.centralus-app-service-plan.id}" identity { type = "SystemAssigned" } } data "azurerm_key_vault" "cert" { name = "${var.key-vault-name}" resource_group_name = "${var.key-vault-rg}" }
resource "azurerm_key_vault_access_policy" "centralus" { key_vault_id = "${data.azurerm_key_vault.cert.id}" tenant_id = "${azurerm_app_service.centralus-app-service.identity.0.tenant_id}" object_id = "${azurerm_app_service.centralus-app-service.identity.0.principal_id}" secret_permissions = [ "get" ] certificate_permissions = [ "get" ] }
resource "azurerm_app_service_certificate" "centralus" { name = "${local.full_service_name}-cert" resource_group_name = "${azurerm_resource_group.centralus-rg.name}" location = "${azurerm_resource_group.centralus-rg.location}" key_vault_secret_id = "${var.key-vault-secret-id}" depends_on = [azurerm_key_vault_access_policy.centralus] }
and these are the relevant values I was passing into the module:
key-vault-secret-id = "https://example-keyvault.vault.azure.net/secrets/cert/0d599f0ec05c3bda8c3b8a68c32a1b47"
key-vault-rg        = "example-keyvault"
key-vault-name      = "example-keyvault"
But no matter what I did, I kept bumping up against this error:
Error: Error creating/updating App Service Certificate "example-app-dev-cert" (Resource Group "example-app-centralus-rg-dev"): web.CertificatesClient#CreateOrUpdate: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="The service does not have access to '/subscriptions/[SUBSCRIPTIONID]/resourcegroups/example-keyvault/providers/microsoft.keyvault/vaults/example-keyvault' Key Vault. Please make sure that you have granted necessary permissions to the service to perform the request operation." Details=[{"Message":"The service does not have access to '/subscriptions/[SUBSCRIPTIONID]/resourcegroups/example-keyvault/providers/microsoft.keyvault/vaults/example-keyvault' Key Vault. Please make sure that you have granted necessary permissions to the service to perform the request operation."},{"Code":"BadRequest"},{"ErrorEntity":{"Code":"BadRequest","ExtendedCode":"59716","Message":"The service does not have access to '/subscriptions/[SUBSCRIPTIONID]/resourcegroups/example-keyvault/providers/microsoft.keyvault/vaults/example-keyvault' Key Vault. Please make sure that you have granted necessary permissions to the service to perform the request operation.","MessageTemplate":"The service does not have access to '{0}' Key Vault. Please make sure that you have granted necessary permissions to the service to perform the request operation.","Parameters":["/subscriptions/[SUBSCRIPTIONID]/resourcegroups/example-keyvault/providers/microsoft.keyvault/vaults/example-keyvault"]}}]
I checked and re-checked and triple-checked and had colleagues check, but no matter what I did, it kept puking with this permissions issue. I confirmed that the App Service’s identity was being provided and saved, but nothing seemed to work.
Then I found this blog post from 2016 talking about a magic Service Principal (or more specifically, a Resource Principal) that requires access to the Key Vault too. All I did was add the following resource with the magic SP, and everything worked perfectly.
resource "azurerm_key_vault_access_policy" "azure-app-service" { key_vault_id = "${data.azurerm_key_vault.cert.id}" tenant_id = "${azurerm_app_service.centralus-app-service.identity.0.tenant_id}" # This object is the Microsoft Azure Web App Service magic SP # as per https://azure.github.io/AppService/2016/05/24/Deploying-Azure-Web-App-Certificate-through-Key-Vault.html object_id = "abfa0a7c-a6b6-4736-8310-5855508787cd" secret_permissions = [ "get" ] certificate_permissions = [ "get" ] }
It’s frustrating that Microsoft hasn’t documented this piece (at least officially), but hopefully with this knowledge, you’ll be able to automate using a certificate stored in Key Vault in your next Azure App Service.
You may find yourself in a position where a resource already exists in your cloud environment but was created in the respective provider’s GUI rather than in Terraform. You may feel a bit overwhelmed at first, but there are a few ways to generate Terraform files for existing resources, and we’re going to talk about the various ways today. This is also not an exhaustive list; if you have any other suggestions, please leave a comment and I’ll be sure to update this post.
Be warned, the manual method takes a little more time, but is not restricted to certain resource types. I prefer this method because it means that you’ll be able to see every setting that is already set on your resource with your own two eyes, which is good for sanity checking.
First, you’re going to want to create a .tf file with just the outline of the resource type you’re trying to import or generate.
For example, if I wanted to create the Terraform for a resource group called example-resource-group that had several tags attached to it, I would do:
resource "azurerm_resource_group" "example-resource-group" {
}
and then save it.
Next, I would go to the Azure GUI, find and open the resource group, and then open the ‘Properties’ section from the blade.
I would look for the Resource ID, for example /subscriptions/54ba8d50-7332-4f23-88fe-f88221f75bb3/resourceGroups/example-resource-group, and copy it.
I would then open up a command prompt / terminal and import the state by running: terraform import azurerm_resource_group.example-resource-group /subscriptions/54ba8d50-7332-4f23-88fe-f88221f75bb3/resourceGroups/example-resource-group
Finally, and this is the crucial part, I would immediately run terraform plan. There may be required fields that you will need to fill out before this command works, but in general, this will compare the existing state that you just imported to the blank resource in the .tf file and show you all of the differences, which you can then copy into your new Terraform file, confident that you have imported all of the settings.
Example:
# azurerm_resource_group.example-resource-group will be updated in-place
~ resource "azurerm_resource_group" "example-resource-group" {
      id       = "/subscriptions/54ba8d50-7332-4f23-88fe-f88221f75bb3/resourceGroups/example-resource-group"
      location = "centralus"
      name     = "example-resource-group"
    ~ tags     = {
        ~ "environment" = "dev" -> null
        ~ "owner"       = "example.person" -> null
        ~ "product"     = "internal" -> null
      }
  }
A shortcut I’ve found is to just copy the entire resource section, replace all of the tildes (~) with spaces, and then find and remove all instances of -> null.
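If you want to script that cleanup instead of doing it by hand, a rough sketch of a one-liner that strips the tilde markers and the -> null hints from the plan output is:
terraform plan -no-color | sed -e 's/^\( *\)~/\1 /' -e 's/ -> null//'
You’ll still want to trim the plan header and footer lines before pasting the resource block into your .tf file.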
Andy Thomas (Microsoft employee) put together a tool called Az2tf which iterates over your entire subscription and generates .tf files for most of the common types of resources, and he’s adding more all the time. Requesting a specific resource type is as simple as opening an issue and explaining which resource is missing. In my experience, he’s responded within a few hours with a solution.
Daisuke Fujita put together a tool called Terraforming that with a little bit of scripting can generate Terraform files for all of your AWS resources.
Cloudflare put together a fantastic tool called cf-terraforming which rips through your Cloudflare tenant and generates .tf files for everything Cloudflare related. The great thing about cf-terraforming is that because it’s written by the vendor of the original product, they treat it as a first class citizen and keep it very up-to-date with any new resources they themselves add to their product. I wish all vendors would do this.
To sum things up, there are plenty of ways to generate Terraform files for existing resources. Some are more time consuming than others, but they all have the goal of making your environment less brittle and your processes more repeatable, which will save time, money, and most importantly stress, when an inevitable incident takes place.
Do you know of any other tools for these or other providers that can assist in bringing previously unmanaged resources under Terraform management? Leave a comment and we’ll add them to this page as soon as possible!
Today, I found myself in need of an automated SFTP connection that would reach out to one of our partners, download a file, and then dump it into a Data Lake for further processing. This meant that I would need to store a private key in Azure Key Vault for use in a Logic App. While this was mainly a straightforward process, there was a small hiccup that we encountered and wanted to pass along.
First, we went ahead and generated a public/private key pair using:
ssh-keygen -t rsa -b 4096
where rsa is the algorithm and 4096 is the length of the key in bits. We avoided the ed25519 and ecdsa algorithms as our partner does not support elliptic-curve cryptography. As this command was run on a Mac laptop which already has its own ~/.ssh/id_rsa[.pub] key pair, we chose a new filename and location (/tmp/sftp) to temporarily store this new pair.
The problem arose when we tried to insert the private key data into Key Vault as a secret: the Azure portal does not support multi-line secret entry, resulting in a non-standard and ultimately broken key entry.
The solution was to use the Azure CLI to upload the contents of the private key by doing:
az keyvault secret set --vault-name sftp-keyvault -n private-key -f '/tmp/sftp'
This uploaded the file correctly to the secret titled private-key, which means that we can now add a Key Vault action in our Logic App to pull the secret, without having to leave the key in plain view, and then use it as the data source for the private key field in the SFTP - Copy File action.
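If you want to double-check that the multi-line value survived the upload intact, you can read it back with the CLI (same vault and secret names as above):
az keyvault secret show --vault-name sftp-keyvault --name private-key --query value --output tsv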
As an aside, we also created a new secret called public-key and uploaded a copy of sftp.pub, just so that six months from now, if we need to recall a copy of it to send to another partner, it’s there for us to grab.
At our office, we’ve been using Docker containers deployed to Azure App Service for Containers for all of our microservices, but after a few incidents in the past couple of weeks, we’ve decided to look into managing our own Kubernetes cluster. We quickly found out, however, that integrating Azure Kubernetes Service and Azure Container Registry took a little bit of tweaking.
The main issue is that having AKS authenticate against ACR requires setting up a service principal and then instructing AKS to use that SP. It’s not hard to set up, but there are a few steps and I wanted to bring them all to one place to make things easier for you in the future.
I’m going to describe this process from the perspective of someone who already has a container registry and a Kubernetes cluster stood up and just needs to tie the two together. There are plenty of tutorials on how to stand up each of those services on their own, so I’ll leave it as an exercise for the reader to get to this point.
You should also be aware that the AKS cluster and the ACR must be in the same subscription for this process to work.
Step 1: Create a service principal with access to your container registry
#!/bin/bash
echo -n "Ensure you are logged into Azure CLI before continuing and then press [ENTER]"
read UNUSEDVARIABLE
echo -n ""
echo -n "Enter the name of the container registry (*without* the .azurecr.io) and press [ENTER]: "
read ACR_NAME
echo -n "Enter the name of the service prinicpal you would like to create (e.g. test-sp-dev) and press [ENTER]: "
read SERVICE_PRINCIPAL_NAME
# Populate the ACR login server and resource id.
ACR_LOGIN_SERVER=$(az acr show --name $ACR_NAME --query loginServer --output tsv)
ACR_REGISTRY_ID=$(az acr show --name $ACR_NAME --query id --output tsv)
# Create acrpull role assignment with a scope of the ACR resource.
SP_PASSWD=$(az ad sp create-for-rbac --name http://$SERVICE_PRINCIPAL_NAME --role acrpull --scopes $ACR_REGISTRY_ID --query password --output tsv)
# Get the service principal client id.
CLIENT_ID=$(az ad sp show --id http://$SERVICE_PRINCIPAL_NAME --query appId --output tsv)
# Output used when creating Kubernetes secret.
echo "Service principal ID: $CLIENT_ID"
echo "Service principal password: $SP_PASSWD"
You will want to make note of this ID and password combination for future troubleshooting, as well as for the cases where you cannot complete step 4 of this post in the same terminal session.
Step 2: Install kubectl
az aks install-cli
NOTE: You may need to run this command using sudo if you are on a Linux/macOS/BSD computer.
Step 3: Authenticate to your AKS cluster
Let’s assume that I have a Kubernetes cluster called nexxai-k8s-dev in a resource group also called nexxai-k8s-dev. I would run:
az aks get-credentials --resource-group nexxai-k8s-dev --name nexxai-k8s-dev
kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard
NOTE: The kubectl command just solves a permissions issue that appears if you try to access the Kubernetes dashboard using az aks browse --resource-group nexxai-k8s-dev --name nexxai-k8s-dev. If you plan on doing things only through the CLI, I’m not sure if this step is required and you may be able to skip it.
Step 4: Create a Kubernetes secret with the service principal’s credentials
Let’s assume that I have a container registry called nexxai-acr-dev and I want to name the secret acr-auth. I would run:
kubectl create secret docker-registry acr-auth --docker-server nexxai-acr-dev.azurecr.io --docker-username $CLIENT_ID --docker-password $SP_PASSWD --docker-email [email protected]
This command assumes you are working in the same terminal as when you executed the bash script in step 1, and uses its variables. If you are not working in the same terminal session as in step 1, simply substitute $CLIENT_ID and $SP_PASSWD with their actual values.
At this point your new Kubernetes cluster is ready to talk to your container registry!
Now all you need to remember is that when you go to create a deployment, you need to provide the entire ACR address (including the .azurecr.io part of the domain) for the image setting, and you’ll need to define the imagePullSecrets block and set its name property to acr-auth in your YAML file, like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app-dev-deploy
  labels:
    app: demo-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: nexxai-acr-dev.azurecr.io/demo-app:latest
        ports:
        - containerPort: 4000
      imagePullSecrets:
      - name: acr-auth
And finally, by exposing the deployment using the command below, you’ll have a fully functional application that lives in Azure Kubernetes Service and uses Azure Container Registry to host its code.
kubectl expose deployment demo-app-dev-deploy --type=LoadBalancer --name=demo-app-dev
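It can take a minute or two for the LoadBalancer to be assigned a public IP; you can watch for it with:
kubectl get service demo-app-dev --watch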
Congratulations!
Today, after spending several hours swearing and researching how to import a publicly-issued certificate into Azure Key Vault, I thought I’d share the entire process of how we did it from start to finish so that you can save yourself a bunch of time and get back to working on fun stuff, like spamming your co-workers with Cat Facts. We learned a bunch about the different encoding formats of certificates and some of their restrictions, both within Azure Key Vault as well as with the certificate types themselves. Let’s get started!
Initially, we created an elliptic curve-derived (EC) private key (using elliptic curve prime256v1), and a CSR by doing:
openssl ecparam -out privatekey.key -name prime256v1 -genkey
openssl req -new -key privatekey.key -out request.csr -sha256
making sure to not include an email address or password. I am not actually clear on what the technical reasoning behind this is, but I saw it noted on several sites.
We submitted the CSR to our certificate authority (CA) and shortly thereafter got back a signed PEM file.
We next needed to create a single PFX/PKCS12-formatted, password-protected certificate, so we grabbed our signed certificate (ServerCertificate.crt) and our CA’s intermediate certificate chain (Chain.crt) and then did:
openssl pkcs12 -export -inkey privatekey.key -in ServerCertificate.crt -certfile Chain.crt -out Certificate.pfx
But when we went to import it into the Key Vault with the correct password, it threw a general “We don’t like this certificate” error. The first thing we did was check out the provided link and saw that we could import PEM-formatted certificates directly. I didn’t remember this being the case in the past, so maybe this is a new feature?
No problem. We concatenated the certificate and key files into a single, large text file (cat ServerCertificate.crt >> concat.crt ; cat privatekey.key >> concat.crt), creating a file called concat.crt which consisted of the -----BEGIN CERTIFICATE----- / -----END CERTIFICATE----- section from the ServerCertificate.crt file as well as the -----BEGIN EC PARAMETERS----- / -----END EC PARAMETERS----- and -----BEGIN EC PRIVATE KEY----- / -----END EC PRIVATE KEY----- sections from the privatekey.key file.
We went to upload concat.crt to the Key Vault and again were given the same error as before. However, after re-reading the document, we were disappointed to see this quote:
We currently don’t support EC keys in PEM format.
Section: Formats of Import we support
It surprises me that Microsoft does not support elliptic curve-based keys in PEM format. I am not aware of any technical limitation on the part of the certificate itself, so this seems very much like a Microsoft-specific thing; however, if anyone is able to provide insight into this, I’d love to hear it.
OK, we’ll generate a 2048-bit RSA-derived key and CSR, and then try again.
openssl genrsa -des3 -out rsaprivate.key 2048
openssl req -new -key rsaprivate.key -out RSA.csr
We uploaded the CSR to the CA as a re-key request, and waited.
When the certificate was finally issued (as cert.pem), we could now take the final steps to prepare it for upload to the Key Vault. We concatenated the key and certificate together (cat rsaprivate.key >> rsacert.crt ; cat cert.pem >> rsacert.crt) and went to upload it to the Key Vault.
And yet again, it failed. After a bunch of research on security blogs and StackOverflow, it turned out that the default output format of the private key is PKCS1, while Key Vault expects it to be in PKCS8 format. So, time to convert it.
openssl pkcs8 -topk8 -inform PEM -outform PEM -nocrypt -in rsaprivate.key -out rsaprivate8.key
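A quick way to confirm the conversion worked: a PKCS1 key begins with “BEGIN RSA PRIVATE KEY”, while the converted PKCS8 key should now begin with “BEGIN PRIVATE KEY”.
head -n 1 rsaprivate8.key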
Finally, we re-concatenated the rsaprivate8.key and cert.pem files into a single rsacert8.crt file (cat rsaprivate8.key >> rsacert8.crt ; cat cert.pem >> rsacert8.crt), which we could import into Key Vault.
It worked!
We now have our SSL certificate in our HSM-backed Azure Key Vault that we can apply to our various web properties without having to store the actual certificate files anywhere, which makes our auditors very happy.
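As a closing aside, if you’d rather script the import than click through the portal, the Azure CLI can push the same concatenated PEM file into the vault; here’s a sketch with hypothetical vault and certificate names:
az keyvault certificate import \
  --vault-name example-keyvault \
  --name example-cert \
  --file rsacert8.crt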