Integrating Azure Kubernetes Service and Azure Container Registry

At our office, we’ve been running all of our microservices as Docker containers deployed to Azure App Service for Containers, but after a few incidents in the past couple of weeks, we decided to look into managing our own Kubernetes cluster. We quickly found out that integrating Azure Kubernetes Service (AKS) and Azure Container Registry (ACR) took a little bit of tweaking.

The main issue is that having AKS authenticate against ACR requires creating a service principal and then instructing AKS to use it. It’s not hard to set up, but there are a few steps involved, and I wanted to bring them all together in one place to make things easier for you in the future.

I’m going to describe this process from the perspective of someone who already has a container registry and a Kubernetes cluster stood up and just needs to tie the two together. There are plenty of tutorials on how to stand up each of those services, so I’ll leave it as an exercise for the reader to get to this point.

You should also be aware that the AKS cluster and the ACR must be in the same subscription for this process to work.
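
If you want to double-check that, one quick way (a sketch, using the example resource names from later in this post) is to compare the subscription segment of each resource ID, which is the third /-delimited field:

az acr show --name nexxai-acr-dev --query id --output tsv | cut -d/ -f3
az aks show --resource-group nexxai-k8s-dev --name nexxai-k8s-dev --query id --output tsv | cut -d/ -f3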

Step 1: Create the service principal using the following BASH script

#!/bin/bash

echo -n "Ensure you are logged into the Azure CLI before continuing, then press [ENTER]"
read -r UNUSEDVARIABLE
echo ""
echo -n "Enter the name of the container registry (*without* the .azurecr.io) and press [ENTER]: "
read -r ACR_NAME
echo -n "Enter the name of the service principal you would like to create (e.g. test-sp-dev) and press [ENTER]: "
read -r SERVICE_PRINCIPAL_NAME

# Populate the ACR login server and resource ID.

ACR_LOGIN_SERVER=$(az acr show --name "$ACR_NAME" --query loginServer --output tsv)
ACR_REGISTRY_ID=$(az acr show --name "$ACR_NAME" --query id --output tsv)

# Create the service principal with an acrpull role assignment scoped to the ACR resource.
# NOTE: older versions of the Azure CLI expect the service principal name to be
# prefixed with http://; newer versions also accept the bare name.

SP_PASSWD=$(az ad sp create-for-rbac --name "http://$SERVICE_PRINCIPAL_NAME" --role acrpull --scopes "$ACR_REGISTRY_ID" --query password --output tsv)

# Get the service principal's client ID.

CLIENT_ID=$(az ad sp show --id "http://$SERVICE_PRINCIPAL_NAME" --query appId --output tsv)

# Output used when creating the Kubernetes secret in step 4.

echo "ACR login server: $ACR_LOGIN_SERVER"
echo "Service principal ID: $CLIENT_ID"
echo "Service principal password: $SP_PASSWD"

You will want to make note of this ID and password combination, both for future troubleshooting and for the case where you cannot complete step 4 of this post in the same terminal session.
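
If you do lose them, the ID can be looked up again, but the password cannot be retrieved, only reset. A quick sketch, assuming the example service principal name from the prompt above (and noting that newer versions of the Azure CLI may want the bare name or appId instead of the http:// form):

az ad sp list --display-name test-sp-dev --query "[].appId" --output tsv
az ad sp credential reset --name http://test-sp-dev --query password --output tsv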

Step 2: Install kubectl using the Azure CLI

az aks install-cli

NOTE: You may need to run this command using sudo if you are on a Linux/macOS/BSD machine
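
You can confirm the install worked by checking the client version:

kubectl version --client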

Step 3: Authenticate to your AKS cluster

Let’s assume that I have a Kubernetes cluster called nexxai-k8s-dev in a resource group also called nexxai-k8s-dev. I would run:

az aks get-credentials --resource-group nexxai-k8s-dev --name nexxai-k8s-dev
kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard

NOTE: The kubectl command just solves a permissions issue that appears if you try to access the Kubernetes dashboard using az aks browse --resource-group nexxai-k8s-dev --name nexxai-k8s-dev. If you plan on doing things only through the CLI, I’m not sure if this step is required and you may be able to skip it.
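
Either way, you can confirm that you are authenticated to the cluster by listing its nodes:

kubectl get nodes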

Step 4: Tell Kubernetes to use that Service Principal

Let’s assume that I have a container registry called nexxai-acr-dev and I want to name the secret acr-auth. I would run:

kubectl create secret docker-registry acr-auth --docker-server nexxai-acr-dev.azurecr.io --docker-username $CLIENT_ID --docker-password $SP_PASSWD --docker-email [email protected]

This command assumes you are working in the same terminal session as when you executed the BASH script in step 1, and uses the variables that script set. If you are not, simply substitute $CLIENT_ID and $SP_PASSWD with their actual values.
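
You can confirm the secret exists (without printing its contents) with:

kubectl describe secret acr-auth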

Step 5: Profit!

At this point your new Kubernetes cluster is ready to talk to your container registry!

Now all you need to remember is that when you go to create a deployment, you need to provide the entire ACR address (including the .azurecr.io part of the domain) for the image setting, and you’ll need to define the imagePullSecrets block and set its name property to acr-auth in your YAML file like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app-dev-deploy
  labels:
    app: demo-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: nexxai-acr-dev.azurecr.io/demo-app:latest
        ports:
        - containerPort: 4000
      imagePullSecrets:
      - name: acr-auth
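
Assuming that manifest is saved as deployment.yaml, you would apply it and watch the rollout with:

kubectl apply -f deployment.yaml
kubectl rollout status deployment/demo-app-dev-deploy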

And finally, by exposing the deployment using the command below, you’ll have a fully functional application that lives in Azure Kubernetes Service and uses Azure Container Registry to host its code.

kubectl expose deployment demo-app-dev-deploy --type=LoadBalancer --name=demo-app-dev
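
The LoadBalancer service can take a few minutes to be assigned a public IP by Azure; you can watch for it with:

kubectl get service demo-app-dev --watch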

Congratulations!

How to import a publicly-issued certificate into Azure Key Vault

Today, after spending several hours swearing and researching how to import a publicly-issued certificate into Azure Key Vault, I thought I’d share the entire process of how we did it from start to finish so that you can save yourself a bunch of time and get back to working on fun stuff, like spamming your co-workers with Cat Facts. We learned a bunch about the different encoding formats of certificates and some of their restrictions, both within Azure Key Vault as well as with the certificate types themselves. Let’s get started!

Initially, we created an elliptic curve-derived (EC) private key (using elliptic curve prime256v1), and a CSR by doing:

openssl ecparam -out privatekey.key -name prime256v1 -genkey
openssl req -new -key privatekey.key -out request.csr -sha256

making sure not to include an email address or a challenge password. I’m not actually clear on the technical reasoning behind this, but I saw it noted on several sites.
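
Before submitting the CSR, it’s worth sanity-checking its contents:

openssl req -in request.csr -noout -text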

We submitted the CSR to our certificate authority (CA) and shortly thereafter got back a signed PEM file.

We next needed to create a single PFX/PKCS12-formatted, password-protected certificate, so we grabbed our signed certificate (ServerCertificate.crt) and our CA’s intermediate certificate chain (Chain.crt) and then did:

openssl pkcs12 -export -inkey privatekey.key -in ServerCertificate.crt -certfile Chain.crt -out Certificate.pfx
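
You can inspect the resulting bundle (you’ll be prompted for the export password) to confirm that both the key and the chain made it in:

openssl pkcs12 -info -in Certificate.pfx -noout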

But when we went to import it into the Key Vault with the correct password, it threw a general “We don’t like this certificate” error. The first thing we did was check out the provided link and saw that we could import PEM-formatted certificates directly. I didn’t remember this being the case in the past, so maybe this is a new feature?

No problem. We concatenated the certificate and key files into a single, large text file (cat ServerCertificate.crt privatekey.key > concat.crt — note that cat is needed here; echo would just write the literal filenames). The resulting concat.crt consists of the

-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----

section from the ServerCertificate.crt file as well as the

-----BEGIN EC PARAMETERS-----
-----END EC PARAMETERS-----

and

-----BEGIN EC PRIVATE KEY-----
-----END EC PRIVATE KEY-----

sections from the privatekey.key file.
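
As an aside, a useful sanity check at this point is to confirm that the private key actually matches the issued certificate by comparing their public keys; for an EC key, the two digests below should be identical:

openssl x509 -in ServerCertificate.crt -noout -pubkey | openssl sha256
openssl ec -in privatekey.key -pubout | openssl sha256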

We went to upload concat.crt to the Key Vault and were given the same error as before. However, after re-reading the document, we were disappointed to see this quote:

We currently don’t support EC keys in PEM format.

Section: Formats of Import we support

It surprises me that Microsoft does not support elliptic curve-based keys in PEM format. I am not aware of any technical limitation on the certificate side, so this seems very much like a Microsoft-specific thing; however, if anyone is able to provide insight into this, I’d love to hear it.

OK, we’ll generate a 2048-bit RSA-derived key and CSR, and then try again.

openssl genrsa -des3 -out rsaprivate.key 2048
openssl req -new -key rsaprivate.key -out RSA.csr

We uploaded the CSR to the CA as a re-key request, and waited.

When the certificate was finally issued (as cert.pem), we could now take the final steps to prepare it for upload to the Key Vault. We concatenated the key and certificate together (cat rsaprivate.key cert.pem > rsacert.crt) and went to upload it to the Key Vault.

And yet again, it failed. After a bunch of research on security blogs and StackOverflow, it turned out that the default output format of the private key is PKCS1, while Key Vault expects PKCS8. So it was time to convert it.

openssl pkcs8 -topk8 -inform PEM -outform PEM -nocrypt -in rsaprivate.key -out rsaprivate8.key
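
You can tell the two formats apart from the first line of the PEM file: a PKCS1 RSA key begins with -----BEGIN RSA PRIVATE KEY-----, while a PKCS8 key begins with -----BEGIN PRIVATE KEY-----. A quick check:

head -n 1 rsaprivate.key rsaprivate8.key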

Finally, we re-concatenated the rsaprivate8.key and cert.pem files into a single rsacert8.crt file (cat rsaprivate8.key cert.pem > rsacert8.crt), which we could import into Key Vault.

It worked!

We now have our SSL certificate in our HSM-backed Azure Key Vault that we can apply to our various web properties without having to store the actual certificate files anywhere, which makes our auditors very happy.
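
If you prefer the CLI over the portal for the import itself, the Azure CLI can do it too (a sketch; the vault and certificate names here are made up):

az keyvault certificate import --vault-name nexxai-kv-dev --name www-example-com --file rsacert8.crt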

Azure Logic Apps and SQL Injection

Michael Howard of Microsoft put out a great post about how easy it is to inadvertently create massive security holes in the form of SQL injection vulnerabilities in your HTTP-accessible Azure Logic App by not using the ‘Execute a SQL Query’ action correctly. He also gives some simple examples of how to protect yourself in the process.

To summarize: if you are not using prepared statements or stored procedures, it is trivial for an attacker to construct a query that does anything from truncating or dropping tables, to changing data within the database, to gaining full remote command execution via something like SQL Server’s xp_cmdshell.
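
To make that concrete, here is a generic sketch (mine, not taken from Michael’s post). If a Logic App splices an HTTP trigger value straight into the query text:

SELECT * FROM Customers WHERE Name = '@{triggerBody()?['name']}'

then a request where name is set to '; DROP TABLE Customers; -- turns that into a destructive statement. The safe pattern is to write the query with a placeholder:

SELECT * FROM Customers WHERE Name = @name

and supply @name through the action’s formal parameters (or call a stored procedure), so user input is never interpolated into the SQL string itself.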

Please be extremely careful when you’re building your Logic Apps – they may be simple to build but that also means it’s just as simple to make a glaring security mistake that could cost your business time and money.
