We ran into an issue this morning where we needed to renew our Azure DevOps Service Connection’s expired secret, but there is no officially supported way to do this. The error was AADSTS7000215 - invalid client id or secret. Thankfully, it’s not that difficult to solve.

Fake a change

Open your project in ADO (https://dev.azure.com/[GROUP]/[PROJECT]) […]

Now, if you ever need to renew an Azure DevOps Service Connection’s expired secret, hopefully you can avoid wasting precious time trying to figure out how to do it manually, and can just trick the system into doing it for you.
Your small business website is likely an essential part of your marketing strategy. It may also be your e-commerce sales channel or the platform you deliver your software on. In short, you need to keep your small business website safe. However, you likely can’t afford the same cybersecurity services as the big guys. Fortunately, there is a lot you can do yourself. This quick guide from nexxai.dev can help you figure out what you need to do.
The various login credentials you use for your website are one of your most important lines of defense. Make sure you are using long, strong passwords for any accounts. Additionally, at a minimum, all accounts with administrator access should be using either two-factor authentication or SSH keys. This may seem like a lot of trouble, but it is worth it.
Additionally, you should be very cautious about who has access to your website. If you need to give access to employees or freelancers, only give them the permissions they need. For example, if someone is just posting blogs, he or she doesn’t need administrator access.
Secure Sockets Layer (SSL) is a technology used to encrypt data between web browsers and website servers. It is a must-have technology for any small business website.
First, it will ensure that no one can snoop on the traffic between your visitors and your website. This includes when you are trying to log into your website’s back end from your own computer.
Second, many browsers are all but requiring HTTPS connections (achieved using SSL). It makes your website more secure, more professional-looking, and in compliance with the latest best practices. In short, you need to use this technology. According to the University of Michigan, around 80 percent of websites use HTTPS. If you aren’t, you are falling behind.
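If you manage your own server, enabling HTTPS means installing a certificate from a certificate authority. As a rough sketch, assuming an nginx server and the free Let’s Encrypt CA via the certbot tool, it can be as simple as:

sudo certbot --nginx -d www.example.com

Many hosting providers also offer one-click SSL, so check with your host first.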
You are hopefully already backing up your business data regularly. You should be doing the same with your website content. Anything that you have on your website should be backed up fairly regularly. If you post a lot of new content or capture customer data through your site, consider daily or even hourly backups. If not, you may be able to do weekly backups.
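If your site is self-hosted, even a simple scheduled archive job is better than nothing. Here is a minimal sketch; the paths are assumptions, so adjust them to wherever your site and backups actually live:

# Archive the web root with today's date; schedule with cron or similar
tar -czf /backups/site-$(date +%F).tar.gz /var/www/html

Managed hosts and content management systems usually offer backup plugins or built-in scheduled backups, which work just as well.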
There are a lot of options when setting up a website, especially if you manage your own server or content management system. It is a good idea to get someone to help you set it up. This will help you to ensure that your website complies with all the latest security best practices. Even seemingly unrelated errors can cause significant vulnerabilities. Don’t risk your website or your business’s financial well-being. Consider hiring a freelancer. When you are considering an individual, look at his or her reviews from other customers. Also, make sure you have clear expectations about cost and delivery time.
Finally, remember to use malware protection with your website hosting service. If you are renting or setting up a server on your own, you should install the appropriate anti-malware software – and keep it updated. Additionally, you will want a firewall (ideally a stand-alone network firewall). If you are using a shared hosting service, learn about your host’s security practices. Never use a host that doesn’t have a well-defined security plan.
Discover more today about keeping your small business website safe. With a few best practices and the right help, you can ensure that your website is safe from cyberattacks.
Cody McBride’s love for computers stems from high school when he built his own computer. Today he is a trained IT technician and knows how the inner workings of computers can be confusing to most. He is the creator of TechDeck.info where he offers easy-to-understand tech related advice and troubleshooting tips.
Over the last couple weeks, I’ve had a number of conversations with people on our product and delivery teams about public key authentication due to conversations they’ve had to have with some of our vendors. After having to explain public key authentication to non-techies several times, I figured it might be useful to post something public in case it helps anyone else.
So what is public key authentication and how does it work?
At its base, public key authentication is a secure way for a user or client to connect to a service via SSH without having to send a password across the wire. Passwords can be intercepted, so the fewer times we have to send a password across an untrusted network, the better. Before the first connection is ever established, the client generates a public/private key pair and then sends their public key to the server they wish to connect to, while keeping their private key private.
The math behind why sharing this public key in the open isn’t a problem is a bit complicated (though it hardly goes beyond grade 12 math), but it’s a lot easier to understand when you compare it to a real-world scenario.
Let’s say you have a friend (vendor) and they have a shed (server) that you want to access and leave a batch of freshly baked cookies (files) for on a regular basis when they’re not home. You think about it for a while and come up with a solution: “Friend, please install a separate door (username) on your shed that only provides access to a small section of it (your home folder)”. Next, you go to the hardware store and buy a deadbolt lock (public key) that comes with a metal key (private key) that you keep on your keychain. Finally, you send this deadbolt lock to your friend to install on the door, while never showing them the key that opens the lock.
You can now come and go as you please, leaving your friend freshly baked cookies every few days, and since your key never left your possession, you can be sure that no one else is sneaking inside the shed to steal the cookies. Additionally, your friend never needs to see the key because they have access to the whole shed anyway, and you can be certain that no one else will ever be able to open the lock because they don’t have your key.
This is public key authentication in a nutshell.
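For the hands-on readers, here is what the shed-and-lock arrangement looks like in practice. This is a minimal sketch; the file name, user, and host are placeholders:

# Generate the lock (public key) and the key (private key)
ssh-keygen -t rsa -b 4096 -f ~/.ssh/vendor_key

# Send the lock to your friend to install on the door
# (appends vendor_key.pub to the server's authorized_keys)
ssh-copy-id -i ~/.ssh/vendor_key.pub [email protected]

# Come and go as you please; no password ever crosses the wire
ssh -i ~/.ssh/vendor_key [email protected]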
One added benefit that I haven’t really touched on here is that public key authentication – when properly created and protected – is one of the most secure ways to offer access to a system. Due to the math involved, it would take an adversary many times longer than the entire universe’s existence (that is not an exaggeration) to break your private key. Basically, you can assume that if you do it right, it’s going to be secure.
I hope this made public key authentication a little less confusing and daunting for the non-techies who read this site, but if not, Khan Academy has a great video that goes into more depth using paint and color mixing. And if that still doesn’t answer your question, please leave a comment and I’d be happy to go into more detail.
We recently ran into an issue setting up a new DNS entry on Cloudflare using the orange-cloud (reverse proxying) feature: we were receiving Error 520 and were curious what was wrong and how to fix it. The error page itself doesn’t give a lot of information, and since it’s a custom error Cloudflare created, it wasn’t easy to find out, or even intuit, what it might mean.
To give some backstory, we are using a SaaS provider of a service for our employees that we want to protect behind our own domain. For example, instead of using ourcompany.saascompany.com, we wanted to use something like saasservicename.ourcompany.com. The provider supported this and so we set up the record within Cloudflare but as soon as we tried to visit the page, we received Cloudflare’s infamous 520 error: “Web server is returning an unknown error”.
After trying to troubleshoot the problem through Cloudflare, we turned off the orange-cloud and figured out that the SaaS provider hadn’t installed our TLS certificate correctly, so when Cloudflare attempted to retrieve our instance from their server, it received a NET::ERR_CERT_COMMON_NAME_INVALID error. In response, Cloudflare threw its own custom error 520 (it is not an official HTTP status code).
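If you want to check for this yourself before turning off the orange-cloud, you can ask the origin server directly for its certificate and see which names it actually covers. A rough sketch, using the placeholder hostnames from above (the -ext flag requires OpenSSL 1.1.1 or newer):

# Connect to the origin, presenting our vanity hostname via SNI,
# then print the certificate's subject and subject alternative names
openssl s_client -connect ourcompany.saascompany.com:443 \
  -servername saasservicename.ourcompany.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -ext subjectAltName

If the vanity hostname doesn’t appear in the output, the origin certificate is the problem.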
As soon as the vendor fixed the certificate issue, the 520 went away and we were able to re-enable orange-cloud, confirm that the site was up and working, and continue on with life confident that an attacker would not be able to determine who is providing the SaaS service for us.
For the last two days, I’ve been trying to deploy some new microservices in an Azure App Service, using a certificate stored in Key Vault. By now, you’ve probably figured out that we love them around here. I’ve also been slamming my head against the wall because of some not-well-documented functionality around granting permissions to the Key Vault.
As a quick primer, here are the basics of what I was trying to do:
resource "azurerm_app_service" "centralus-app-service" { name = "${var.service-name}-centralus-app-service-${var.environment_name}" location = "${azurerm_resource_group.centralus-rg.location}" resource_group_name = "${azurerm_resource_group.centralus-rg.name}" app_service_plan_id = "${azurerm_app_service_plan.centralus-app-service-plan.id}" identity { type = "SystemAssigned" } } data "azurerm_key_vault" "cert" { name = "${var.key-vault-name}" resource_group_name = "${var.key-vault-rg}" }
resource "azurerm_key_vault_access_policy" "centralus" { key_vault_id = "${data.azurerm_key_vault.cert.id}" tenant_id = "${azurerm_app_service.centralus-app-service.identity.0.tenant_id}" object_id = "${azurerm_app_service.centralus-app-service.identity.0.principal_id}" secret_permissions = [ "get" ] certificate_permissions = [ "get" ] }
resource "azurerm_app_service_certificate" "centralus" { name = "${local.full_service_name}-cert" resource_group_name = "${azurerm_resource_group.centralus-rg.name}" location = "${azurerm_resource_group.centralus-rg.location}" key_vault_secret_id = "${var.key-vault-secret-id}" depends_on = [azurerm_key_vault_access_policy.centralus] }
and these are the relevant values I was passing into the module:
key-vault-secret-id = "https://example-keyvault.vault.azure.net/secrets/cert/0d599f0ec05c3bda8c3b8a68c32a1b47"
key-vault-rg        = "example-keyvault"
key-vault-name      = "example-keyvault"
But no matter what I did, I kept bumping up against this error:
Error: Error creating/updating App Service Certificate "example-app-dev-cert" (Resource Group "example-app-centralus-rg-dev"): web.CertificatesClient#CreateOrUpdate: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="The service does not have access to '/subscriptions/[SUBSCRIPTIONID]/resourcegroups/example-keyvault/providers/microsoft.keyvault/vaults/example-keyvault' Key Vault. Please make sure that you have granted necessary permissions to the service to perform the request operation." Details=[{"Message":"The service does not have access to '/subscriptions/[SUBSCRIPTIONID]/resourcegroups/example-keyvault/providers/microsoft.keyvault/vaults/example-keyvault' Key Vault. Please make sure that you have granted necessary permissions to the service to perform the request operation."},{"Code":"BadRequest"},{"ErrorEntity":{"Code":"BadRequest","ExtendedCode":"59716","Message":"The service does not have access to '/subscriptions/[SUBSCRIPTIONID]/resourcegroups/example-keyvault/providers/microsoft.keyvault/vaults/example-keyvault' Key Vault. Please make sure that you have granted necessary permissions to the service to perform the request operation.","MessageTemplate":"The service does not have access to '{0}' Key Vault. Please make sure that you have granted necessary permissions to the service to perform the request operation.","Parameters":["/subscriptions/[SUBSCRIPTIONID]/resourcegroups/example-keyvault/providers/microsoft.keyvault/vaults/example-keyvault"]}}]
I checked and re-checked and triple-checked and had colleagues check, but no matter what I did, it kept puking with this permissions issue. I confirmed that the App Service’s identity was being provided and saved, but nothing seemed to work.
Then I found this blog post from 2016 talking about a magic Service Principal (or more specifically, a Resource Principal) that requires access to the Key Vault too. All I did was add the following resource with the magic SP, and everything worked perfectly.
resource "azurerm_key_vault_access_policy" "azure-app-service" { key_vault_id = "${data.azurerm_key_vault.cert.id}" tenant_id = "${azurerm_app_service.centralus-app-service.identity.0.tenant_id}" # This object is the Microsoft Azure Web App Service magic SP # as per https://azure.github.io/AppService/2016/05/24/Deploying-Azure-Web-App-Certificate-through-Key-Vault.html object_id = "abfa0a7c-a6b6-4736-8310-5855508787cd" secret_permissions = [ "get" ] certificate_permissions = [ "get" ] }
It’s frustrating that Microsoft hasn’t documented this piece (at least officially), but hopefully with this knowledge, you’ll be able to automate using a certificate stored in Key Vault in your next Azure App Service.
Today, I found myself in need of an automated SFTP connection that would reach out to one of our partners, download a file, and then dump it into a Data Lake for further processing. This meant that I would need to store a private key in Azure Key Vault for use in a Logic App. While this was a mostly straightforward process, there was a small hiccup that we encountered and wanted to pass along.
First, we went ahead and generated a public/private key pair using:
ssh-keygen -t rsa -b 4096
where rsa is the algorithm and 4096 is the length of the key in bits. We avoided the ed25519 and ecdsa algorithms as our partner does not support elliptic-curve cryptography. As this command was run on a Mac laptop which already has its own ~/.ssh/id_rsa[.pub] key pair, we chose a new filename and location /tmp/sftp to temporarily store this new pair.
The problem arose when we tried to insert the private key data into Key Vault as a secret: the Azure portal does not support multi-line secret entry, resulting in a non-standard and ultimately broken key entry.
The solution was to use the Azure CLI to upload the contents of the private key by doing:
az keyvault secret set --vault-name sftp-keyvault -n private-key -f '/tmp/sftp'
This uploaded the file correctly to the secret titled private-key, which means that we can now add a Key Vault action in our Logic App to pull the secret, without having to leave the key in plain view, and then use it as the data source for the private key field in the SFTP - Copy File action.
As an aside, we also created a new secret called public-key and uploaded a copy of sftp.pub, just so that 6 months from now, if we need to recall a copy of it to send to another partner, it’s there for us to grab.
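Assuming the same vault and file names as above, that second upload is just the earlier command pointed at the public half of the pair:

az keyvault secret set --vault-name sftp-keyvault -n public-key -f '/tmp/sftp.pub'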
At our office, we’ve been using Docker containers deployed to Azure App Service for Containers for all of our microservices, but after a few incidents in the past couple of weeks, we’ve decided to look into managing our own Kubernetes cluster. We quickly found out that integrating Azure Kubernetes Service and Azure Container Registry took a little bit of tweaking.
The main issue is that having AKS authenticate against ACR requires setting up a service principal and then instructing AKS to use that SP. It’s not hard to set up, but there are a few steps and I wanted to bring them all to one place to make things easier for you in the future.
I’m going to describe this process from the perspective of someone who already has a container registry and a Kubernetes cluster stood up and just needs to tie the two together. There are plenty of tutorials on how to stand each of those services up, so I’ll leave getting to this point as an exercise for the reader.
You should also be aware that the AKS cluster and the ACR must be in the same subscription for this process to work.

Step 1: Create a service principal with pull access to the container registry
#!/bin/bash
echo -n "Ensure you are logged into Azure CLI before continuing and then press [ENTER]"
read UNUSEDVARIABLE
echo -n ""
echo -n "Enter the name of the container registry (*without* the .azurecr.io) and press [ENTER]: "
read ACR_NAME
echo -n "Enter the name of the service prinicpal you would like to create (e.g. test-sp-dev) and press [ENTER]: "
read SERVICE_PRINCIPAL_NAME
# Populate the ACR login server and resource id.
ACR_LOGIN_SERVER=$(az acr show --name $ACR_NAME --query loginServer --output tsv)
ACR_REGISTRY_ID=$(az acr show --name $ACR_NAME --query id --output tsv)
# Create acrpull role assignment with a scope of the ACR resource.
SP_PASSWD=$(az ad sp create-for-rbac --name http://$SERVICE_PRINCIPAL_NAME --role acrpull --scopes $ACR_REGISTRY_ID --query password --output tsv)
# Get the service principal client id.
CLIENT_ID=$(az ad sp show --id http://$SERVICE_PRINCIPAL_NAME --query appId --output tsv)
# Output used when creating Kubernetes secret.
echo "Service principal ID: $CLIENT_ID"
echo "Service principal password: $SP_PASSWD"
You will want to make note of this ID and password combination for future troubleshooting, as well as for the cases where you cannot complete step 4 of this post in the same terminal session.
Step 2: Install the Kubernetes CLI

az aks install-cli

NOTE: You may need to run this command using sudo if you are on a Linux/macOS/BSD computer.
Step 3: Authenticate to your AKS cluster
Let’s assume that I have a Kubernetes cluster called nexxai-k8s-dev in a resource group also called nexxai-k8s-dev. I would run:
az aks get-credentials --resource-group nexxai-k8s-dev --name nexxai-k8s-dev
kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard
NOTE: The kubectl command just solves a permissions issue that appears if you try to access the Kubernetes dashboard using az aks browse --resource-group nexxai-k8s-dev --name nexxai-k8s-dev. If you plan on doing things only through the CLI, I’m not sure if this step is required and you may be able to skip it.
Step 4: Create a Kubernetes secret holding the service principal credentials

Let’s assume that I have a container registry called nexxai-acr-dev and I want to name the secret acr-auth. I would run:
kubectl create secret docker-registry acr-auth --docker-server nexxai-acr-dev.azurecr.io --docker-username $CLIENT_ID --docker-password $SP_PASSWD --docker-email [email protected]
This command assumes you are working in the same terminal as when you executed the BASH script in step 1, and reuses its variables. If you are not working in the same terminal session as in step 1, simply substitute $CLIENT_ID and $SP_PASSWD with their actual values.
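If you want to double-check that the secret landed correctly before wiring it into a deployment, kubectl can show it to you (the credentials are stored base64-encoded in the .dockerconfigjson field):

kubectl get secret acr-auth --output=yaml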
At this point your new Kubernetes cluster is ready to talk to your container registry!
Now all you need to remember is that when you go to create a deployment, you need to provide the entire ACR address (including the .azurecr.io part of the domain) for the image setting, and you’ll need to define the imagePullSecrets block and set its name property to acr-auth in your YAML file, like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app-dev-deploy
  labels:
    app: demo-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: nexxai-acr-dev.azurecr.io/demo-app:latest
        ports:
        - containerPort: 4000
      imagePullSecrets:
      - name: acr-auth
And finally, by exposing the deployment using the command below, you’ll have a fully functional application that lives in Azure Kubernetes Service and uses Azure Container Registry to host its images.
kubectl expose deployment demo-app-dev-deploy --type=LoadBalancer --name=demo-app-dev
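It can take a minute or two for Azure to provision the load balancer, so if you want to watch for the external IP address to appear, something like this works (the service name comes from the command above):

kubectl get service demo-app-dev --watch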
Congratulations!
Today, after spending several hours swearing and researching how to import a publicly-issued certificate into Azure Key Vault, I thought I’d share the entire process of how we did it from start to finish so that you can save yourself a bunch of time and get back to working on fun stuff, like spamming your co-workers with Cat Facts. We learned a bunch about the different encoding formats of certificates and some of their restrictions, both within Azure Key Vault as well as with the certificate types themselves. Let’s get started!
Initially, we created an elliptic curve-derived (EC) private key (using elliptic curve prime256v1), and a CSR by doing:
openssl ecparam -out privatekey.key -name prime256v1 -genkey
openssl req -new -key privatekey.key -out request.csr -sha256
making sure not to include an email address or password. I am not actually clear on the technical reasoning behind this, but I saw it noted on several sites.
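Before submitting a CSR to a CA, it doesn’t hurt to verify what is actually in it. A quick sanity check, using the file name from above:

openssl req -in request.csr -noout -text

The output should show the subject you entered along with a prime256v1 public key, and no email address in the subject line.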
We submitted the CSR to our certificate authority (CA) and shortly thereafter got back a signed PEM file.
We next needed to create a single PFX/PKCS12-formatted, password-protected certificate, so we grabbed our signed certificate (ServerCertificate.crt) and our CA’s intermediate certificate chain (Chain.crt) and then did:

openssl pkcs12 -export -inkey privatekey.key -in ServerCertificate.crt -certfile Chain.crt -out Certificate.pfx
But when we went to import it into the Key Vault with the correct password, it threw a generic “We don’t like this certificate” error. The first thing we did was check out the provided link, where we saw that we could import PEM-formatted certificates directly. I didn’t remember this being the case in the past, so maybe this is a new feature?
No problem. We concatenated the certificate and key files into a single, large text file (cat ServerCertificate.crt >> concat.crt ; cat privatekey.key >> concat.crt). This creates a file called concat.crt which consists of the -----BEGIN CERTIFICATE----- / -----END CERTIFICATE----- section from the ServerCertificate.crt file, as well as the -----BEGIN EC PARAMETERS----- / -----END EC PARAMETERS----- and -----BEGIN EC PRIVATE KEY----- / -----END EC PRIVATE KEY----- sections from the privatekey.key file.
We went to upload concat.crt to the Key Vault and again were given the same error as before. However, after re-reading the document, we were disappointed to see this quote:
We currently don’t support EC keys in PEM format.
Section: Formats of Import we support
It surprises me that Microsoft does not support elliptic curve-based keys in PEM format. I am not aware of any technical limitation on the part of the certificate itself, so this seems very much like a Microsoft-specific thing; if anyone is able to provide insight into this, I’d love to hear it.
OK, so we’ll generate a 2048-bit RSA-derived key and CSR, and then try again.
openssl genrsa -des3 -out rsaprivate.key 2048
openssl req -new -key rsaprivate.key -out RSA.csr
We uploaded the CSR to the CA as a re-key request, and waited.
When the certificate was finally issued (as cert.pem), we could take the final steps to prepare it for upload to the Key Vault. We concatenated the key and certificate together (cat rsaprivate.key >> rsacert.crt ; cat cert.pem >> rsacert.crt) and went to upload it to the Key Vault.
And yet again, it failed. After a bunch of research on security blogs and StackOverflow, it turned out that the default output format of the private key is PKCS1, while Key Vault expects it to be in PKCS8 format. So now it was time to convert it:
openssl pkcs8 -topk8 -inform PEM -outform PEM -nocrypt -in rsaprivate.key -out rsaprivate8.key
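A quick way to confirm the conversion worked is to look at the first line of each file: PKCS1 keys begin with -----BEGIN RSA PRIVATE KEY----- while unencrypted PKCS8 keys begin with -----BEGIN PRIVATE KEY-----.

head -n 1 rsaprivate.key rsaprivate8.key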
Finally, we re-concatenated the rsaprivate8.key and cert.pem files into a single rsacert8.crt file (cat rsaprivate8.key >> rsacert8.crt ; cat cert.pem >> rsacert8.crt), which we could import into Key Vault.
It worked!
We now have our SSL certificate in our HSM-backed Azure Key Vault that we can apply to our various web properties without having to store the actual certificate files anywhere, which makes our auditors very happy.
Michael Howard of Microsoft put out a great post about how easy it is to inadvertently create massive security holes in the form of SQL Injection Vulnerabilities in your HTTP-accessible Azure Logic App by not using the ‘Execute a SQL Query’ action correctly. He also gives some simple examples of how to protect yourself in the process.
To summarize: if you are not using prepared statements or stored procedures, it is extremely trivial for an attacker to construct a query that does anything from truncating or dropping tables, to changing data within the database, to getting full remote command execution using a command like SQL Server’s xp_cmdshell.
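To make the risk concrete, here is a purely hypothetical sketch; the Logic App URL and parameter name are made up. If the action builds its query by pasting the name field directly into the SQL string, a single request can destroy a table:

# The single quote closes the intended string literal, letting the
# attacker append their own statement to the end of the query
curl -X POST "https://example-logic-app.azurewebsites.net/api/invoke" \
  -H "Content-Type: application/json" \
  -d "{\"name\": \"Smith'; DROP TABLE Orders;--\"}"

A parameterized query treats that entire value as data rather than as SQL, which is why prepared statements and stored procedures shut this attack down.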
Please be extremely careful when you’re building your Logic Apps – they may be simple to build but that also means it’s just as simple to make a glaring security mistake that could cost your business time and money.
EDIT: Updated on July 10, 2019; modified second- and third-last paragraphs to show the correct process of retrieving the AWS_SECRET_ACCESS_KEY from the Key Vault and setting it as a protected environment variable
Our primary cloud is Azure, which makes building DevOps pipelines with automation scoped to a particular subscription very easy. But what happens when we want to deploy something in AWS, given that storing keys in source control is A Very Bad Idea?
Simple, we use Azure Key Vault.
First, we created a Key Vault for this purpose called company-terraform, which will specifically be used to store the various secrets for Terraform-based deployments. When you tie Azure DevOps to an Azure subscription, it creates an “application” in the Azure Enterprise Applications list, so give that application Get and List permissions to this vault.
Next, we created a secret called AmazonAPISecretKey and then set the secret’s content to the actual API key you are presented with when you enable programmatic access to an account in the AWS IAM console.
In our Azure DevOps Terraform build and release pipelines, we then added an Azure Key Vault step, selecting the appropriate subscription and Key Vault. Once selected, we added a Secrets filter of AmazonAPISecretKey, meaning the step will only ever fetch that secret on a run; if you will be adding multiple secrets that are all used in this particular pipeline, add them all to this filter list.
Finally, we can now use the string $(AmazonAPISecretKey) in any shellexec or other pipeline task to authenticate against AWS, while never having to commit the actual key to a viewable source.
Since one of the methods the Terraform AWS provider can use to authenticate is the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, we will set them up so that DevOps can use them in its various tasks.
First, open your Build or Release pipeline and select the Variables tab. Create a new variable called AWS_ACCESS_KEY_ID and set the value to your access key ID (usually something like AK49FKF4034F42DZV2VRMD). Then create a second variable called AWS_SECRET_ACCESS_KEY, which you can leave blank, but click the padlock icon next to it to tell DevOps that its contents are secret and shouldn’t be shared.
Now create a shellexec task and add the following command to it, which will set the AWS_SECRET_ACCESS_KEY environment variable to the contents of the Key Vault entry we created earlier:
echo "##vso[task.setvariable variable=AWS_SECRET_ACCESS_KEY;]$(AmazonAPISecretKey)"
And there you have it! You can now reference your AWS accounts from within your Terraform structure without ever actually exposing your keys to prying eyes!