Find a missing EC2 instance in AWS using just the command line

Today we found ourselves in a position where we had a Windows server – with only the Server Core experience installed – somewhere in EC2, but we could not figure out which AWS account it actually lived in. This meant we were limited to finding the missing EC2 instance in AWS using just the command line.

Most cloud providers offer an instance metadata endpoint that you can query to get some basic info about the instance itself. It usually operates on http://169.254.169.254 (that IP is not a placeholder, it’s literally the IP you query) and AWS is no different. As per the AWS documentation, if you hit http://169.254.169.254/latest/meta-data/ you can get some information about the running instance, which was exactly what we needed.

The first thing we knew was that we needed to be in PowerShell, since the basic Windows command line doesn’t have an HTTP client.

Microsoft Windows [Version 10.0.20348.1070]
(c) Microsoft Corporation. All rights reserved.

C:\Users\nexxai>powershell.exe
Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.

Install the latest PowerShell for new features and improvements! https://aka.ms/PSWindows

PS C:\Users\nexxai>

Next, we tried the Invoke-WebRequest cmdlet, but since this was Server Core, the Internet Explorer (lol) components that this cmdlet apparently needs were not installed.
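(As an aside: on Windows PowerShell 5.x, Invoke-WebRequest has a -UseBasicParsing switch that skips the IE-based parsing engine, which should also get around this dependency:)

PS C:\Users\nexxai>Invoke-WebRequest -UseBasicParsing -Uri http://169.254.169.254/latest/meta-data/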

Thinking on our feet, we considered that maybe the Invoke-RestMethod cmdlet didn’t have the IE dependency, since it is strictly used for REST-based calls and may have been implemented without any IE components. Success.

PS C:\Users\nexxai>Invoke-RestMethod -Uri http://169.254.169.254/latest/meta-data/
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
events/
hostname
iam/
instance-action
instance-id
instance-life-cycle
instance-type
local-hostname
local-ipv4
mac
metrics/
network/
placement/
profile
public-hostname
public-ipv4
public-keys/
reservation-id
security-groups
services/

We tried a couple of different endpoints, but then saw the iam entry and gave it a shot.

PS C:\Users\nexxai>Invoke-RestMethod -Uri http://169.254.169.254/latest/meta-data/iam/
info
security-credentials/

Then naturally we checked out info.

PS C:\Users\nexxai>Invoke-RestMethod -Uri http://169.254.169.254/latest/meta-data/iam/info

I can’t share the results of this command for obvious reasons, but contained within the results was the ARN of the IAM role that the EC2 instance was associated with, which – and here’s the important part – includes the account number! Yes!
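For reference, the response is a small JSON document shaped roughly like the following (all values here are placeholders; the 123456789012 portion of the InstanceProfileArn is the account number we were after):

{
  "Code" : "Success",
  "LastUpdated" : "2019-07-10T12:00:00Z",
  "InstanceProfileArn" : "arn:aws:iam::123456789012:instance-profile/example-profile",
  "InstanceProfileId" : "AIPAEXAMPLEEXAMPLE"
}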

After some more digging, it turns out that if you have the AWS CLI installed on the VM (unsure if this is installed by default), you can also run aws sts get-caller-identity, which will show the account the instance is running in.
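Its output looks roughly like this (again, placeholder values):

PS C:\Users\nexxai>aws sts get-caller-identity
{
    "UserId": "AROAEXAMPLEEXAMPLE:i-0123456789abcdef0",
    "Account": "123456789012",
    "Arn": "arn:aws:sts::123456789012:assumed-role/example-role/i-0123456789abcdef0"
}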

Once we had the account number, we were quickly able to locate the rogue instance and deal with the original problem we had been chasing. I hope you never have to find a missing EC2 instance using just the command line, but if you do, hopefully we’ve just made it a little easier.

Generate Terraform files for existing resources

You may find yourself in a position where a resource already exists in your cloud environment but was created in the respective provider’s GUI rather than in Terraform. You may feel a bit overwhelmed at first, but there are a few ways to generate Terraform files for existing resources, and we’re going to walk through them today. This is not an exhaustive list; if you have any other suggestions, please leave a comment and I’ll be sure to update this post.

Method 1 – Manual

Be warned, the manual method takes a little more time, but is not restricted to certain resource types. I prefer this method because it means that you’ll be able to see every setting that is already set on your resource with your own two eyes, which is good for sanity checking.

First, you’re going to want to create a .tf file with just the outline of the resource type you’re trying to import or generate.

For example, if I wanted to create the Terraform for a resource group called example-resource-group that had several tags attached to it, I would do:

resource "azurerm_resource_group" "example-resource-group" {
}

and then save it.

Next, I would go to the Azure GUI, find and open the resource group, and then open the ‘Properties’ section from the blade.

I would look for the Resource ID, for example /subscriptions/54ba8d50-7332-4f23-88fe-f88221f75bb3/resourceGroups/example-resource-group and copy it.

I would then open up a command prompt / terminal and import the state by running:

terraform import azurerm_resource_group.example-resource-group /subscriptions/54ba8d50-7332-4f23-88fe-f88221f75bb3/resourceGroups/example-resource-group

Finally, and this is the crucial part, I would immediately run terraform plan. There may be required fields that you will need to fill out before this command works, but in general, it will compare the existing state you just imported against the blank resource in the .tf file and show you all of the differences. You can then copy those into your new Terraform file and be confident that you have captured all of the settings.

Example:

# azurerm_resource_group.example-resource-group will be updated in-place
   ~ resource "azurerm_resource_group" "example-resource-group" {
         id       = "/subscriptions/54ba8d50-7332-4f23-88fe-f88221f75bb3/resourceGroups/example-resource-group"
         location = "centralus"
         name     = "example-resource-group"
       ~ tags     = {
           ~ "environment" = "dev" -> null
           ~ "owner"       = "example.person" -> null
           ~ "product"     = "internal" -> null
         }
     }

A shortcut I’ve found is to copy the entire resource section, replace all of the tildes (~) with spaces, and then find and remove all instances of -> null.
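If you’d rather script that cleanup, here’s a quick sketch with sed (assuming you’ve pasted the copied block into a file; the imported.tf filename is just an example):

sed -e 's/~/ /g' -e 's/ -> null//g' imported.tf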

Method 2 – Az2tf (Azure only)

Andy Thomas (a Microsoft employee) put together a tool called Az2tf, which iterates over your entire subscription and generates .tf files for most of the common resource types, with more being added all the time. Requesting a specific resource type is as simple as opening an issue and explaining which resource is missing. In my experience, he’s responded within a few hours with a solution.

Method 3 – Terraforming (AWS only)

Daisuke Fujita put together a tool called Terraforming that, with a little bit of scripting, can generate Terraform files for all of your AWS resources.
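For example, a sketch of that scripting, assuming your AWS credentials are already set in the environment (the subcommand names here – s3, ec2, sg, vpc – are a sample from the tool’s README; run terraforming help for the full list):

for r in s3 ec2 sg vpc; do
  terraforming $r > $r.tf
done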

Method 4 – cf-terraforming (Cloudflare only)

Cloudflare put together a fantastic tool called cf-terraforming which rips through your Cloudflare tenant and generates .tf files for everything Cloudflare-related. The great thing about cf-terraforming is that because it’s written by the vendor of the original product, they treat it as a first-class citizen and keep it very up to date with any new resources they add to their product. I wish all vendors would do this.
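As a sketch of what usage looks like (the generate subcommand and --resource-type flag are taken from the project’s README at the time of writing, so double-check there for current syntax; the zone ID is a placeholder and your API token is expected in the environment):

cf-terraforming generate --resource-type "cloudflare_record" --zone <zone-id> > dns_records.tf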

To sum things up, there are plenty of ways to generate Terraform files for existing resources. Some are more time-consuming than others, but they all have the goal of making your environment less brittle and your processes more repeatable, which will save time, money, and, most importantly, stress when the inevitable incident takes place.

Do you know of any other tools for these or other providers that can assist in bringing previously unmanaged resources under Terraform management? Leave a comment and we’ll add them to this page as soon as possible!

Add your AWS API key info in a Key Vault for Terraform

EDIT: Updated on July 10, 2019; modified second- and third-last paragraphs to show the correct process of retrieving the AWS_SECRET_ACCESS_KEY from the Key Vault and setting it as a protected environment variable

Our primary cloud is Azure, which makes building DevOps pipelines with automation scoped to a particular subscription very easy. But what happens when we want to deploy something in AWS, given that storing keys in source control is A Very Bad Idea™?

Simple, we use Azure Key Vault.

First, we created a Key Vault called company-terraform, specifically to store the various secrets for Terraform-based deployments. When you tie Azure DevOps to an Azure subscription, it creates an “application” in the Azure Enterprise Applications list, so give that application Get and List permissions on this vault.
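(If you prefer the CLI to the portal, a rough equivalent with the az CLI; the resource group, location, and application ID are placeholders:)

az keyvault create --name company-terraform --resource-group <resource-group> --location <location>
az keyvault set-policy --name company-terraform --spn <application-id> --secret-permissions get list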

Next, we created a secret called AmazonAPISecretKey and set its content to the actual secret access key you are presented with when you enable programmatic access for an account in the AWS IAM console.
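(The same step from the CLI, if you prefer; the value is obviously yours to fill in:)

az keyvault secret set --vault-name company-terraform --name AmazonAPISecretKey --value "<your-secret-access-key>"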

In our Azure DevOps Terraform build and release pipelines, we then added an Azure Key Vault step, selecting the appropriate subscription and Key Vault. Once selected, we added AmazonAPISecretKey to the Secrets filter, meaning the task will only ever fetch that secret on a run; if you will be adding multiple secrets that will all be used in this particular pipeline, add them to this filter list as well.

Finally, we can now use the string $(AmazonAPISecretKey) in any shellexec or other pipeline task to authenticate against AWS, while never having to commit the actual key to a viewable source.

Since one of the methods the Terraform AWS provider can use to authenticate is by using the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, we will set them up so that DevOps can use them in its various tasks.
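For illustration, here’s a minimal provider block that leans on those environment variables; note there are no credentials in the file itself, and the region is just a placeholder:

provider "aws" {
  # No access_key/secret_key here; the provider reads
  # AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from the environment.
  region = "us-east-1"
}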

First, open your Build or Release pipeline and select the Variables tab. Create a new variable called AWS_ACCESS_KEY_ID and set the value to your access key ID (these usually start with AKIA, for example AKIAIOSFODNN7EXAMPLE). Then create a second variable called AWS_SECRET_ACCESS_KEY, which you can leave blank, but click the padlock icon next to it to tell DevOps that its contents are secret and shouldn’t be shared.

Now create a shellexec task and add the following command to it, which will set the AWS_SECRET_ACCESS_KEY environment variable to the contents of the Key Vault entry we created earlier:

echo "##vso[task.setvariable variable=AWS_SECRET_ACCESS_KEY;]$(AmazonAPISecretKey)"

And there you have it! You can now reference your AWS accounts from within your Terraform structure without ever actually exposing your keys to prying eyes!