nexxai.dev: reminders for my future self

5 Ways to Secure Your Small Business Website
Tue, 27 Apr 2021

Your small business website is likely an essential part of your marketing strategy. It may also be your e-commerce sales channel or the platform you deliver your software on. In short, you need to keep your small business website safe. However, you likely can’t afford the same cybersecurity services as the big guys. Fortunately, there is a lot you can do yourself. This quick guide from nexxai.dev can help you figure out what you need to do.

Set Strong Login Credentials

The various login credentials you use for your website are one of your most important lines of defense. Make sure you are using long, strong passwords for any accounts. Additionally, at a minimum, all accounts with administrator access should be using either two-factor authentication or SSH keys. This may seem like a lot of trouble, but it is worth it.

Additionally, you should be very cautious about who has access to your website. If you need to give access to employees or freelancers, only give them the permissions they need. For example, if someone is just posting blogs, he or she doesn’t need administrator access.

Implement SSL

Secure Sockets Layer (SSL), and its modern successor TLS, is a technology used to encrypt data passing between web browsers and website servers. It is a must-have for any small business website.

First, it ensures that no one can snoop on the traffic between your visitors and your website. This includes when you log into your website's back end from your own computer.

Second, many browsers are all but requiring HTTPS connections (achieved using SSL). It makes your website more secure, more professional-looking, and in compliance with the latest best practices. In short, you need to use this technology. According to the University of Michigan, around 80 percent of websites use HTTPS. If you aren’t, you are falling behind.

Back Up Your Website Often

You are hopefully already backing up your business data regularly. You should be doing the same with your website content. Anything that you have on your website should be backed up fairly regularly. If you post a lot of new content or capture customer data through your site, consider daily or even hourly backups. If not, you may be able to do weekly backups.

Get Help Configuring It

There are a lot of options when setting up a website, especially if you manage your own server or content management system. It is a good idea to get someone to help you set it up. This will help you to ensure that your website complies with all the latest security best practices. Even seemingly unrelated errors can cause significant vulnerabilities. Don’t risk your website or your business’s financial well-being. Consider hiring a freelancer. When you are considering an individual, look at his or her reviews from other customers. Also, make sure you have clear expectations about cost and delivery time.

Use Malware Protection

Finally, remember to use malware protection with your website hosting service. If you are renting or setting up a server on your own, you should install the appropriate anti-malware software – and keep it updated. Additionally, you will want a firewall (ideally a stand-alone network firewall). If you are using a shared hosting service, learn about your host’s security practices. Never use a host that doesn’t have a well-defined security plan.

Get Started Today

Discover more today about keeping your small business website safe. With a few best practices and the right help, you can ensure that your website is safe from cyberattacks.

About the Author

Cody McBride’s love for computers stems from high school, when he built his own computer. Today he is a trained IT technician who knows how confusing the inner workings of computers can be to most people. He is the creator of TechDeck.info, where he offers easy-to-understand, tech-related advice and troubleshooting tips.

Deploying an Azure App Service from scratch, including DNS and TLS
Fri, 11 Oct 2019

As many of you have probably gathered, over the past few weeks, I’ve been working on building a process for deploying an Azure App Service from scratch, including DNS and TLS in a single Terraform module.

Today, I write this post with success in my heart, and at the bottom, I provide copies of the necessary files for your own usage.

One of the biggest hurdles I faced was trying to integrate Cloudflare’s CDN services with Azure’s Custom Domain verification. Typically, I’ll rely on the options available in the GUI as the inclusive list of “things I can do”, so up until now, if we wanted to stand up a multi-region App Service, we had to do the following:

  1. Build and deploy the App Service, using the azurewebsites.net hostname for HTTPS for each region (R1 and R2)

    e.g. example-app-eastus.azurewebsites.net (R1), example-app-westus.azurewebsites.net (R2)
  2. Create the CNAME record for the service at Cloudflare pointing at R1, turning off proxying (orange cloud off)

    e.g. example-app.domain.com -> example-app-eastus.azurewebsites.net
  3. Add the Custom Domain on R1, using the CNAME verification method
  4. Once the hostname is verified, go back to Cloudflare and update the CNAME record for the service to point to R2

    e.g. example-app.domain.com -> example-app-westus.azurewebsites.net
  5. Add the Custom Domain on R2, using the CNAME verification method
  6. Once the hostname is verified, go back to Cloudflare and update the CNAME record for the service to point to the Traffic Manager, and also turn on proxying (orange cloud on)

While this eventually accomplishes the task, it introduces a nasty failure mode: if you ever want to add a third (or fourth or fifth…) region, you have to momentarily direct all traffic to your brand-new single instance to verify the domain, and you also have to turn off proxying, exposing the fact that you are using Azure (bad OPSEC).

After doing some digging, however, I came across a Microsoft document explaining that there is a way to add a TXT record which you can use to verify ownership of the domain without a bunch of messing around with the original record you’re dealing with.

This is great because we can just add new awverify records for each region and Azure will trust we own them, but Terraform introduces a new wrinkle: it creates the record at Cloudflare so fast that Cloudflare’s infrastructure often doesn’t have time to replicate the new entry across their fleet before you attempt the verification, which means the lookup will fail and Terraform will die.

To get around this, we added a null_resource that just executes a 30 second sleep to allow time for the record to propagate through Cloudflare’s network before attempting the lookup.

I’ve put together a copy of our Terraform modules for your perusal and usage.

Using this module will allow you to easily deploy all of your microservices in a highly available configuration by utilizing multiple regions.

Generate Terraform files for existing resources
Mon, 16 Sep 2019

You may find yourself in a position where a resource already exists in your cloud environment but was created in the respective provider’s GUI rather than in Terraform. You may feel a bit overwhelmed at first, but there are a few ways to generate Terraform files for existing resources, and we’re going to talk about the various ways today. This is also not an exhaustive list; if you have any other suggestions, please leave a comment and I’ll be sure to update this post.

Method 1 – Manual

Be warned, the manual method takes a little more time, but is not restricted to certain resource types. I prefer this method because it means that you’ll be able to see every setting that is already set on your resource with your own two eyes, which is good for sanity checking.

First, you’re going to want to create a .tf file with just the outline of the resource type you’re trying to import or generate.

For example, if I wanted to create the Terraform for a resource group called example-resource-group that had several tags attached to it, I would do:

resource "azurerm_resource_group" "example-resource-group" {
}

and then save it.

Next, I would go to the Azure GUI, find and open the resource group, and then open the ‘Properties’ section from the blade.

I would look for the Resource ID, for example /subscriptions/54ba8d50-7332-4f23-88fe-f88221f75bb3/resourceGroups/example-resource-group and copy it.

I would then open up a command prompt / terminal and import the state by running: terraform import azurerm_resource_group.example-resource-group /subscriptions/54ba8d50-7332-4f23-88fe-f88221f75bb3/resourceGroups/example-resource-group

Finally, and this is the crucial part, I would immediately run terraform plan. There may be required fields that you need to fill out before this command works, but in general, it will compare the existing state you just imported against the blank resource in the .tf file and show you all of the differences, which you can then copy into your new Terraform file, confident that you have captured all of the settings.

Example:

# azurerm_resource_group.example-resource-group will be updated in-place
   ~ resource "azurerm_resource_group" "example-resource-group" {
         id       = "/subscriptions/54ba8d50-7332-4f23-88fe-f88221f75bb3/resourceGroups/example-resource-group"
         location = "centralus"
         name     = "example-resource-group"
       ~ tags     = {
           ~ "environment" = "dev" -> null
           ~ "owner"       = "example.person" -> null
           ~ "product"     = "internal" -> null
         }
     }

A shortcut I’ve found is to just copy the entire resource section, and then replace all of the tildes (~) with spaces, and then find and remove all instances of -> null.

Method 2 – Az2tf (Azure only)

Andy Thomas (Microsoft employee) put together a tool called Az2tf which iterates over your entire subscription, and generates .tf files for most of the common types of resources, and he’s adding more all the time. Requesting a specific resource type is as simple as opening an issue and explaining which resource is missing. In my experience, he’s responded within a few hours with a solution.

Method 3 – Terraforming (AWS only)

Daisuke Fujita put together a tool called Terraforming that with a little bit of scripting can generate Terraform files for all of your AWS resources.

Method 4 – cf-terraforming (Cloudflare only)

Cloudflare put together a fantastic tool called cf-terraforming which rips through your Cloudflare tenant and generates .tf files for everything Cloudflare related. The great thing about cf-terraforming is that because it’s written by the vendor of the original product, they treat it as a first class citizen and keep it very up-to-date with any new resources they themselves add to their product. I wish all vendors would do this.

To sum things up, there are plenty of ways to generate Terraform files for existing resources. Some are more time-consuming than others, but they all share the goal of making your environment less brittle and your processes more repeatable, which will save time, money, and, most importantly, stress when an inevitable incident takes place.

Do you know of any other tools for these or other providers that can assist in bringing previously unmanaged resources under Terraform management? Leave a comment and we’ll add them to this page as soon as possible!

How to import a publicly-issued certificate into Azure Key Vault
Thu, 05 Sep 2019

Today, after spending several hours swearing and researching how to import a publicly-issued certificate into Azure Key Vault, I thought I’d share the entire process of how we did it from start to finish so that you can save yourself a bunch of time and get back to working on fun stuff, like spamming your co-workers with Cat Facts. We learned a bunch about the different encoding formats of certificates and some of their restrictions, both within Azure Key Vault as well as with the certificate types themselves. Let’s get started!

Initially, we created an elliptic curve-derived (EC) private key (using elliptic curve prime256v1), and a CSR by doing:

openssl ecparam -out privatekey.key -name prime256v1 -genkey
openssl req -new -key privatekey.key -out request.csr -sha256

making sure to not include an email address or password. I am not actually clear on what the technical reasoning behind this is, but I saw it noted on several sites.

We submitted the CSR to our certificate authority (CA) and shortly thereafter got back a signed PEM file.

We next needed to create a single PFX/PKCS12-formatted, password-protected certificate, so we grabbed our signed certificate (ServerCertificate.crt) and our CA’s intermediate certificate chain (Chain.crt) and then did:

openssl pkcs12 -export -inkey privatekey.key -in ServerCertificate.crt -certfile Chain.crt -out Certificate.pfx

But when we went to import it into the Key Vault with the correct password, it threw a general “We don’t like this certificate” error. The first thing we did was check out the provided link and saw that we could import PEM-formatted certificates directly. I didn’t remember this being the case in the past, so maybe this is a new feature?

No problem. We concatenated the certificate and key files into a single, large text file (cat ServerCertificate.crt >> concat.crt ; cat privatekey.key >> concat.crt), which would create a file called concat.crt which itself would consist of the

-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----

section from the ServerCertificate.crt file as well as the

-----BEGIN EC PARAMETERS-----
-----END EC PARAMETERS-----

and

-----BEGIN EC PRIVATE KEY-----
-----END EC PRIVATE KEY-----

sections from the privatekey.key file.

We went to upload concat.crt to the Key Vault and were again given the same error as before. However, after re-reading the document, we were disappointed to see this quote:

We currently don’t support EC keys in PEM format.

Section: Formats of Import we support

It surprises me that Microsoft does not support elliptic curve-based keys in PEM format. I am not aware of any technical limitation on the certificate side, so this seems very much like a Microsoft-specific thing; if anyone is able to provide insight into this, I’d love to hear it.

OK, we’ll generate a 2048-bit RSA-derived key and CSR, and then try again.

openssl genrsa -des3 -out rsaprivate.key 2048
openssl req -new -key rsaprivate.key -out RSA.csr

We uploaded the CSR to the CA as a re-key request, and waited.

When the certificate was finally issued (as cert.pem), we could take the final steps to prepare it for upload to the Key Vault. We concatenated the key and certificate together (cat rsaprivate.key >> rsacert.crt ; cat cert.pem >> rsacert.crt) and went to upload it to the Key Vault.

And yet again, it failed. After a bunch of research on security blogs and StackOverflow, it turns out that the default output format of the private key is PKCS#1, while Key Vault expects PKCS#8. So now, time to convert it.

openssl pkcs8 -topk8 -inform PEM -outform PEM -nocrypt -in rsaprivate.key -out rsaprivate8.key

Finally, we re-concatenated the rsaprivate8.key and cert.pem files into a single rsacert8.crt file (cat rsaprivate8.key >> rsacert8.crt ; cat cert.pem >> rsacert8.crt) which we could import into Key Vault.

It worked!

We now have our SSL certificate in our HSM-backed Azure Key Vault that we can apply to our various web properties without having to store the actual certificate files anywhere, which makes our auditors very happy.

Terraform: “Error: insufficient items for attribute "sku"; must have at least 1”
Tue, 06 Aug 2019

Last week, we were attempting to deploy a new Terraform-owned resource, but every time we ran terraform plan or terraform apply, we got the error Error: insufficient items for attribute "sku"; must have at least 1. We keep our Terraform code in an Azure DevOps project, with approvals required for any new commits even into our dev environment, so we were flummoxed.

Our first thought was that we had upgraded the Terraform azurerm provider from 1.28.0 to 1.32.0, and we knew for a fact that the azurerm_key_vault resource had changed from accepting a sku {} block to simply requiring a sku_name property. We tried every combination of having either, both, or neither of them defined, and we still received the error. We even tried downgrading back to 1.28.0 as a fallback, but it made no difference. At this point we were relatively confident that it wasn’t the provider.

The next thing we looked for was any other resource that had a sku {} block defined. This included our azurerm_app_service_plans, our azurerm_virtual_machines, and our azurerm_vpn_gateway. We searched for and commented out all of the respective declarations in our .tf files, but still we received the error.

Now we were starting to get nervous. Nothing we tried would solve the problem, and we were starting to get a backlog of requests for new resources that we couldn’t deploy because no matter what we did, whether adding or removing potentially broken code, we couldn’t deploy any new changes. To say the tension on our team was palpable would be the understatement of the year.

At this point we needed to take a step back and analyze the problem logically, so we all took a break from Terraform to clear our minds and de-stress a bit. We started to suspect something in the state file was causing the problem, but we weren’t really sure what. We decided to take the sledgehammer approach: using terraform state rm, we removed every instance of the commented-out resources we found above.

This worked. Now we could run terraform plan and terraform apply without issue, but we still weren’t sure why. That didn’t bode well if the problem recurred; we couldn’t just keep taking a sledgehammer to the environment, as it’s far too disruptive. We needed to figure out the root cause.

We opened an issue on the provider’s GitHub page for further investigation, and after some digging by other community members and Terraform employees themselves, it seems that Microsoft’s API returns a different response for App Service Plans than for any other resource when one is found to be missing. The provider assumed the response would be the same for all resources, but that turned out to be a bad assumption.

This turned out to be the key for us. Someone had deleted several App Service Plans from the Azure portal (thinking they were not being used), so our assumption is that when the provider checks the status of a missing App Service Plan, the unusual response makes Terraform think the plan actually exists, even though there is no sku {} data in it, which in turn makes Terraform report that specific data as missing.

Knowing the core problem, the error message Error: insufficient items for attribute "sku"; must have at least 1 kind of makes sense now: the sku attribute is missing at least one item. It just doesn’t make clear that the “insufficient items” are on the Azure side, not the Terraform / .tf side.

They’ve added a workaround in the provider until Microsoft updates the API to respond like all of the other resources.

Have you seen this error before? What did you do to solve it?

Postman collections
Tue, 02 Jul 2019

An open letter to API developers

Dear API developer,

Let me first thank you for making the decision to offer your data for me to consume. In a time where many companies are holding on to their data as tightly as they can, it is commendable that your company is forward-thinking enough to realize that the world is better when we can share information and grow together.

Depending on your industry, and whether the data being offered was the core offering of your organization or the by-product of another process, I realize that it can be quite the fight to convince senior leadership that the data being offered is more valuable when it is made (semi-)public. I’m sure you had to sit through many boring meetings while extolling the virtues of sharing vs. hoarding, time which could have been better spent doing almost anything else.

You have gone through plenty of testing, ensuring that the data from your API is returned correctly, that it is formatted logically, and that it is (hopefully) highly available. You have fed it countless variations of request values and confirmed that the responses match what is expected. And you probably did at least some of that – if not a majority – with Postman.

Why, then, are you making your customers rebuild their own Postman collection instead of just sharing the one you and your team have built? Not only does it save time on your consumer’s side, it ensures that as you update your API, they will immediately have the most up-to-date version of what a correctly formatted request looks like, rather than having to dig through some esoteric documentation that shows that a request must now be submitted in all lowercase, or must have the JSON body in a correct order. Even if it’s not to JSON API spec, I can quickly compare my request to yours and see my problem right away, saving me a call to you or your support team.

My personal suggestion is to add the cost of your Postman Pro licenses and maintenance of the collection to the monthly subscription costs your customer is paying for, but the financial decisions are ultimately up to you. All I ask is that you give me a way to immediately import your API definition to my Postman instance, and keep it up to date over the lifetime of our relationship.

If I can get up and running quickly, that is worth much, much more to me than a few extra fields that I may or may not ever use. It will make me much more likely to obtain and retain your services, even if they’re not otherwise as fully featured as your competitor’s.

With my jaw mostly unclenched,
nexxai

How to spam your co-workers with cat facts in 5 easy steps
Fri, 21 Jun 2019

Step 1 – Find a cat facts API

https://catfact.ninja/

Well that was easy.

Step 2 – Build a serverless, Azure Logic App using Terraform that will connect to the API and spam your co-workers with a new fact every 5 minutes

https://github.com/nexxai/cat-facts/

Ok that part was easy too, but come on, it’s gotta be at least a little difficu–

Step 3 – Create an Office 365 connection that your Logic App can use

Open the Azure Logic Apps blade

You have 60 seconds to manually add a step that connects your Office 365 account to this app. ‘Get Calendars’ requires the least configuration.

Step 4 – Wait for your co-workers’ email clients to play their New Email alert sound

Start laughing, and keep laughing every 5 minutes from now until forever, asserting your feline dominance over your team.

“But that was only 4 steps, where’s number fi

Step 5 – Have Senior PM of Microsoft Azure Functions see your stupid app and tweet about it

Sure, no prob–wait, what?

Using Terraform workspaces for fun and profit – Part 1
Thu, 13 Jun 2019

We are a fairly small company (~350 employees) and a very small cloud team (myself and one other guy), so making use of automation where it’s cheap or free is imperative if we don’t want to get overwhelmed with the amount of work being thrown our way. One major challenge we faced was that for compliance reasons, we needed to have separate environments for development, QA, and production, but at the same time minimize the amount of time it takes to promote successful projects from that same development environment to QA, and then eventually from QA to prod.

This is the path we took.

First, we created Terraform workspaces for each one:

$ terraform workspace new dev
$ terraform workspace new qa
$ terraform workspace new prod

Next, we created Azure subscriptions for each environment in PowerShell:

Install-Module Az.Subscription -AllowPrerelease
Get-AzEnrollmentAccount
New-AzSubscription -OfferType MS-AZR-0148P -Name "IT.TechServices.DEV" -EnrollmentAccountObjectId <enrollmentAccountObjectId>
New-AzSubscription -OfferType MS-AZR-0148P -Name "IT.TechServices.QA" -EnrollmentAccountObjectId <enrollmentAccountObjectId>
New-AzSubscription -OfferType MS-AZR-0017P -Name "IT.TechServices.PRD" -EnrollmentAccountObjectId <enrollmentAccountObjectId>

Note: The OfferType property is different between the DEV/QA subscriptions and the PRD subscription. This is because as part of our Enterprise Agreement with Microsoft, we have access to separate Dev/Test subscription pricing on the condition that we don’t run any production workloads in it. I am not sure if these values are universal, so if they don’t work for you, please check with your MS rep for the correct ones.

Then we created a sub-folder called environments, and in that folder we created a Terraform variable file for each respective environment (dev.tfvars, qa.tfvars, and prod.tfvars), each containing the appropriate Azure subscription ID in a variable called subscription_id and the name of the environment in a variable called environment_name. We then created a main.tf file containing the declarations variable subscription_id {} and variable environment_name {}. So, for example, dev.tfvars would contain subscription_id = "109b6c11-e163-477e-8453-7613249447c" and environment_name = "dev", and qa.tfvars would contain subscription_id = "95958666-8ab6-3980-828a-23f7382b9c5a" and environment_name = "qa".

/terraform    
 |    
 +--+ /environments    
 |         |    
 |         +--+ dev.tfvars
 |         +--+ qa.tfvars    
 |         +--+ prod.tfvars    
 | 
 +--+ main.tf

OK, let’s say we wanted to create some service called example-service and which required a resource group to start placing components in. If we wanted to do that in the Azure Canada Central region with some descriptive tags for sorting and billing, we would do the following:

resource "azurerm_resource_group" "example-service" {
  name     = "example-service-${var.environment_name}"
  location = "canadacentral"
  tags = {
    environment = "${var.environment_name}"
    owner       = "justin.smith"
    product     = "example-product"
    department  = "tech.services"
  }
}

We’re using the ${var.environment_name} placeholder which means that we only have to create a single .tf file for each resource, and it will be named according to the environment we specify.

Finally, we created a git repo for our entire Terraform collection and created 3 branches within it: dev, qa, and prod, each with successively more restrictions on committing than the previous. We’ve also set up Azure DevOps build and release pipelines which are triggered each time code is committed to the respective branch.

For example, anyone on our team can deploy changes to the dev branch because we don’t really care what happens in it. With the qa branch, it requires at least one of either myself or my co-worker to approve the commit. We want to make sure that people aren’t just adding unnecessary resources to QA, but it’s still not “live” so restrictions are relaxed somewhat. The prod branch requires the change ticket number from our ticketing system to be present in the Pull Request before being approved, ensuring that it has gone through the appropriate Change Approval Process before it becomes part of our daily operational management routine.

Now we’re ready to deploy! (Please note that the steps below simply replicate our pipelines in a manual way and will work for this demo. It is considered best practice to automate these steps once you’re comfortable with the process.)

First, we ensure we’re in the dev workspace:

$ terraform workspace select dev

Then, to test to make sure that we’re not complete idiots, we run the Terraform plan, including the appropriate environment’s variable file:

$ terraform plan -var-file=environments/dev.tfvars

And assuming nothing is screaming, we apply it:

$ terraform apply -var-file=environments/dev.tfvars

We now have a resource group in our Azure DEV subscription that we can use to deploy resources into, and named example-service-dev!

Now let’s say we perform the various development tasks like creating the other resources via Terraform and we’ve applied them using the same method as above (using the ${var.environment_name} placeholder at the end of each resource name), and we’re happy with how things look in the development environment. All we have to do is switch to the qa workspace and plan/apply it, using the QA variable file:

$ terraform workspace select qa
$ terraform plan -var-file=environments/qa.tfvars
$ terraform apply -var-file=environments/qa.tfvars

Now you’ve got example-service-qa all set up and ready to go.

And finally, once your QA team has validated and tested the service, you just run it one more time, this time using the prod environment settings:

$ terraform workspace select prod
$ terraform plan -var-file=environments/prod.tfvars
$ terraform apply -var-file=environments/prod.tfvars

And assuming that between each promotion (dev-to-qa, and qa-to-prod), the Terraform files were committed and promoted correctly, you should now have 3 fully functional environments (example-service-dev, example-service-qa, and example-service-prod) with only one set of Terraform files!

In the next part, we’ll walk through building an Azure DevOps Build pipeline to begin automating the deployment so that we don’t need to be so hands-on every time a Terraform change is made!

What is this place?
Thu, 13 Jun 2019

I am constantly both impressed with myself at how I solve complicated problems and disgusted at myself at how I manage to forget how I solved those same problems at some point in the future.

This blog will primarily be used as a more permanent home for the various solutions that I come up with as well as random thoughts I have about how best to utilize various cloud resources. Sometimes they may be specifically related to the industry I work in (airline / travel), and sometimes they may be more general.

If not a single person on the internet so much as visits this site, let alone finds its content interesting, that’s fine. I just wanted to put together a digital notebook that didn’t look like any of the other note-taking apps out there.
