Purge a soft-deleted Azure API Management instance

Azure recently implemented a change to the API Management service whereby deleting an instance only puts it into a soft-deleted state rather than completely nuking it from orbit. This may be desirable for data recovery purposes, but it means that if you run a terraform destroy on an environment with an APIM instance in it and then try to rebuild that environment, the rebuild will fail because the name you’re trying to use is still being held by the previously removed instance. Since neither the Azure CLI nor Az PowerShell natively supports purging, I’m going to show you how to manually purge a soft-deleted Azure API Management instance.

NOTE: The script below uses the basic Az PowerShell tools but, with a little elbow grease, could be adapted to bash/zsh (provided you have a way of retrieving your Azure access token using OAuth).

# Get an access token for the currently signed-in Az context
$token = Get-AzAccessToken

# DELETE against the deletedservices purge endpoint for the soft-deleted instance
$request = @{
    Method  = 'DELETE'
    Uri     = "https://management.azure.com/subscriptions/{subscriptionGuid}/providers/Microsoft.ApiManagement/locations/{region}/deletedservices/{apimName}?api-version=2020-06-01-preview"
    Headers = @{
        Authorization = "Bearer $($token.Token)"
    }
}

Invoke-RestMethod @request

The only values you’ll need to supply are the subscriptionGuid, region, and apimName in the Uri.
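
If you want to confirm what’s actually lingering before you purge, the same token can be used against the corresponding list endpoint. Here’s a rough sketch; I’m assuming the same preview API version exposes the deletedservices list operation, and the subscription GUID is a placeholder:

# List any soft-deleted APIM instances in the subscription so you can
# confirm the name and region before issuing the purge above
$subscriptionGuid = "00000000-0000-0000-0000-000000000000"   # placeholder

$deleted = Invoke-RestMethod -Method GET `
    -Uri "https://management.azure.com/subscriptions/$subscriptionGuid/providers/Microsoft.ApiManagement/deletedservices?api-version=2020-06-01-preview" `
    -Headers @{ Authorization = "Bearer $($token.Token)" }

$deleted.value | Select-Object name, location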

Now the next time you’re stuck wondering why you can’t tear down and rebuild your environments with your IaC tool of choice, you’ll know how to purge a soft-deleted Azure API Management instance.

Source: Microsoft docs

Convert a CRT SSL certificate chain to PFX format

Many SSL certificate authorities (CAs) do not provide certificates in .PFX format, which means that if you plan on installing them on something like an Azure App Service (which expects .PFX), you may run into issues. Today, let’s figure out how to convert a CRT SSL certificate chain to PFX format.

First, let’s generate a private key and certificate signing request. Run the following command, and answer the questions as accurately as possible. The private key file (domain.key) should be kept secret and protected.

openssl req \
        -newkey rsa:2048 -nodes -keyout domain.key \
        -out domain.csr

Next, take the contents of domain.csr (it is just a plaintext file with your answers and some other non-secret information base64-encoded; it can be opened in any text editor) and request your certificate through your CA. This process varies per certificate authority, and so is out of scope for this article.

[Time passes]

Now, your CA provides you with a .ZIP file containing the following files.

your_domain_com.crt
AAACertificateServices.crt
DomainValidationSecureServerCA.crt
USERTrustRSAAAACA.crt

(where your_domain_com.crt is your site’s actual certificate and the other .CRT files are the intermediate and root certificates that allow a browser to chain up to a trusted root; the filenames and number of files will almost certainly differ between certificate authorities, but the point is that there will be some number of .CRT files and that they are all important)

Extract those files into the same folder as the domain.key file from earlier.
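
If you’re doing this from PowerShell, Expand-Archive will handle the extraction; the .ZIP filename below is just a stand-in for whatever your CA sent you.

# Extract the CA's bundle into the current folder, alongside domain.key
Expand-Archive -Path .\your_certificate_bundle.zip -DestinationPath .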

Finally, let’s combine our certificate with the rest of the chain to create a single .PFX file by running the following command. Your site’s certificate is specified in the -in parameter, and each chain certificate gets its own -certfile entry.

openssl pkcs12 -export -out certificate.pfx \
        -inkey domain.key \
        -in your_domain_com.crt \
        -certfile AAACertificateServices.crt \
        -certfile DomainValidationSecureServerCA.crt \
        -certfile USERTrustRSAAAACA.crt

NOTE: Azure App Services and Azure Key Vaults require a password-protected .PFX file, so ensure that you enter one when prompted. Later, when you upload the certificate and are asked for the .PFX file and a password, the password you created here is the one it’s referring to.
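
Before you upload anything, it’s worth a quick sanity check that the password-protected .PFX actually loads. Here’s one way to do it with PowerShell’s built-in Get-PfxCertificate (it will prompt for the password you just set):

# Load the .PFX and display its subject/issuer/expiry to confirm the
# file and password are good before uploading it anywhere
Get-PfxCertificate -FilePath .\certificate.pfx |
    Format-List Subject, Issuer, NotAfter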

And you’re done! You now have a file in that folder (certificate.pfx) that you can upload/install to ensure your site is protected against MITM (man-in-the-middle) attacks.
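
And if Azure Key Vault is your destination, here’s a rough sketch of the upload using Import-AzKeyVaultCertificate; the vault name and certificate name below are placeholders, and the password is the one you chose during the export above:

# Upload the password-protected .PFX into a Key Vault certificate
$pfxPassword = Read-Host -AsSecureString -Prompt "PFX password"
Import-AzKeyVaultCertificate -VaultName "example-key-vault" `
    -Name "example-certificate" `
    -FilePath .\certificate.pfx `
    -Password $pfxPassword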

Cannot use the SKU Basic with File Change Audit for site

The problem

We’ve recently begun attempting to scale our Azure App Services up and down for our test environments: scaling them up to match production performance levels (SKU: PremiumV2) during the day and then back down to minimal (SKU: Basic) at the end of the working day to save on costs. Just in the last month or two, however, we’ve started to get the error “Cannot use the SKU Basic with File Change Audit for site XXX-XXX-XXX-XXX”.
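
For reference, the scale operation itself is nothing exotic; it’s roughly something like this with Set-AzAppServicePlan (the resource group and plan names here are placeholders):

# Scale up to production-equivalent performance in the morning...
Set-AzAppServicePlan -ResourceGroupName "example-rg" -Name "example-plan" -Tier "PremiumV2"

# ...and back down to Basic at the end of the working day
Set-AzAppServicePlan -ResourceGroupName "example-rg" -Name "example-plan" -Tier "Basic"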

Initially, we thought it was because we had a Diagnostic setting tracking AppServiceFileAuditLogs, but even after removing that Diagnostic setting before attempting the scale-down, the issue persisted.

After banging our heads against the wall with no progress, we opened a low-severity ticket with Azure Support to understand what was going on. They suggested we remove the following application settings from the App Service:

  • DIAGNOSTICS_AZUREBLOBCONTAINERSASURL
  • DIAGNOSTICS_AZUREBLOBRETENTIONINDAYS
  • DiagnosticServices_EXTENSION_VERSION
  • WEBSITE_HTTPLOGGING_CONTAINER_URL
  • WEBSITE_HTTPLOGGING_RETENTION_DAYS

Again, these did not have any effect.

It was around this time that I was browsing the portal for something else and happened to notice the “JSON View” option at the top right of the App Service, so I checked it out and grep’d for “audit” just to see what I’d find.

Bingo: fileChangeAuditEnabled

Not so bingo: fileChangeAuditEnabled: null

But seeing that setting got me thinking. What if there’s a bug in what JSON View is showing? The error we’re receiving clearly says the feature is enabled, but the portal is showing null; what if there’s some kind of type mismatch going on behind the portal that displays null but actually has a value set? And what if we could use a different mechanism to test that theory?

Well, it just so happens that Azure PowerShell has a Get-AzResource cmdlet that can do just that, and this blog post shows us how.

The solution

First, let’s get the resource:

$appServiceRG = "example-resource-group"
$appServiceName = "example-app-service-name"
$config = Get-AzResource -ResourceGroupName $appServiceRG `
    -ResourceType "Microsoft.Web/sites/config" `
    -ResourceName "$($appServiceName)/web" `
    -apiVersion 2016-08-01

We now have an object in $config whose properties we can check by doing:

$config.Properties

And there it is:

fileChangeAuditEnabled                 : True

Now all we need to do is set it to false (and also unset a property called ReservedInstanceCount, which is a duplicate of preWarmedInstanceCount but cannot be included when we try to update the other setting; I assume Azure just keeps it around for legacy reasons):

$config.Properties.fileChangeAuditEnabled = "false"
$config.Properties.PSObject.Properties.Remove('ReservedInstanceCount')

Next, as per the suggestion from Parameswaran in the comments (thank you!), create a new array (since existing arrays are of fixed size and cannot be modified) while removing AppServiceFileAuditLogs from the list of azureMonitorLogCategories:

$newCategories = @()

ForEach ($entry in $config.Properties.azureMonitorLogCategories) {
    If ($entry -ne "AppServiceFileAuditLogs") {
        $newCategories += $entry
    }
}

$config.Properties.azureMonitorLogCategories = $newCategories

And finally, let’s set the setting:

$config | Set-AzResource -Force

Next, for any Deployment Slots you have on this resource, repeat these steps, but use the following resource retrieval code:

$config = Get-AzResource -ResourceGroupName $appServiceRG `
    -ResourceType "Microsoft.Web/sites/slots" `
    -ResourceName "$($appServiceName)" `
    -apiVersion 2016-08-01
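
If you’re not sure which slots exist, something like the following (assuming the Az.Websites module is installed, and reusing the variables from above) will list them:

# List all deployment slots so you know which ones need the same fix
Get-AzWebAppSlot -ResourceGroupName $appServiceRG -Name $appServiceName |
    Select-Object Name, State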

Now, when you try to scale down from a PremiumV2 SKU to a Basic SKU, you will no longer receive the error of “Cannot use the SKU Basic with File Change Audit for site XXX-XXX-XXX-XXX”.
