Thursday, March 5, 2020

Upgrading Terraform from 11 to 12

The other week I ran into an interesting problem. An application running on AWS Elastic Beanstalk, provisioned with Terraform, needed to be moved from a Classic Load Balancer to an Application Load Balancer. The Terraform modules used to create the infrastructure were from Cloud Posse, and the updated modules required Terraform v0.12 and changed their inputs and outputs. With so many changes, how could we update the infrastructure?

Note: all the Terraform code lives alongside the application code in a git repository. The Terraform outputs are consumed by the CI/CD system to deploy the application; for example, the container registry created by Terraform is exposed as an output, and CI/CD uses it as the destination for pushing the container image.
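A minimal sketch of that pattern, assuming the registry is ECR; the resource and output names here are illustrative, not the real ones:

    # Illustrative ECR repository for the application image.
    resource "aws_ecr_repository" "app" {
      name = "myapp"
    }

    # CI/CD reads this output and uses it as the push target for the
    # application's container image.
    output "ecr_repository_url" {
      value = aws_ecr_repository.app.repository_url
    }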

Upgrading from Terraform v0.11 (TF11) to v0.12 (TF12) is fairly straightforward. Since all the Terraform code used modules, there wasn't much to change except variables. One of the benefits of using modules is that all the heavy lifting is abstracted away, so we didn't have to worry about much. However, there is a gotcha.
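The variable changes were mostly mechanical. A small, made-up example of the kind of edit involved (the 0.11 equivalents are shown in comments):

    variable "environment_name" {
      type = string                      # 0.11: type = "string"
    }

    module "elastic_beanstalk_environment" {
      source = "./modules/environment"   # illustrative local path
      name   = var.environment_name      # 0.11: name = "${var.environment_name}"
    }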

Because I was upgrading the Terraform code, I could not manage both the old and new stacks from the same branch: the old stack relied on modules tied to TF11, while the new stack relied on modules tied to TF12.
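In practice each branch pinned its own releases of the Cloud Posse Elastic Beanstalk environment module, roughly like this (the version tags are placeholders, not the exact releases we used):

    # TF11 branch: pinned to a pre-0.12 release of the module.
    module "elastic_beanstalk_environment" {
      source = "git::https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment.git?ref=tags/0.10.0"
      # ... inputs omitted ...
    }

    # TF12 branch: pinned to a release that requires Terraform 0.12.
    module "elastic_beanstalk_environment" {
      source = "git::https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment.git?ref=tags/0.20.0"
      # ... inputs omitted ...
    }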

The plan: upgrade the Terraform code and modules in a branch, point Terraform at a new state file, build new infrastructure, deploy the application to Elastic Beanstalk, and cut over to the new stack with a DNS change.
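One way to 'specify a new state file' is to change the state key in the backend configuration on the TF12 branch. A sketch, assuming an S3 backend (the bucket and key names are made up):

    terraform {
      backend "s3" {
        bucket = "example-terraform-state"
        key    = "myapp/tf12/terraform.tfstate"   # the old stack's state stays at its original key
        region = "us-east-1"
      }
    }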

Seems easy, but there was another gotcha, partly due to the pattern we use to maintain applications in AWS: each application is deployed within its own AWS account. This wasn't a new application but a re-deploy of the same app on new infrastructure, so it all happens in the same AWS account. The new resources to be built by TF12 conflicted with resources still managed by TF11, e.g. the Elastic Beanstalk application name, the S3 bucket, and so on. Running Terraform with the new TF12 code failed because of these naming conflicts.

To work around that I used the 'namespace' concept you'll see in the Cloud Posse modules. 'namespace' is an arbitrary string that is combined with a few other variables to create uniquely named resources within an AWS account. Setting a different 'namespace' allowed Terraform to build the new resources alongside the old ones in the same AWS account without naming conflicts.
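Cloud Posse modules feed 'namespace', 'stage', and 'name' into a label module that builds resource names like namespace-stage-name. A sketch of the workaround with made-up values (the module version tag is a placeholder):

    module "elastic_beanstalk_application" {
      source    = "git::https://github.com/cloudposse/terraform-aws-elastic-beanstalk-application.git?ref=tags/0.5.0"
      namespace = "acme-v2"   # the old stack used "acme"
      stage     = "prod"
      name      = "myapp"
      # Resources are named like "acme-v2-prod-myapp", which does not collide
      # with the TF11 stack's "acme-prod-myapp" resources.
    }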

Once the new stack was built, the application deployed, and the cutover complete, I switched back to the branch containing the TF11 code, destroyed the old stack, and then merged my TF12 branch into the master branch. The master branch now reflected the 'true' infrastructure state.
