Table of contents
- Introduction
- Goal: use multiple profiles when deploying infrastructure
- Configuring multiple providers
- Terraform’s backend configuration
- Using a Domain profile to deploy a hosted zone
- Using an Infrastructure profile to deploy an ACM certificate
- Conclusion: multi-account environments enhance infrastructure security
Introduction
Terraform is an open-source Infrastructure as Code (IaC) tool that lets you automatically build infrastructure defined as code. With Terraform it is very easy to deploy infrastructure (e.g. load balancers, VPCs, virtual machines), but unfortunately it is also very easy to destroy it. Some pieces of infrastructure, such as domain registrations or hosted zones, should be protected from accidental deletion. The safest way to avoid this problem (besides using the prevent_destroy lifecycle option) is to isolate workloads, keeping those critical pieces of infrastructure in dedicated provider accounts. Hence, when developing infrastructure with Terraform it is common practice to employ multiple cloud provider accounts.
Admittedly, it is nothing new that Terraform lets you select the provider account used to deploy each resource through the provider argument. Still, this is one of my favorite Terraform practices, so I decided to document it here.
This post will demonstrate how, using a multi-account environment with AWS as the cloud provider, one can deploy a hosted zone and an SSL/TLS X.509 certificate in two different accounts.
Goal: use multiple profiles when deploying infrastructure
Configuring multiple providers
First, I will clone the corresponding GitHub repo and update the cloud profile data to reference two different profiles. Note that this file can only be found in the global/providers/ folder.
git clone https://github.com/TorresAWS/aws-profiles
cd global/providers/
vi cloud.tf # make sure you update your AWS profile info
Here is an example of my $HOME/.aws/credentials file
vi $HOME/.aws/credentials
[Domain]
#Associated with email email1@gmail.com
#Account #1
aws_access_key_id = AKIAZA6SHHSHSFFKS
aws_secret_access_key = 5RMzkJmBXFakeLtU+KMe4a2ygjAQ/5X5
region = us-east-1
output = json
[Infrastructure]
#Associated with email email2@gmail.com
#Account #2
aws_access_key_id = AKIA2344553N67CLFN5
aws_secret_access_key = gzXKcDvfakeagainDL5+UMYN9bSE87dFdE
region = us-east-1
output = json
The profile names in this file need to match the profile names referenced in global/providers/cloud.tf:
vi global/providers/cloud.tf
provider "aws" {
  shared_config_files      = ["~/.aws/config"]
  shared_credentials_files = ["~/.aws/credentials"]
  alias                    = "Infrastructure"
  profile                  = "Infrastructure"
}
provider "aws" {
  shared_config_files      = ["~/.aws/config"]
  shared_credentials_files = ["~/.aws/credentials"]
  alias                    = "Domain"
  profile                  = "Domain"
}
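Aliased providers can also be handed down into child modules, so a whole module can be pinned to one account. A minimal sketch of this pattern (the module name and source path are illustrative placeholders, not part of the repo):

```hcl
# Hypothetical example: passing the aliased Domain provider into a child module.
# Inside the module, plain "aws" references then resolve to the Domain account.
module "dns" {
  source = "./modules/dns" # placeholder path

  providers = {
    aws = aws.Domain
  }
}
```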
Normally, you need a cloud.tf file with the provider blocks in each folder containing infrastructure, so that Terraform knows about your provider (e.g. AWS, Azure, GCP). Hence one ends up with the same file copied over and over across numerous folders. A convenient way to deal with this issue is to use symbolic links when initializing Terraform. In order to deploy each piece of infrastructure in this example, you will execute a bash file called start.sh. If you open the file you will find:
vi global/tf-state/start.sh
#!/bin/bash
rm -f cloud.tf
ln -s ../../global/providers/cloud.tf ./cloud.tf
terraform init
terraform plan
terraform apply -auto-approve
As you can see, this file establishes a symbolic link from the cloud.tf file located in global/providers into the current folder. This way, I have a single provider configuration located in global/providers that can be used throughout the infrastructure.
Terraform’s backend configuration
Now I will set up Terraform’s backend, which is defined with a single variable in backendname.tf. I will update the backend name to avoid naming conflicts:
cd global/tf-state/
vi backendname.tf # make sure you update the bucket and dynamodb names into a unique name
bash start.sh # at this point the backend is set up
If you open the start.sh file you will see how a symbolic link is established between the global/providers/cloud.tf file and the current folder where infrastructure is being deployed. Also, notice that a provider argument is included in every Terraform resource. For example, below is the file global/tf-state/bucket.tf, responsible for creating an S3 bucket for the backend:
vi global/tf-state/bucket.tf
resource "aws_s3_bucket" "terraform_state" {
  provider = aws.Infrastructure
  bucket   = local.aws_s3_bucket_bucket
  lifecycle {
    prevent_destroy = true
  }
}
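State buckets usually also enable versioning so that earlier state files can be recovered after a bad apply. A sketch of what that would look like next to the bucket above (this resource is not shown in the post, so treat it as an illustrative addition):

```hcl
# Enable versioning on the state bucket so previous state versions can be restored.
resource "aws_s3_bucket_versioning" "terraform_state" {
  provider = aws.Infrastructure
  bucket   = aws_s3_bucket.terraform_state.id

  versioning_configuration {
    status = "Enabled"
  }
}
```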
I achieved a backend defined by a single variable by means of the following trick: a local_file resource creates a file, backend.hcl, with the bucket and DynamoDB table names taken from a local variable called aws_s3_bucket_bucket. The resource is shown below:
vi global/tf-state/create-backend-file.tf
resource "local_file" "create-backend-file" {
  content  = <<EOF
bucket         = "${local.aws_s3_bucket_bucket}"
dynamodb_table = "${local.aws_s3_bucket_bucket}"
region         = "us-east-1"
encrypt        = "true"
EOF
  filename = "../../global/tf-state/backend.hcl"
}
At the same time, the unique backend name is saved as a variable in the variables folder so that it can be carried throughout the infrastructure without having to repeat the name. This is achieved by means of a local_file resource that saves the backend name as a variable in the variables folder:
vi global/tf-state/exportvariable-to-global-variables.tf
resource "local_file" "exportbackend-to-global-variables" {
  content  = <<EOF
variable "backendname" {
  default = "${local.aws_s3_bucket_bucket}"
}
EOF
  filename = "../../global/variables/backendname-var.tf"
}
You can learn more about variables in another post.
As a quick note: to set up Terraform’s backend, you need to create an S3 bucket to store the state file and a DynamoDB table to hold the state lock, so that the infrastructure code can be kept in source control while the state is shared remotely, and, for example, several users can safely work on the same folder. The provider = aws.Infrastructure argument tells Terraform to use account 2 to deploy the infrastructure. At the same time, I use prevent_destroy = true, so if you try to destroy the resource Terraform will return an error. At this point, we have the backend all set up.
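The lock table itself is a plain DynamoDB table whose hash key must be named exactly LockID, which is what the S3 backend expects. A sketch of such a resource (the resource name and billing mode are assumptions, not taken from the repo):

```hcl
# DynamoDB table used by the S3 backend for state locking.
# The hash key must be exactly "LockID" for the S3 backend to use it.
resource "aws_dynamodb_table" "terraform_lock" {
  provider     = aws.Infrastructure
  name         = local.aws_s3_bucket_bucket
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
```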
To be able to use a single variable to define the backend, I used a local_file resource that creates the backend file backend.hcl. This way both the DynamoDB table and the bucket are named according to backendname.tf, so that one variable defines the backend.
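Downstream folders then consume backend.hcl through Terraform’s partial backend configuration. A sketch of how this typically looks (the post does not show these files, so the state key and file layout here are assumptions):

```hcl
# backend.tf in a downstream folder: an empty-ish S3 backend block whose
# bucket, table, and region are supplied at init time from the generated file:
#   terraform init -backend-config=../../global/tf-state/backend.hcl
terraform {
  backend "s3" {
    key = "vpcs/zone/terraform.tfstate" # per-folder state key (illustrative)
  }
}
```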
Using a Domain profile to deploy a hosted zone
Before deploying the hosted zone, we will define all relevant variables:
cd global/variables
bash start.sh # at this point all variables are defined
Now we are ready to deploy the hosted zone in AWS account 1 by simply entering the vpcs/zone folder and executing the bash start.sh file.
cd vpcs/zone
bash start.sh # at this point the hosted zone is deployed
If you access your AWS account 1 you will see the newly created hosted zone in Route53/Hosted Zones.
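The hosted zone resource is where the Domain alias comes into play. It presumably looks along these lines (a sketch; the resource name and the remote-state output name are assumptions):

```hcl
# Hosted zone deployed into AWS account 1 via the Domain profile.
resource "aws_route53_zone" "domain" {
  provider = aws.Domain
  name     = data.terraform_remote_state.variables.outputs.domain
}
```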
Using an Infrastructure profile to deploy an ACM certificate
Now we can deploy the certificate in AWS account 2, again simply by entering the vpcs/certs folder and executing the bash start.sh file.
cd vpcs/certs
bash start.sh # at this point the certificate is deployed
If you now access your AWS account 2 you will see the newly created certificate in Certificate Manager/List certificates. By inspecting Terraform’s files you can see how the provider argument was used, for example in the acm_certificate.tf file.
vi vpcs/certs/acm_certificate.tf
resource "aws_acm_certificate" "domain" {
  provider                  = aws.Infrastructure
  domain_name               = data.terraform_remote_state.variables.outputs.domain
  validation_method         = "DNS"
  subject_alternative_names = ["www.${data.terraform_remote_state.variables.outputs.domain}"]
  lifecycle {
    create_before_destroy = true
  }
}
As you can see, this resource will be deployed in AWS account 2. However, by inspecting the route53_record.tf file you can see that the CNAME records needed for certificate validation are created in AWS account 1, the account that hosts the domain. As a note, a CNAME record is just a DNS record that maps an alias to a canonical domain name, allowing multiple names to point to the same location.
vi vpcs/certs/route53_record.tf
resource "aws_route53_record" "validation" {
  # Deployed via the Domain alias: validation records live in the
  # account that hosts the zone (account 1).
  provider = aws.Domain
  for_each = {
    for dvo in aws_acm_certificate.domain.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      record = dvo.resource_record_value
      type   = dvo.resource_record_type
    }
  }
  name    = each.value.name
  records = [each.value.record]
  ttl     = 60
  type    = each.value.type
  # zone_id is assumed to be exported by the shared variables remote state.
  zone_id = data.terraform_remote_state.variables.outputs.zone_id
}
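To make Terraform wait until ACM has actually seen the DNS records and issued the certificate, an aws_acm_certificate_validation resource is typically added alongside. A sketch under the assumption that the validation CNAME records are defined in a resource named aws_route53_record.validation (a name not shown in the post):

```hcl
# Runs in account 2 and blocks until the certificate is issued.
# "aws_route53_record.validation" is an assumed resource name for the
# CNAME validation records created in the Domain account.
resource "aws_acm_certificate_validation" "domain" {
  provider                = aws.Infrastructure
  certificate_arn         = aws_acm_certificate.domain.arn
  validation_record_fqdns = [for record in aws_route53_record.validation : record.fqdn]
}
```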
