Splitting a Big tfstate into Multiple Environments with a Remote Backend

Splitting tfstate files into multiple environments is very useful when your project has grown so much that you have one very big tfstate file and you need to monitor which resources are failing and which are deploying successfully.

It's hard to track resources precisely when, for example, you run your own MongoDB in your dev cluster but want a managed service for your production environment.

Let us consider the file structure below. Currently, all your backend configuration is stored inside mono/provider.tf. Now your project is growing and you need to separate your tfstate out into multiple environments.

For this example, we use the S3 backend, but you may use any remote backend; the process is the same.


envs ~/devops/learningtf/envs
├── dev
│   └── provider.tf
├── mono
│   └── provider.tf
└── prod
    └── provider.tf

The code inside the mono/provider.tf is as follows.

mono/provider.tf

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
  backend "s3" {
    bucket = "test-split-backend"
    key    = "dev"
    region = "us-east-1"

    dynamodb_table = "dev-test-split-backend"
  }
}

# Configure the AWS Provider
provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "dev-bucket" {
  bucket = "test-split-dev"
  acl    = "private"
}

resource "aws_s3_bucket" "qa-bucket" {
  bucket = "test-split-qa"
  acl    = "private"
}

Consider the two buckets dev-bucket and qa-bucket. Now, you want to move each of these resources into its own environment.

Let's make a scaffold with the provider and backend configuration for the dev environment. Note that the backend key and DynamoDB table differ from mono's, so dev gets its own state file and its own lock table.

dev/provider.tf

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
  backend "s3" {
    bucket = "test-split-backend"
    key    = "dev-split/terraform.tfstate"
    region = "us-east-1"

    dynamodb_table = "dev-test-split-backend-final"
  }
}

Now, run terraform init to configure the remote backend, followed by terraform apply, so that Terraform creates the state file and lock entries on the remote backend.
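The steps above, starting from the envs directory (this assumes the S3 bucket and DynamoDB table referenced in the backend block already exist):

cd dev
terraform init    # configures the S3 backend and the DynamoDB lock table
terraform apply   # creates an empty state on the remote backend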

If you run terraform state list, the output should be empty, as we haven't added any resources to this environment yet.

Now, the first step is to dump the tfstate file, which is very easy: just one command.

cd dev && terraform state pull > dev.tfstate

This pulls the current state of the dev environment, which is not literally empty (it still contains metadata such as the state version, serial, and lineage) but tracks no resources.
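A freshly pulled dev.tfstate should look roughly like this; the terraform_version, serial, and lineage values will differ in your setup:

{
  "version": 4,
  "terraform_version": "0.14.0",
  "serial": 1,
  "lineage": "3f2a1b...",
  "outputs": {},
  "resources": []
}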

Now, you're ready to move the resource from mono into dev environment.

First of all, identify which resources or modules you want to move by looking at the .tf files.
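You can also list the addresses Terraform is tracking directly from the state; with the example configuration above, mono should show both buckets:

cd mono
terraform state list
# aws_s3_bucket.dev-bucket
# aws_s3_bucket.qa-bucket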

After identifying the resources, move them into dev.tfstate using this simple command.

cd mono
terraform state mv -state-out=../dev/dev.tfstate aws_s3_bucket.dev-bucket aws_s3_bucket.dev-bucket

This removes the resource from mono's state and moves it into dev.tfstate, which is not yet synced with the dev remote state. Remember to also move the corresponding resource block out of mono's .tf files and into dev's; otherwise the next plan in mono will try to recreate the bucket, and the plan in dev won't know about it.
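The two addresses in the command are the source and the destination; they match here only because we keep the same resource name. You can also rename the resource while moving it, for example (main-bucket is a hypothetical new name, which must then match the resource block in dev's .tf files):

cd mono
terraform state mv -state-out=../dev/dev.tfstate aws_s3_bucket.dev-bucket aws_s3_bucket.main-bucket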

Now, the last thing you need to do is push the dev tfstate to the remote backend, which is as easy as the following command.

cd dev
terraform state push dev.tfstate

Now, if you run terraform state list in each environment, you should see the difference: the bucket no longer shows up in mono, but it does show up in dev.

You can confirm there's no drift by running terraform plan inside the dev environment.
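If the resource block was copied into dev's .tf files correctly, the plan should report no changes; the exact wording varies by Terraform version, but it will look roughly like:

cd dev
terraform plan
# ...
# No changes. Infrastructure is up-to-date.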