# Remote State & Backends
Local state works fine when you are the only person running Terraform. The moment a second engineer joins — or a CI/CD pipeline enters the picture — local state creates race conditions, lost changes, and infrastructure drift. Remote state solves this by storing the state file in a shared, accessible location with locking so that only one operation runs at a time.
## Why Remote State?

Local state (`terraform.tfstate` on disk) has three fundamental problems in team environments:
| Problem | What happens |
|---|---|
| No sharing | Engineer A's state is on their laptop. Engineer B runs plan and sees a blank state — Terraform thinks nothing exists. |
| No locking | Two applies run simultaneously. Both read the current state, make changes, write back — the last write wins and the first is silently lost. |
| Sensitive data exposure | State files contain resource attributes including secrets. A local file on a laptop has no access controls or audit trail. |
Remote backends address all three: the state file lives in a shared storage system with access controls, and a locking mechanism prevents concurrent applies.
## Backend Configuration

Backends are configured in the `terraform` block, typically in `versions.tf` or `providers.tf`:

```hcl
terraform {
  required_version = ">= 1.6"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "prod/networking/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-state-locks"
  }
}
```
After changing backend configuration, run `terraform init` — Terraform will detect the change and offer to migrate existing state to the new backend.
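When scripting this step (for example in CI), `terraform init` takes flags that control how the backend change is handled non-interactively:

```shell
# Copy existing state to the newly configured backend; add -force-copy
# to answer the migration prompt with "yes" automatically
terraform init -migrate-state -force-copy

# Or: switch to the new backend configuration WITHOUT copying state
# (use when state already exists at the new location)
terraform init -reconfigure
```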
## S3 Backend (AWS)

The S3 backend is the most common choice for AWS users. It stores state in an S3 bucket and uses a DynamoDB table for locking.
### Create the prerequisites

```shell
# Create the S3 bucket
aws s3api create-bucket \
  --bucket my-terraform-state \
  --region us-east-1

# Enable versioning (allows rollback to previous state)
aws s3api put-bucket-versioning \
  --bucket my-terraform-state \
  --versioning-configuration Status=Enabled

# Enable server-side encryption
aws s3api put-bucket-encryption \
  --bucket my-terraform-state \
  --server-side-encryption-configuration '{
    "Rules": [{
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "AES256"
      }
    }]
  }'

# Block all public access
aws s3api put-public-access-block \
  --bucket my-terraform-state \
  --public-access-block-configuration \
    "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"

# Create DynamoDB table for state locking
aws dynamodb create-table \
  --table-name terraform-state-locks \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST \
  --region us-east-1
```
### Configure the backend

```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-state"
    key    = "prod/networking/terraform.tfstate"
    region = "us-east-1"

    # Encryption at rest
    encrypt = true

    # State locking via DynamoDB
    dynamodb_table = "terraform-state-locks"

    # Optional: assume an IAM role for cross-account access
    # role_arn = "arn:aws:iam::123456789012:role/TerraformStateRole"
  }
}
```
The `key` is the path within the bucket where the state file is stored. Use a convention like `<environment>/<component>/terraform.tfstate` — for example `prod/networking/terraform.tfstate` and `prod/compute/terraform.tfstate`. Each separate Terraform configuration must have a unique key, or they will overwrite each other's state.
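For example, a hypothetical compute configuration living alongside the networking one would declare the same bucket but its own key:

```hcl
# Backend for the prod/compute configuration — same bucket, distinct key
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "prod/compute/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-state-locks"
  }
}
```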
### IAM policy for the S3 backend

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-terraform-state/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::my-terraform-state"
    },
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:DeleteItem"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:*:table/terraform-state-locks"
    }
  ]
}
```
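Assuming the policy is saved as `terraform-state-policy.json`, it can be created and attached with the AWS CLI. The policy, role, and account names here are illustrative, not part of the original setup:

```shell
# Create a managed policy from the JSON document above
aws iam create-policy \
  --policy-name TerraformStateAccess \
  --policy-document file://terraform-state-policy.json

# Attach it to the role your CI pipeline assumes (illustrative names)
aws iam attach-role-policy \
  --role-name ci-terraform \
  --policy-arn arn:aws:iam::123456789012:policy/TerraformStateAccess
```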
## GCS Backend (GCP)

On Google Cloud, use a Cloud Storage bucket. The GCS backend supports state locking natively — no separate locking table is needed. Enable object versioning so you can roll back to previous state versions.

```shell
# Create the bucket
gsutil mb -l us-central1 gs://my-terraform-state

# Enable versioning
gsutil versioning set on gs://my-terraform-state
```

```hcl
terraform {
  backend "gcs" {
    bucket = "my-terraform-state"
    prefix = "prod/networking"
  }
}
```
Authentication uses Application Default Credentials (ADC). In CI/CD, use a service account key or Workload Identity Federation.
## Azure Blob Storage Backend

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "terraform-state-rg"
    storage_account_name = "tfstateaccount"
    container_name       = "tfstate"
    key                  = "prod.networking.tfstate"
  }
}
```
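The storage account and container must exist before `terraform init`. A sketch with the Azure CLI, reusing the names from the backend block (the region is an assumption):

```shell
# Resource group and storage account for state
az group create --name terraform-state-rg --location eastus

az storage account create \
  --name tfstateaccount \
  --resource-group terraform-state-rg \
  --sku Standard_LRS \
  --encryption-services blob

# Blob container that holds the state files
az storage container create \
  --name tfstate \
  --account-name tfstateaccount
```

The `azurerm` backend locks state using blob leases, so like GCS it needs no separate lock table.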
## HCP Terraform Backend

HashiCorp Cloud Platform (HCP) Terraform (formerly Terraform Cloud) provides a managed backend with a free tier. There is no infrastructure to maintain — HCP handles state storage and locking, and provides a UI for plan/apply review.

```hcl
terraform {
  cloud {
    organization = "my-org"

    workspaces {
      name = "prod-networking"
    }
  }
}
```
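The `workspaces` block can also select workspaces by tag instead of a fixed name, letting one configuration map to several HCP workspaces. The tag value here is illustrative:

```hcl
terraform {
  cloud {
    organization = "my-org"

    workspaces {
      # Matches every workspace carrying this tag;
      # choose one with `terraform workspace select`
      tags = ["networking"]
    }
  }
}
```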
Authenticate with:

```shell
terraform login
```

This opens a browser to generate an API token, which is stored in `~/.terraform.d/credentials.tfrc.json`. In CI/CD, set the `TF_TOKEN_app_terraform_io` environment variable instead.
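In a CI job, the token is typically injected from the runner's secret store. A minimal sketch (the token value is a placeholder):

```shell
# TF_TOKEN_app_terraform_io is the variable Terraform reads for
# credentials to app.terraform.io; the value here is a placeholder
export TF_TOKEN_app_terraform_io="example-api-token"

# Terraform picks the token up automatically; confirm it is set
echo "token set: ${TF_TOKEN_app_terraform_io:+yes}"
```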
## State Locking

When Terraform starts an operation that modifies state (`plan`, `apply`, `destroy`), it acquires a lock. Any concurrent operation that tries to acquire the lock fails with a message like:

```text
Error: Error acquiring the state lock

Error message: ConditionalCheckFailedException: The conditional request failed
Lock Info:
  ID:        8b2a3c1e-9d4f-4b2a-8c1e-3f9d4b2a8c1e
  Path:      my-terraform-state/prod/networking/terraform.tfstate
  Operation: OperationTypePlan
  Who:       engineer@corp.com
  Version:   1.6.0
  Created:   2026-05-05 14:32:11.123456789 +0000 UTC
  Info:
```
If a lock is stuck (e.g., the applying process crashed), release it manually:

```shell
# Force-unlock with the Lock ID from the error message
terraform force-unlock 8b2a3c1e-9d4f-4b2a-8c1e-3f9d4b2a8c1e
```
Only force-unlock a state when you are certain no Terraform operation is actively running. Force-unlocking while an apply is in progress can corrupt state. Check your CI/CD system and ask teammates before releasing a lock you did not create.
### Disabling locking (rarely appropriate)

```shell
# Skip locking for a single operation — dangerous in team environments
terraform apply -lock=false
```
## Partial Configuration

Hard-coding backend bucket names and regions in the `backend` block creates a problem: you cannot use variables there (backend configuration is loaded before variables are resolved). The solution is partial configuration — leave some fields out of the backend block and supply them at init time.

```hcl
# versions.tf — only the backend type is declared
terraform {
  backend "s3" {}
}
```

```hcl
# backend.hcl — the rest of the configuration (do not commit secrets)
bucket         = "my-terraform-state"
key            = "prod/networking/terraform.tfstate"
region         = "us-east-1"
encrypt        = true
dynamodb_table = "terraform-state-locks"
```

```shell
# Initialize with the backend config file
terraform init -backend-config=backend.hcl
```
This pattern lets you keep the backend configuration outside version control (or template it per environment), while the `backend "s3" {}` declaration stays in the committed code.
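This extends naturally to one file per environment. The staging values below are a hypothetical counterpart to the prod file, not part of the original setup:

```hcl
# backend-staging.hcl — hypothetical staging counterpart to backend.hcl
bucket         = "my-terraform-state"
key            = "staging/networking/terraform.tfstate"
region         = "us-east-1"
encrypt        = true
dynamodb_table = "terraform-state-locks"
```

Running `terraform init -backend-config=backend-staging.hcl` (or the prod file) then selects the environment at init time.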
## Migrating State

To move from local state to a remote backend, or from one backend to another:

```shell
# 1. Configure the new backend in versions.tf
# 2. Run init — Terraform detects the backend change
terraform init

# Terraform prompts:
#   Do you want to copy existing state to the new backend?
#   Pre-existing state was found in the previous backend. ...
#   Enter a value: yes

# 3. Verify the state was copied
terraform state list

# 4. Remove the old local state file
rm terraform.tfstate terraform.tfstate.backup
```
Before any backend migration, make a manual copy of your current state file. If the migration fails partway through, you want a known-good copy to restore from. Terraform will not overwrite the source state during migration, but a backup provides extra confidence.
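One way to take that backup, assuming the current backend is already initialized: `terraform state pull` prints the current state regardless of which backend holds it.

```shell
# Snapshot the current state to a timestamped local file
# before touching the backend configuration
terraform state pull > "pre-migration-$(date +%Y%m%d-%H%M%S).tfstate"
```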
## Sharing State Between Configurations

When infrastructure is split across multiple Terraform configurations (e.g., networking managed separately from compute), use the `terraform_remote_state` data source to read outputs from another configuration's state:

```hcl
# In the networking configuration (outputs.tf)
output "vpc_id" {
  value = aws_vpc.main.id
}

output "private_subnet_ids" {
  value = aws_subnet.private[*].id
}
```

```hcl
# In the compute configuration (main.tf)
data "terraform_remote_state" "network" {
  backend = "s3"

  config = {
    bucket = "my-terraform-state"
    key    = "prod/networking/terraform.tfstate"
    region = "us-east-1"
  }
}

resource "aws_instance" "app" {
  subnet_id              = data.terraform_remote_state.network.outputs.private_subnet_ids[0]
  vpc_security_group_ids = [aws_security_group.app.id]
  # ...
}
```
This creates a loose coupling: the compute configuration depends on networking outputs, but does not manage networking resources. If the network changes, the compute configuration reads the updated outputs on the next plan.
`terraform_remote_state` gives access to every output in the source state, including outputs marked `sensitive = true`. Ensure the IAM policy on your state bucket restricts read access to only the roles and CI systems that need it. Treat state bucket access like production secret access.
### State key organization

A common pattern for multi-component infrastructure:

```text
my-terraform-state/
├── global/
│   └── iam/terraform.tfstate
├── prod/
│   ├── networking/terraform.tfstate
│   ├── compute/terraform.tfstate
│   └── database/terraform.tfstate
└── staging/
    ├── networking/terraform.tfstate
    └── compute/terraform.tfstate
```
## Key Takeaways

- Local state causes race conditions and sharing problems in team environments. Move to remote state before adding a second engineer or CI/CD pipeline.
- The S3 backend (with DynamoDB locking) is the standard choice on AWS. Enable versioning and encryption on the bucket. Use unique key paths per configuration.
- GCS and Azure Blob backends work similarly for GCP and Azure. HCP Terraform provides a fully managed option with a free tier.
- State locking prevents concurrent applies. If a lock is stuck after a crash, verify no operation is running before force-unlocking.
- Use partial configuration (`-backend-config=backend.hcl`) to keep environment-specific backend settings outside the committed code.
- Migrate state with `terraform init` — it detects backend changes and offers to copy existing state. Always back up first.
- Share outputs across configurations with `terraform_remote_state`. Restrict state bucket access — remote state reads expose all outputs, including sensitive values.