Understanding State
State is the mechanism that makes Terraform's declarative model work in practice. Without it, Terraform would have no way to know whether the resources in your configuration already exist, what their current attribute values are, or which real cloud objects they correspond to. Every successful Terraform workflow depends on a healthy, accurate state file — and many of the ways Terraform can go wrong come down to state problems.
What Is State?
Terraform state is a mapping between the resources declared in your configuration and the real infrastructure objects they represent. It is stored in a JSON file — typically named terraform.tfstate — and updated on every successful terraform apply.
When you run terraform plan, Terraform does three things:
- Reads your .tf configuration — this is the desired state.
- Reads the state file — this records what Terraform last created or modified.
- Calls provider APIs to refresh — this retrieves the actual current state of resources.
The plan is the diff between desired state and actual state. Without the state file, step 2 is missing — Terraform cannot compute what changed, and cannot safely determine what actions to take.
State records attributes that only exist after creation: the ARN of an IAM role, the IP address of a VM, the connection string of a database. These values are not in your configuration — they are assigned by the cloud provider and stored in state so other resources can reference them.
State File Anatomy
The state file is plain JSON. You should never edit it by hand, but understanding its structure helps when debugging:
{
"version": 4,
"terraform_version": "1.5.7",
"serial": 12,
"lineage": "a1b2c3d4-...",
"outputs": {
"bucket_arn": {
"value": "arn:aws:s3:::my-data-bucket",
"type": "string"
}
},
"resources": [
{
"mode": "managed",
"type": "aws_s3_bucket",
"name": "data",
"provider": "provider[\"registry.terraform.io/hashicorp/aws\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"bucket": "my-data-bucket",
"arn": "arn:aws:s3:::my-data-bucket",
"id": "my-data-bucket",
"region": "us-east-1",
"tags": {}
}
}
]
}
]
}
Key fields:
| Field | Purpose |
|---|---|
| serial | Monotonically increasing counter. Backends use this to detect concurrent writes. |
| lineage | UUID identifying this state's lineage. Prevents mixing state files across projects. |
| resources | Array of all managed resources with their full attribute values. |
| outputs | All output values from the last apply. |
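You can inspect these fields from the command line with jq — a read-only query, so it doesn't violate the never-edit-by-hand rule. The sample file below is illustrative; point jq at your real terraform.tfstate to do the same:

```shell
# Write a minimal illustrative state file, then query it with jq.
cat > sample.tfstate <<'EOF'
{
  "version": 4,
  "serial": 12,
  "lineage": "a1b2c3d4",
  "resources": [
    {"mode": "managed", "type": "aws_s3_bucket", "name": "data"}
  ]
}
EOF

# The bookkeeping fields
jq '{version, serial, lineage}' sample.tfstate

# Resource addresses (prefer `terraform state list` in practice)
jq -r '.resources[] | "\(.type).\(.name)"' sample.tfstate
```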
Why State Matters
Mapping configuration to real resources
Cloud resources have IDs assigned by the provider (an EC2 instance ID like i-0abc123, an S3 bucket name). State records the binding between aws_instance.web in your config and i-0abc123 in AWS. Without this binding, Terraform would try to create a new instance every time you run apply.
Storing computed attributes
Many resource attributes are only known after creation — the public IP of a VM, the ARN of a role, the DNS name of a load balancer. State stores these so they can be referenced by other resources via expressions like aws_lb.main.dns_name.
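For example, a DNS record can consume the load balancer's computed name; Terraform resolves aws_lb.main.dns_name from state once the load balancer exists. A sketch — the variables and resource arguments here are illustrative:

```hcl
resource "aws_lb" "main" {
  name               = "app-lb"
  load_balancer_type = "application"
  subnets            = var.public_subnet_ids
}

# dns_name is unknown until AWS creates the load balancer; after
# apply it lives in state and can be referenced like any other value.
resource "aws_route53_record" "app" {
  zone_id = var.zone_id
  name    = "app.example.com"
  type    = "CNAME"
  ttl     = 300
  records = [aws_lb.main.dns_name]
}
```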
Performance
Refreshing the live state of every resource on every plan can be slow for large configurations. Modern Terraform refreshes state as part of planning by default; the -refresh=false flag skips the refresh and plans against cached state, while -refresh-only restricts a plan or apply to syncing state with reality.
Detecting drift
When someone modifies infrastructure outside Terraform (manual console changes, other automation), the actual state diverges from what's in state. Running terraform plan -refresh-only shows this drift without making any changes.
Local vs Remote State
By default, Terraform stores state in terraform.tfstate in your working directory. This is fine for learning and solo projects, but fails in team environments:
| | Local state | Remote state |
|---|---|---|
| Concurrent access | Race condition — two applies corrupt state | Locking prevents concurrent writes |
| Sharing | Must commit to Git (dangerous — contains secrets) | Stored centrally, accessible to the whole team |
| History | Manual backup only | Versioned automatically (S3, GCS) |
| Encryption | Plaintext on disk | Encrypted at rest by the backend |
For any real project — even a side project you might later work on from another machine — use remote state from the start. Migrating from local to remote state later is possible but adds friction.
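The migration itself is mostly a re-initialization: after adding a backend block (covered in the next section), Terraform offers to copy the existing local state up. A sketch:

```shell
# With a backend block newly added to the terraform block:
terraform init -migrate-state

# Terraform detects the existing local terraform.tfstate and
# prompts for confirmation before copying it into the new backend.
```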
Remote Backends
A backend defines where and how Terraform stores its state. Configure it in the terraform block:
S3 + DynamoDB (AWS)
The most common backend for AWS users. S3 stores the state with versioning enabled; DynamoDB provides state locking:
terraform {
backend "s3" {
bucket = "my-terraform-state-prod"
key = "infra/terraform.tfstate"
region = "us-east-1"
encrypt = true
dynamodb_table = "terraform-state-lock"
}
}
The DynamoDB table needs a primary key named LockID of type String. Create it once:
aws dynamodb create-table \
--table-name terraform-state-lock \
--attribute-definitions AttributeName=LockID,AttributeType=S \
--key-schema AttributeName=LockID,KeyType=HASH \
--billing-mode PAY_PER_REQUEST
GCS (Google Cloud)
terraform {
backend "gcs" {
bucket = "my-terraform-state"
prefix = "prod/state"
}
}
GCS backends include built-in locking — no separate table needed.
Azure Blob Storage
terraform {
backend "azurerm" {
resource_group_name = "terraform-state-rg"
storage_account_name = "tfstateaccountprod"
container_name = "tfstate"
key = "prod.terraform.tfstate"
}
}
HCP Terraform (formerly Terraform Cloud)
terraform {
cloud {
organization = "my-org"
workspaces {
name = "prod-infra"
}
}
}
HCP Terraform is a managed service that includes remote state, locking, a UI, run history, and policy enforcement. It has a free tier for small teams.
There is a chicken-and-egg problem: you need Terraform to create the S3 bucket, but you need the S3 bucket to store Terraform state. The standard approach is to create the state bucket and DynamoDB table with a separate bootstrap configuration using local state, then configure the main project's backend to use those resources.
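A minimal bootstrap configuration, run once with local state, might look like this — the names mirror the S3 backend example above and are otherwise illustrative:

```hcl
# bootstrap/main.tf — creates the backend resources the main
# project's backend block will point at.
provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "state" {
  bucket = "my-terraform-state-prod"
}

# Versioning lets you recover earlier state revisions.
resource "aws_s3_bucket_versioning" "state" {
  bucket = aws_s3_bucket.state.id
  versioning_configuration {
    status = "Enabled"
  }
}

# Lock table: the S3 backend requires a LockID string hash key.
resource "aws_dynamodb_table" "lock" {
  name         = "terraform-state-lock"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
```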
State Locking
When Terraform starts a plan or apply operation that could modify state, it acquires a lock. Any other plan or apply running concurrently will fail immediately with a lock error rather than creating a race condition.
Error: Error acquiring the state lock
Error message: ConditionalCheckFailedException: The conditional request failed
Lock Info:
ID: abc123-...
Path: my-bucket/prod/terraform.tfstate
Operation: OperationTypePlan
Who: alice@laptop.local
Version: 1.5.7
Created: 2026-05-05 14:32:01.123 +0000 UTC
If a lock gets stuck (e.g. a process was killed mid-apply), you can force-unlock it — but only after verifying no other apply is genuinely in progress:
terraform force-unlock <LOCK_ID>
Force-unlocking while another apply is genuinely in progress will cause both to write state simultaneously, corrupting it. Always check with your team before running force-unlock.
Sensitive Values in State
State files can contain sensitive data — database passwords, API keys, TLS private keys. This happens because Terraform stores all resource attributes in state, including secrets passed into resource arguments.
Marking an output as sensitive prevents it from appearing in CLI output, but does not remove it from the state file:
output "db_password" {
value = random_password.db.result
sensitive = true # hides in CLI output, still in state
}
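The redaction is visible on the command line — and easy to bypass, which underlines that access to state is access to the secret:

```shell
terraform output db_password        # prints: <sensitive>
terraform output -raw db_password   # prints the actual value
```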
Since secrets end up in state, the state file itself must be treated as a secret:
- Enable encryption at rest on your backend (S3 with SSE-KMS, GCS with CMEK, etc.)
- Restrict access to the state bucket with IAM policies — not everyone who reads your code should read your state
- Never commit terraform.tfstate to Git
- Enable versioning on your state bucket so you can recover from accidental corruption
State Commands
The terraform state subcommands let you inspect and manipulate state without running an apply. Use them carefully — direct state manipulation bypasses the normal plan/apply review cycle.
List resources in state
terraform state list
# Example output:
aws_s3_bucket.data
aws_s3_bucket_versioning.data
aws_iam_role.app
aws_iam_role_policy_attachment.app
Show details for a resource
terraform state show aws_s3_bucket.data
# Example output:
# aws_s3_bucket.data:
resource "aws_s3_bucket" "data" {
arn = "arn:aws:s3:::my-data-bucket"
bucket = "my-data-bucket"
id = "my-data-bucket"
region = "us-east-1"
...
}
Move a resource in state
When you rename a resource in your configuration, Terraform sees the old address disappear and a new one appear, so it would plan to destroy and recreate the object. state mv renames the state entry to match, preserving the existing resource:
# Rename aws_s3_bucket.old_name → aws_s3_bucket.new_name
terraform state mv aws_s3_bucket.old_name aws_s3_bucket.new_name
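Terraform 1.1 and later can express the same rename declaratively with a moved block, so the change goes through normal plan/apply review instead of an imperative state command:

```hcl
moved {
  from = aws_s3_bucket.old_name
  to   = aws_s3_bucket.new_name
}
```

The block can stay in the configuration afterwards as a record of the refactor.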
Remove a resource from state
Removes a resource from Terraform's management without destroying it in the cloud. Useful when you want to abandon a resource or import it into a different configuration:
terraform state rm aws_s3_bucket.data
Import an existing resource
Bring an existing cloud resource under Terraform management by adding it to state:
# Add an existing S3 bucket to state
terraform import aws_s3_bucket.data my-existing-bucket-name
After importing, run terraform plan to see if your configuration matches the actual resource. You may need to update your .tf files to match the real resource's attributes.
Terraform 1.5 and later support an import block in configuration, letting you codify imports as part of your plan/apply workflow instead of running imperative CLI commands. This makes imports reviewable and repeatable.
import {
to = aws_s3_bucket.data
id = "my-existing-bucket-name"
}
Refresh state
Sync state with the actual state of resources without making any changes:
# Show what has drifted (read-only, makes no changes)
terraform plan -refresh-only
# Apply the refresh (updates state to match reality)
terraform apply -refresh-only
Key Takeaways
- State is Terraform's record of what it manages — the mapping between your config and real cloud resources.
- State enables idempotency: Terraform reads state to determine what already exists before deciding what to create, update, or destroy.
- Use remote state for any real project. Local state is unsuitable for teams — it has no locking and risks secrets in Git.
- Remote backends (S3+DynamoDB, GCS, Azure Blob, HCP Terraform) provide locking, versioning, and encryption.
- State often contains sensitive values — encrypt the backend and restrict access via IAM. Never commit state to Git.
- terraform state list/show/mv/rm let you inspect and manipulate state, but use them carefully — they bypass the plan/apply review.