Terraform

Terraform Basics — What Is Terraform & IaC

● Beginner ⏱ 20 min read terraform

Before Terraform, standing up cloud infrastructure meant clicking through consoles, running ad-hoc scripts, or writing lengthy runbooks. The result was infrastructure that was hard to reproduce, impossible to review in a pull request, and prone to configuration drift — where what was running in production slowly diverged from what anyone thought was there. Terraform solves this by letting you describe the infrastructure you want in a plain text file, and then making it so.

What Is Infrastructure as Code?

Infrastructure as Code (IaC) means managing your servers, networks, databases, and other infrastructure resources using configuration files — exactly as you would manage application code. Instead of clicking through a cloud console or running manual commands, you declare what resources you want, commit the declaration to Git, and let a tool reconcile reality with your desired state.

💡
Declarative vs Imperative

IaC tools like Terraform are declarative — you describe the desired end-state, not the steps to get there. The tool figures out what needs to change. This is different from imperative scripts (Bash, Ansible tasks) where you specify every step. Declarative IaC is idempotent: running it twice has the same result as running it once.
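To make the contrast concrete, here is a minimal sketch (using the hashicorp/local provider, which manages files on disk): the block states only the desired end-state — a file with this exact content exists. Applying it a second time changes nothing, because reality already matches the declaration.

resource "local_file" "greeting" {
  filename = "greeting.txt"
  content  = "hello\n"
}

An imperative script would instead spell out the steps (check if the file exists, compare contents, overwrite if different); the declarative block leaves all of that to the tool.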

The benefits of IaC are the same benefits you get from treating any system with software engineering discipline:

Version control: Every change is a commit. You can see who changed what, when, and why — and roll back if needed.
Reproducibility: Spin up an identical staging environment from the same code that runs production.
Code review: Infrastructure changes go through pull requests, just like application code.
Drift detection: Run a plan at any time to see if reality has drifted from your declared configuration.
Automation: Integrate with CI/CD to provision infrastructure on every merge to main.

Why Terraform?

Several IaC tools exist — AWS CloudFormation, Azure Bicep, Pulumi, Ansible. Terraform has become the most widely adopted for a few concrete reasons:

Multi-cloud and provider-agnostic

Terraform uses a provider model. A provider is a plugin that teaches Terraform how to talk to a specific platform's API — AWS, GCP, Azure, Kubernetes, GitHub, Cloudflare, Datadog, and thousands more. You can manage resources across all of these with the same tool, the same workflow, and the same language. CloudFormation only works with AWS; Bicep only with Azure.

Declarative HCL

HashiCorp Configuration Language (HCL) is designed to be readable. It is not YAML (no indentation hell), not JSON (no quotes everywhere), and not a general-purpose programming language (no accidental complexity). It is purpose-built for describing infrastructure, and most engineers can read an unfamiliar Terraform file and understand what it does within minutes.

Mature state management

Terraform maintains a state file that maps your configuration to the real resources in your cloud. This state is what makes idempotent operations possible — Terraform knows what already exists, so it only creates, updates, or destroys what needs to change.

Large ecosystem

The Terraform Registry hosts thousands of providers and reusable modules. If you need to provision an RDS instance with a standard VPC setup, there is almost certainly a community module that encodes best practices — you just pass variables.

🧭
OpenTofu

In 2023 HashiCorp changed Terraform's license from MPL to BSL, making it no longer open source. OpenTofu is the community-maintained open-source fork, hosted by the Linux Foundation. It is API-compatible with Terraform and is a drop-in replacement for most use cases. Everything in these guides applies equally to OpenTofu.

How Terraform Works

Terraform sits between your configuration files and the cloud APIs. When you run terraform apply, Terraform:

  1. Reads your .tf configuration files and builds a resource graph.
  2. Reads its state file to understand what already exists.
  3. Calls provider APIs to compare the desired state (your config) with the actual state (what's running).
  4. Computes a diff — a plan of what to create, update, or destroy.
  5. Executes the plan, calling provider APIs to make the changes.
  6. Updates the state file to reflect the new reality.
💡
Providers are plugins

Providers are separate binaries downloaded during terraform init. They are not part of the Terraform core binary. When you declare required_providers, Terraform downloads the right provider version from the registry and caches it in a .terraform/ directory. This is why you should add .terraform/ to your .gitignore.

The resource graph

Terraform builds a directed acyclic graph (DAG) of all your resources and their dependencies. Resources with no dependencies on each other can be provisioned in parallel, which is why a terraform apply that creates 20 independent S3 buckets is much faster than one that does 20 serial API calls. Resources that do depend on each other (e.g. a subnet that must exist before an EC2 instance) are created in the correct order automatically.
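A sketch of how those dependency edges arise (assuming an aws_vpc.main resource is declared elsewhere; the AMI id is a placeholder): because the instance references the subnet's id, Terraform knows the subnet must exist first. Resources with no such references between them are created in parallel.

resource "aws_subnet" "app" {
  vpc_id     = aws_vpc.main.id   # depends on the VPC
  cidr_block = "10.0.1.0/24"
}

resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890"   # placeholder AMI id
  instance_type = "t3.micro"
  subnet_id     = aws_subnet.app.id         # implicit dependency: subnet is created first
}

No explicit ordering is written anywhere — the reference itself is the dependency edge.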

HCL Syntax

Every Terraform configuration is made up of blocks. A block has a type, optional labels, and a body containing arguments:

<BLOCK_TYPE> "<LABEL>" "<LABEL>" {
  argument = value
}

Here are the block types you will encounter most often:

terraform

Sets the Terraform version constraint and declares which providers the configuration requires:

terraform {
  required_version = ">= 1.5"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider

Configures a provider — typically credentials and region:

provider "aws" {
  region = "us-east-1"
}

Providers can read credentials from environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY), shared credentials files, or IAM instance profiles. Hard-coding credentials in .tf files is a security anti-pattern — never do it.

resource

The most common block. Declares a real infrastructure object you want Terraform to manage:

resource "aws_s3_bucket" "my_bucket" {
  bucket = "my-unique-bucket-name-2026"

  tags = {
    Environment = "dev"
    Team        = "platform"
  }
}

The first label ("aws_s3_bucket") is the resource type — it comes from the provider. The second label ("my_bucket") is the local name you use to refer to this resource elsewhere in your configuration. Together they form the resource's address: aws_s3_bucket.my_bucket.
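That address is how other resources refer to it. As a sketch, enabling versioning on the bucket above uses the AWS provider's separate versioning resource and references the bucket by its address:

resource "aws_s3_bucket_versioning" "my_bucket" {
  bucket = aws_s3_bucket.my_bucket.id   # reference by resource address

  versioning_configuration {
    status = "Enabled"
  }
}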

variable

Declares an input parameter so the same configuration can be used for different environments:

variable "environment" {
  description = "Deployment environment (dev, staging, prod)"
  type        = string
  default     = "dev"
}

Reference it in other blocks with var.environment. Override the default at runtime with -var="environment=prod", a .tfvars file, or an environment variable prefixed with TF_VAR_.
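For example, per-environment values can live in a .tfvars file (the prod.tfvars name here is just a convention) and be selected at runtime:

# prod.tfvars (pass with: terraform apply -var-file="prod.tfvars")
environment = "prod"

Equivalently, exporting TF_VAR_environment=prod before running plan or apply sets the same variable.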

output

Exposes values after an apply — useful for passing IDs between configurations or displaying connection strings:

output "bucket_arn" {
  value       = aws_s3_bucket.my_bucket.arn
  description = "ARN of the S3 bucket"
}

locals

Named expressions computed once and reused — like constants or computed values:

locals {
  common_tags = {
    Project     = "learniac"
    Environment = var.environment
    ManagedBy   = "terraform"
  }
}
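Reference them with the local. prefix. A sketch of reusing the tag map, using the built-in merge() function to add a per-resource tag on top of the shared ones (the bucket name is illustrative):

resource "aws_s3_bucket" "logs" {
  bucket = "my-log-bucket-2026"                         # illustrative name
  tags   = merge(local.common_tags, { Name = "logs" })  # shared tags plus one extra
}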

module

References a reusable configuration package (local directory or registry module):

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "my-vpc"
  cidr = "10.0.0.0/16"
}
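Modules expose outputs that you reference as module.<name>.<output>. For instance, assuming the VPC module above exposes a private_subnets output (the community VPC module does; the AMI id is a placeholder):

resource "aws_instance" "app" {
  ami           = "ami-0abcdef1234567890"        # placeholder AMI id
  instance_type = "t3.micro"
  subnet_id     = module.vpc.private_subnets[0]  # output from the module
}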
🧭
File organization

Terraform loads all .tf files in a directory together. Convention is to split configuration into main.tf (resources), variables.tf (variable declarations), outputs.tf (output declarations), and providers.tf (provider configuration). This is convention, not a requirement — a single main.tf is fine for small configurations.

Core Workflow

Every Terraform operation follows a three-command cycle: Write → Plan → Apply.

terraform init

Run once per working directory (and again if you add providers or change backend configuration). It downloads provider plugins and configures the backend:

$ terraform init

Initializing the backend...
Initializing provider plugins...
- Installing hashicorp/aws v5.31.0...
- Installed hashicorp/aws v5.31.0 (signed by HashiCorp)

Terraform has been successfully initialized!
💡
Commit .terraform.lock.hcl, ignore .terraform/

The lock file ensures everyone on the team uses the same provider versions. Commit it. The .terraform/ directory contains downloaded binaries — it is large and should be in .gitignore.

terraform plan

Shows a diff of what Terraform would do, without making any changes. This is your safety net — read the plan output carefully before applying:

$ terraform plan

Terraform will perform the following actions:

  # aws_s3_bucket.my_bucket will be created
  + resource "aws_s3_bucket" "my_bucket" {
      + bucket = "my-unique-bucket-name-2026"
      + id     = (known after apply)
      + tags   = {
          + "Environment" = "dev"
          + "Team"        = "platform"
        }
      ...
    }

Plan: 1 to add, 0 to change, 0 to destroy.

The symbols in the plan output mean:

Symbol  Action
+       Create a new resource
-       Destroy an existing resource
~       Update in-place (change an attribute)
-/+     Destroy and recreate (some changes force replacement)
⚠️
Watch for -/+ (destroy and recreate)

Some resource attributes are immutable — changing them forces the resource to be destroyed and recreated. For stateful resources like databases or Kubernetes namespaces, this means data loss. Always review the plan for -/+ symbols before applying to production.
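One guardrail worth knowing: a lifecycle block can make Terraform refuse to destroy a resource at all, so an accidental replacement fails the plan instead of deleting data. A sketch (the database resource and its elided arguments are illustrative):

resource "aws_db_instance" "main" {
  # ...engine, instance_class, and other arguments elided...

  lifecycle {
    prevent_destroy = true   # any plan that would destroy this resource errors out
  }
}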

terraform apply

Executes the plan. By default it shows the plan again and prompts for confirmation:

$ terraform apply

...plan output...

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_s3_bucket.my_bucket: Creating...
aws_s3_bucket.my_bucket: Creation complete after 3s [id=my-unique-bucket-name-2026]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Use terraform apply -auto-approve in CI/CD pipelines (after plan has been reviewed in a prior step).
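A common pattern is to save the reviewed plan to a file and apply exactly that file, so nothing can change between review and apply (a sketch; the tfplan filename is arbitrary):

# Plan stage: write the plan to a file
terraform plan -out=tfplan

# Apply stage, after review/approval: apply exactly that plan.
# A saved plan file does not prompt, so -auto-approve is unnecessary.
terraform apply tfplan

If the configuration or remote state has changed since the plan was saved, the apply fails rather than silently doing something different from what was reviewed.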

terraform destroy

Destroys all resources managed by the current configuration. Equivalent to running terraform apply with every resource marked for deletion. Use with care in production.

$ terraform destroy

Plan: 0 to add, 0 to change, 1 to destroy.

Do you really want to destroy all resources?
  Enter a value: yes

aws_s3_bucket.my_bucket: Destroying...
aws_s3_bucket.my_bucket: Destruction complete after 1s

Destroy complete! Resources: 1 destroyed.

State

Terraform's state file (terraform.tfstate) is the source of truth for what Terraform manages. It maps every resource in your configuration to a real object in your infrastructure — including the object's ID, current attribute values, and metadata Terraform needs to manage it.

Without state, Terraform would not know whether an S3 bucket called my-unique-bucket-name-2026 already exists or needs to be created. It would also not know the bucket's ARN — an attribute that is only assigned by AWS after creation, which you might need to pass to another resource.

Local state (default)

By default, state is stored in terraform.tfstate in your working directory. This works for learning and solo projects, but breaks down in teams: the file lives on one machine where teammates cannot see or update it, two people running apply at the same time can corrupt it because there is no locking, and any sensitive values it contains sit unencrypted on a local disk.

Remote state (recommended for teams)

Configure a backend to store state remotely with locking:

terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock"
    encrypt        = true
  }
}

With an S3 backend, state is stored in S3 (versioned, encrypted) and a DynamoDB table provides state locking — preventing concurrent applies from corrupting state. HCP Terraform (formerly Terraform Cloud) provides remote state with locking as a managed service.

⚠️
Never commit terraform.tfstate to Git

State files often contain sensitive values in plaintext — database passwords, API keys, private keys. Add *.tfstate and *.tfstate.backup to your .gitignore. Use a remote backend with encryption for team workflows.
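A minimal .gitignore for a Terraform repository, following the advice above:

# .gitignore
.terraform/
*.tfstate
*.tfstate.backup
# .tfvars files may also hold secrets; ignore them too if yours do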

Inspecting state

# List all resources in state
terraform state list

# Show details for a specific resource
terraform state show aws_s3_bucket.my_bucket

# Show all outputs
terraform output

Your First Configuration

Here is a minimal working Terraform configuration that creates a local file (no cloud account needed) to validate that everything is installed correctly:

# main.tf

terraform {
  required_providers {
    local = {
      source  = "hashicorp/local"
      version = "~> 2.4"
    }
  }
}

resource "local_file" "hello" {
  filename = "${path.module}/hello.txt"
  content  = "Hello from Terraform!\n"
}

output "file_path" {
  value = local_file.hello.filename
}

Run it:

# Initialize — downloads the local provider
terraform init

# Preview the plan
terraform plan

# Create the file
terraform apply

# Verify
cat hello.txt
# Hello from Terraform!

# Clean up
terraform destroy

With this working, the next step is configuring a cloud provider (AWS, GCP, or Azure) and provisioning real infrastructure — which the next guides cover.

Key Takeaways