
Project Configuration

Introduction

Project Configuration is a core Atlas concept that lets you describe how Atlas should manage your database schema using a project file (atlas.hcl) that defines environments and input variables. When using the Atlas Terraform Provider, you can inject a project configuration to customize the provider's behavior.
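
For example, a minimal project file (conventionally named atlas.hcl) defines one or more environments. The environment name and connection URL below are placeholders:

env "local" {
  url = "postgres://user:pass@localhost:5432/db?sslmode=disable"
}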

How it works

The atlas_schema resource in the Terraform provider accepts a config attribute where you can provide the content of an Atlas project file. See the Atlas project files documentation for full details.
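
As a minimal sketch (the file names here are illustrative), you can load a project file from disk and inject it into the resource:

resource "atlas_schema" "example" {
  env_name = "tf"               # which env block from the project file to use
  config   = file("atlas.hcl")  # contents of the Atlas project file
  hcl      = file("schema.hcl") # the desired schema state
}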

Guide: Using Variables for Multiple Environments

In this guide, we will demonstrate how to use a custom configuration to manage different database connection details for multiple environments (e.g., staging vs. production) using a single Terraform configuration.

Step 1: Set up the project

First, set up your Terraform project. Create a main.tf file with the provider configuration:

main.tf
terraform {
  required_providers {
    atlas = {
      source  = "ariga/atlas"
      version = "~> 0.9.7"
    }
  }
}

provider "atlas" {
  dev_url = "docker://postgres/15"
}

Initialize the project by running:

terraform init

Step 2: Provision local databases

Next, we will spin up two PostgreSQL databases to simulate a staging and production environment. This requires Docker.

docker run --name postgres-staging -p 5432:5432 -e POSTGRES_PASSWORD=pass -d postgres:15
docker run --name postgres-prod -p 5433:5432 -e POSTGRES_PASSWORD=pass -d postgres:15
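
Before continuing, you can confirm that both containers are running and listening on the expected ports:

docker ps --filter "name=postgres-" --format "table {{.Names}}\t{{.Ports}}"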

Step 3: Define the atlas_schema resource

Now, update your main.tf to define the atlas_schema resource with a custom configuration that uses variables.

main.tf
terraform {
  required_providers {
    atlas = {
      source  = "ariga/atlas"
      version = "~> 0.9.7"
    }
  }
}

provider "atlas" {
  dev_url = "docker://postgres/15"
}

locals {
  # Define connection details for each environment
  environments = {
    staging = {
      url = "postgres://postgres:pass@localhost:5432/postgres?sslmode=disable"
    }
    production = {
      url = "postgres://postgres:pass@localhost:5433/postgres?sslmode=disable"
    }
  }
  # Atlas project configuration
  atlas_config = <<-EOT
    variable "url" {
      type = string
    }
    env "tf" {
      url = var.url
    }
  EOT
}

resource "atlas_schema" "multi_env" {
  env_name = "tf"
  config   = local.atlas_config
  variables = jsonencode({
    url = local.environments[terraform.workspace].url
  })
  hcl = <<-EOT
    schema "public" {}
    table "users" {
      schema = schema.public
      column "id" {
        type = int
        null = false
      }
      primary_key {
        columns = [column.id]
      }
    }
  EOT
}

In this configuration:

  • We use terraform.workspace to dynamically select the environment; see the workspace check after this list.
  • The atlas_config local variable holds the content of our atlas.hcl file. It declares a url variable.
  • The atlas_schema resource uses jsonencode to pass the correct URL to the Atlas configuration based on the active workspace.
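
Since the connection URL is chosen by the active workspace, you can confirm which workspace is selected before applying:

terraform workspace show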

Step 4: Create and apply to staging

Let's create and switch to the staging workspace.

terraform workspace new staging
terraform apply -auto-approve

The output should look like this:

Terraform used the selected providers to generate the following execution plan. Resource actions are
indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # atlas_schema.multi_env will be created
  + resource "atlas_schema" "multi_env" {
      + config = <<-EOT
            variable "url" {
              type = string
            }
            env "tf" {
              url = var.url
            }
        EOT
      ...
    }
...

Plan: 1 to add, 0 to change, 0 to destroy.
atlas_schema.multi_env: Creating...
atlas_schema.multi_env: Creation complete after 2s [id=7ab6d901-9df0-a60a-9d86-9e66a57ef105]

│ Warning: Atlas Plan
│
│   with atlas_schema.multi_env,
│   on main.tf line 38, in resource "atlas_schema" "multi_env":
│   38: resource "atlas_schema" "multi_env" {
│
│ The following SQL statements will be executed:
│
│ CREATE TABLE "public"."users" (
│   "id" integer NOT NULL,
│   PRIMARY KEY ("id")
│ );

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Verify that the users table was created in the staging database:

docker exec -it postgres-staging psql -U postgres -d postgres -c "\dt"

The output should look like this:

        List of relations
 Schema | Name  | Type  |  Owner
--------+-------+-------+----------
 public | users | table | postgres
(1 row)

Step 5: Apply to production

Now, let's deploy the same schema to the production environment.

terraform workspace new production
terraform workspace select production
terraform apply -auto-approve

Verify that the users table was created in the production database:

docker exec -it postgres-prod psql -U postgres -d postgres -c "\dt"

You have now managed two different environments using a single, parameterized Terraform configuration.
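
When you are finished, you can destroy the schema in each workspace and remove the local containers:

terraform workspace select staging
terraform destroy -auto-approve
terraform workspace select production
terraform destroy -auto-approve
docker rm -f postgres-staging postgres-prod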