Versioned Migrations with Atlas Terraform Provider
Versioned migrations provide a controlled, sequential approach to evolving your database schema. When using the Atlas Terraform Provider with versioned migrations, your schema changes are tracked using timestamped migration files. Each file contains the SQL statements needed to shift your database schema from one version to the next.
Getting Started
Before you begin, ensure you have:
- Installed Terraform
- Installed the Atlas CLI
- Configured Terraform with the Atlas Terraform Provider
- Docker installed and running
- Access to your target database
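If you don't have a target database available for this guide, one option is to start MySQL locally with Docker. This is a hedged example (the container name is illustrative); it matches the connection URL used in the examples below:

docker run --name shop-mysql -d -p 3306:3306 \
  -e MYSQL_ROOT_PASSWORD=pass \
  -e MYSQL_DATABASE=shop \
  mysql:8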
Configuring Terraform for Versioned Migrations
To use versioned migrations with Terraform, you'll need to configure both a data source and a resource:
terraform {
  required_providers {
    atlas = {
      source  = "ariga/atlas"
      version = "~> 0.9.7"
    }
  }
}

provider "atlas" {
  dev_url = "docker://mysql/8/myapp"
}

// Inspect the target database and load its state.
// This is used to determine which migrations to run.
data "atlas_migration" "shop" {
  dir = "migrations"
  url = "mysql://root:pass@localhost:3306/shop"
}

// Sync the state of the target database with the migrations directory.
resource "atlas_migration" "shop" {
  dir     = data.atlas_migration.shop.dir
  version = data.atlas_migration.shop.latest # Use "latest" to run all pending migrations
  url     = data.atlas_migration.shop.url
}
Configuration Parameters Explained
- dir: The path to your migration directory, with an optional format parameter.
- url: The connection URL to your target database.
- version: The target migration version to apply. Use latest to apply all pending migrations; if not set, the latest migration is applied.
- dev_url: The URL of a dev database used for validation and planning.
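Because the url parameter carries database credentials, you may prefer not to hardcode it. A minimal sketch of the same configuration with the URL passed in as a sensitive Terraform variable (the variable name db_url is illustrative, not part of the provider):

variable "db_url" {
  type      = string
  sensitive = true
  # e.g. export TF_VAR_db_url="mysql://root:pass@localhost:3306/shop"
}

data "atlas_migration" "shop" {
  dir = "migrations"
  url = var.db_url
}

resource "atlas_migration" "shop" {
  dir     = data.atlas_migration.shop.dir
  version = data.atlas_migration.shop.latest
  url     = var.db_url
}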
The target database must already exist before running migrations. The Atlas Terraform Provider will apply migrations to an existing database but will not create the database itself. For example, in the configuration above, you must ensure the shop database exists in your MySQL instance.
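If the database does not exist yet, you can create it up front. For example, against the local MySQL instance used above (host and credentials are the same illustrative values as in the configuration):

mysql -h localhost -P 3306 -u root -ppass -e "CREATE DATABASE IF NOT EXISTS shop;"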
Applying Migrations
Adding and applying migrations using the Atlas Terraform Provider is a two-step process:
- Generate a new migration file using the Atlas CLI
- Apply the migration using Terraform
Generating New Migrations
You can create new migrations using Atlas CLI commands: migrate diff for automatic generation, or migrate new for manual creation.
Automatic creation
Using the migrate diff command, Atlas can automatically generate a migration by comparing the current state of your migration directory against a desired schema state (given by the --to flag):
atlas migrate diff add_users_name \
--dir "file://migrations" \
--to "mysql://root:pass@localhost:3306/your_database" \
--dev-url "docker://mysql/8/dev"
This generates a migration file with the SQL statements needed to transition your schema to the desired state.
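The desired state given to --to does not have to be a live database; it can also be a schema definition file, such as the schema.sql file shown in the directory layout below. A hedged variant of the same command under that assumption:

atlas migrate diff add_users_name \
  --dir "file://migrations" \
  --to "file://schema.sql" \
  --dev-url "docker://mysql/8/dev"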
Manual creation
You can also create a migration file manually:
atlas migrate new add_users_name
Then edit the new file created in your migrations directory to add your SQL statements:
-- modify "users" table
ALTER TABLE `users` ADD COLUMN `name` varchar(255) NOT NULL;
After manually editing the contents of a newly created migration file, the checksums for the directory must be recalculated. This can be done by running:
atlas migrate hash
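To confirm that the migration files and the atlas.sum file are consistent before applying anything, you can also validate the directory. A hedged example (output and flags may vary by CLI version):

atlas migrate validate --dir "file://migrations"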
Migration Directory Overview
Here's an example of what your migration directory might look like after creating a few migrations:
.
├── main.tf
├── migrations
│   ├── 20220811074144_create_users.sql
│   ├── 20220811074314_add_users_name.sql
│   └── atlas.sum
└── schema.sql
The contents of these files are:

20220811074144_create_users.sql:
-- create "users" table
CREATE TABLE `users` (`id` int NOT NULL) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;

20220811074314_add_users_name.sql:
-- modify "users" table
ALTER TABLE `users` ADD COLUMN `name` varchar(255) NOT NULL;

atlas.sum:
h1:w2ODzVxhTKdBVBdzqntHw7rHV8lKQF98TmNevOEZfIo=
20220811074144_create_users.sql h1:KnMSZM/E4TBGidYCZ+UHxkHEWaRWeyuPIUjSHRybQqA=
20220811074314_add_users_name.sql h1:jUpaANgD0SjI5DjaHuJxtHZ6Wq98act0MmE5oZ+NRU0=
Applying Migrations with Terraform
Now you can initialize the Terraform configuration:
terraform init
Then, apply the migrations:
terraform apply
This will show a plan of what changes will be made to the database:
data.atlas_migration.shop: Reading...
data.atlas_migration.shop: Read complete after 0s [id=file://migrations]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# atlas_migration.shop will be created
+ resource "atlas_migration" "shop" {
+ dir = "migrations"
+ id = (known after apply)
+ status = (known after apply)
+ url = (sensitive value)
+ version = "20220811074314"
}
Plan: 1 to add, 0 to change, 0 to destroy.
╷
│ Warning: data dependent changes detected
│
│ with atlas_migration.shop,
│ on main.tf line 22, in resource "atlas_migration" "shop":
│ 22: resource "atlas_migration" "shop" {
│
│ File: 20220811074314_add_users_name.sql
│
│ - MY101: Adding a non-nullable "varchar" column "name" on table "users" without a default value implicitly sets existing rows with ""
╵
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value:
Notice that in the plan output, Terraform displays the target migration version (20220811074314) that will be applied. This version corresponds to the migration file 20220811074314_add_users_name.sql in your migrations directory.
After confirming with yes, Terraform will apply the migration and show the results:
atlas_migration.shop: Creating...
atlas_migration.shop: Creation complete after 1s [id=file://migrations]
╷
│ Warning: data dependent changes detected
│
│ with atlas_migration.shop,
│ on main.tf line 22, in resource "atlas_migration" "shop":
│ 22: resource "atlas_migration" "shop" {
│
│ File: 20220811074314_add_users_name.sql
│
│ - MY101: Adding a non-nullable "varchar" column "name" on table "users" without a default value implicitly sets existing rows with ""
╵
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
You can verify the changes by connecting to your database and describing the users table:
mysql -h localhost -P 3306 -u root -ppass shop -e "DESCRIBE users;"
This should show the updated table structure:
+-------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+--------------+------+-----+---------+-------+
| id | int | NO | | NULL | |
| name | varchar(255) | NO | | NULL | |
+-------+--------------+------+-----+---------+-------+
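Besides describing the table, you can also ask the Atlas CLI which migrations have been applied to the target database. A hedged example using the same connection details:

atlas migrate status \
  --url "mysql://root:pass@localhost:3306/shop" \
  --dir "file://migrations"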
Down Migrations and Rollbacks
The Atlas Terraform Provider supports down migrations, which let you undo previously applied migrations.
When using Terraform, you can roll back to a specific version by setting the version parameter:
resource "atlas_migration" "shop" {
dir = data.atlas_migration.shop.dir
# Roll back to a specific version
version = "20220811074144"
url = data.atlas_migration.shop.url
# Policy to control down migrations
protected_flows {
migrate_down {
allow = true
auto_approve = true
}
}
}
The protected_flows block controls whether down migrations are allowed. By default, down migrations are disabled as a safety measure. Setting allow = true enables down migrations, and auto_approve = true allows them to proceed without additional confirmation.
Applying the Rollback
After modifying the Terraform configuration to specify the previous version, run:
terraform apply
This will show you what changes will be made:
data.atlas_migration.shop: Reading...
data.atlas_migration.shop: Read complete after 1s [id=file://migrations]
atlas_migration.shop: Refreshing state... [id=file://migrations]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# atlas_migration.shop will be updated in-place
~ resource "atlas_migration" "shop" {
id = "file://migrations"
~ status = {
- current = "20220811074314"
- latest = "20220811074314"
- status = "OK"
} -> (known after apply)
~ version = "20220811074314" -> "20220811074144"
# (2 unchanged attributes hidden)
+ protected_flows {
+ migrate_down {
+ allow = true
+ auto_approve = true
}
}
}
Plan: 0 to add, 1 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value:
Notice that the plan shows a rollback from version 20220811074314 to version 20220811074144, which means Atlas will revert the changes made in the newer migration.
After confirming with yes, Terraform will apply the rollback:
atlas_migration.shop: Modifying... [id=file://migrations]
atlas_migration.shop: Modifications complete after 5s [id=file://migrations]
Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
Verifying the Rollback
You can verify that the rollback was successful by inspecting the database:
mysql -h localhost -P 3306 -u root -ppass shop -e "DESCRIBE users;"
This should show the table structure reverted to its previous state:
+-------+------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+------+------+-----+---------+-------+
| id | int | NO | | NULL | |
+-------+------+------+-----+---------+-------+
The name column that was added in the later migration has been removed, as we've rolled back to the state defined in 20220811074144_create_users.sql.
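To roll forward again, point version back at the latest migration and re-apply; this is simply the original resource configuration:

resource "atlas_migration" "shop" {
  dir     = data.atlas_migration.shop.dir
  version = data.atlas_migration.shop.latest
  url     = data.atlas_migration.shop.url
}

Running terraform apply again will then re-apply the rolled-back migration.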
Conclusion
The Atlas Terraform Provider's versioned workflow offers a robust approach to managing database schema migrations within your infrastructure as code process. By defining and maintaining a series of sequential migration files, you gain:
- Complete control over each schema change
- A clear, historical record of your database evolution
- The ability to safely roll forward or backward between schema versions
- Seamless integration with your existing Terraform workflows
For more information, visit the provider registry page.
Need More Help?
Join the Ariga Discord Server for support, early access to features, and to provide feedback.
Sign up for our newsletter to stay up to date about Atlas and the cloud platform, Atlas Cloud.