Applying Migrations
With the atlas migrate apply command, users can apply pending migrations to one or more databases. The typical flow for introducing schema changes to databases is: Develop ⇨ Check (CI) ⇨ Push (Delivery) ⇨ Deploy.
- Develop - Generate a migration file with the desired database changes using the atlas migrate diff command.
- Check (CI) - Use atlas migrate lint to validate migrations, ensuring they don't conflict with other team members' changes and align with best practices. Add Atlas to your CI pipeline in GitHub Actions or GitLab to review migration files before they are merged into the main branch.
- Push (Delivery) - Use atlas migrate push, or set up a CI pipeline to push the latest migration state to the Atlas Schema Registry. Alternatively, you can package the migrations directory into a custom Docker image and push it to an artifact registry.
- Deploy - Use atlas migrate apply to apply the pending migrations to your database(s).
Flags and Arguments
By default, atlas migrate apply executes all pending migrations. However, you can pass an optional argument to limit the number of migrations applied. For instance, atlas migrate apply 2 will apply up to 2 pending migrations.
The following flags control the execution:
- --url (required) - the URL of the database to apply migrations on.
- --dir - the URL of the migration directory. It defaults to file://migrations.
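For example, a minimal invocation might look like this (a local MySQL database and placeholder credentials are assumed):
atlas migrate apply \
--url "mysql://root:pass@localhost:3306/example" \
--dir "file://migrations"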
Users who have connected or pushed their migration directory to the Atlas Schema Registry can read the migrations' state directly from there, without needing to have them locally. For example, atlas migrate apply --dir "atlas://app" will apply the pending migrations of the app project based on the most recently pushed state. To see it in action, run the following:
Log in or sign up:
atlas login
Push a local migration directory and name it app:
- PostgreSQL
- MySQL
- Maria
- SQLite
- SQL Server
- ClickHouse
- Redshift
atlas migrate push app \
--dev-url "docker://postgres/15/dev?search_path=public"
atlas migrate push app \
--dev-url "docker://mysql/8/dev"
atlas migrate push app \
--dev-url "docker://mariadb/latest/dev"
atlas migrate push app \
--dev-url "sqlite://dev?mode=memory"
atlas migrate push app \
--dev-url "docker://sqlserver/2022-latest"
atlas migrate push app \
--dev-url "docker://clickhouse/23.11"
atlas migrate push app \
--dev-url "redshift://user:pass@redshift-cluster:5439/dev"
Deploy the remote migration directory named app to a local database:
- PostgreSQL
- MySQL
- Maria
- SQLite
- SQL Server
- ClickHouse
- Redshift
atlas migrate apply \
--dir "atlas://app" \
--url "postgres://postgres:pass@:5432/example?search_path=public&sslmode=disable"
atlas migrate apply \
--dir "atlas://app" \
--url "mysql://root:pass@:3306/example"
atlas migrate apply \
--dir "atlas://app" \
--url "maria://root:pass@:3306/example"
atlas migrate apply \
--dir "atlas://app" \
--url "sqlite://example.db"
atlas migrate apply \
--dir "atlas://app" \
--url "sqlserver://sa:P@ssw0rd0995@:1433?database=master"
atlas migrate apply \
--dir "atlas://app" \
--url "clickhouse://root:pass@:9000/default"
atlas migrate apply \
--dir "atlas://app" \
--url "redshift://user:pass@redshift-cluster:5439/example?search_path=public&sslmode=disable"
Deploy a specific tag of the remote migration directory named app to a local database:
- PostgreSQL
- MySQL
- Maria
- SQLite
- SQL Server
- ClickHouse
- Redshift
atlas migrate apply \
--dir "atlas://app?tag=39e7e4e35fce7409bd26d25d8140061695d4ffd5" \
--url "postgres://postgres:pass@:5432/example?search_path=public&sslmode=disable"
atlas migrate apply \
--dir "atlas://app?tag=39e7e4e35fce7409bd26d25d8140061695d4ffd5" \
--url "mysql://root:pass@:3306/example"
atlas migrate apply \
--dir "atlas://app?tag=39e7e4e35fce7409bd26d25d8140061695d4ffd5" \
--url "maria://root:pass@:3306/example"
atlas migrate apply \
--dir "atlas://app?tag=39e7e4e35fce7409bd26d25d8140061695d4ffd5" \
--url "sqlite://example.db"
atlas migrate apply \
--dir "atlas://app?tag=39e7e4e35fce7409bd26d25d8140061695d4ffd5" \
--url "sqlserver://sa:P@ssw0rd0995@:1433?database=master"
atlas migrate apply \
--dir "atlas://app?tag=39e7e4e35fce7409bd26d25d8140061695d4ffd5" \
--url "clickhouse://user:pass@:9000/default"
atlas migrate apply \
--dir "atlas://app?tag=39e7e4e35fce7409bd26d25d8140061695d4ffd5" \
--url "redshift://user:pass@redshift-cluster:5439/example?search_path=public&sslmode=disable"
Schema Revision Information
Atlas saves information about the applied migrations in a table called atlas_schema_revisions in the connected database schema (e.g., mysql://user@host/my_schema or postgres://user@host/db?search_path=my_schema). If the database connection is not bound to a specific schema (e.g., mysql://user@host/ or postgres://user@host/db), the table is stored in its own schema called atlas_schema_revisions. This behavior can be changed by setting the schema manually: --revisions-schema my_schema stores the data in my_schema.atlas_schema_revisions.
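For example, a sketch with placeholder credentials that keeps the revision data in a dedicated schema:
atlas migrate apply \
--url "mysql://root:pass@localhost:3306/example" \
--revisions-schema "my_schema"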
Transaction Configuration
By default, Atlas creates one transaction per migration file and will roll back that transaction if a statement in the wrapped migration fails to execute. Atlas supports three different transaction modes:
- --tx-mode file (default) wraps each pending migration into its own transaction.
- --tx-mode all wraps all pending migration files into one transaction.
- --tx-mode none does not create any transaction. If a statement fails, execution stops. However, Atlas detects which statement failed, and on the next migration attempt it will continue from that statement. This means altering the migration file from the failed statement onwards is safe and recommended.
Please be aware that databases without transactional DDL, such as MySQL (due to implicit commits), cannot be rolled back completely, and you might end up with a mismatched schema and revision table state. Atlas will handle those cases in future releases. A good source of information can be found in the PostgreSQL wiki.
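For example, a sketch that wraps all pending files in a single transaction on a database with transactional DDL (placeholder credentials assumed):
atlas migrate apply \
--url "postgres://postgres:pass@localhost:5432/example?sslmode=disable" \
--tx-mode "all"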
File-level transaction mode
The atlas:txmode directive can be used to override the transaction mode for a specific migration file:
-- atlas:txmode none
CREATE INDEX CONCURRENTLY name_idx ON users (name);
Execution Order
The --exec-order flag controls how Atlas computes and executes pending migration files against the database. Atlas supports three execution order modes (an example follows the list):
- linear (default) - Atlas expects a linear history and fails if it encounters files that were added out of order. This option ensures files are executed in a consistent order, guaranteeing deterministic behavior. It can be enforced in CI using the atlas migrate lint command. Learn more about non-linear changes in the documentation.
- linear-skip - A softer version of linear: if Atlas encounters a new file that was not added in sequential order (its version is lower than the database version), the file is skipped.
- non-linear - Directs Atlas to execute migration files that were added out of order. Although this option can be useful in local development, it is strongly discouraged in real production environments. Executing files out of order cannot guarantee deterministic behavior and may lead to failures (e.g., conflicting migrations). Rolling back to a specific version of the migration directory might be challenging, as the state of the database could differ from the state of the directory.
Handling Out-of-Order Errors
You've encountered this issue because your database is at version Z, but your migration directory contains one or more pending files with version Y, where Y < Z. This indicates they were added out of order, as their version is lower than the current database version. Below are multiple options to resolve it, depending on your environment:
- Local environment (development): Developers might encounter this issue if they have a migration file that was not yet pushed to the master branch (e.g., version Z), but upon pulling remote changes, new files with versions X and Y were added to the migration directory. In this scenario, there are two cases:
  - If the new (remote) changes do not conflict with the local changes, users can use the --exec-order non-linear flag to execute the files that were added out of order (X and Y). In case of failure or unexpected behavior, the atlas schema clean command can be used to reset everything to a clean state.
  - If the new (remote) changes conflict with the local (not yet pushed) changes, the local database should be cleared using atlas schema clean, and the local changes should be adjusted to align with the pulled changes.
- Real environment: If you encountered this issue during deployment, it means that Atlas was not set up in your CI, which is why the issue was not detected beforehand. Let's go through the steps to fix the error and set up Atlas in your CI to prevent this issue from happening again:
  - Configure Atlas to run for every PR or change to the master branch that affects the migration directory. Learn more on how to integrate it into your GitHub Actions or GitLab CI pipelines.
  - Pull the latest changes from the master branch and run atlas migrate rebase <versions> to rebase the problematic files reported by Atlas (the "out of order" ones), as sketched after this list. Then, create a PR.
  - At this stage, if Atlas CI is set up, it will verify that the migration directory is replay-able and that the rebased migration file(s) can be executed without any issues (e.g., no conflicts with previous migrations).
  - Running atlas migrate apply (deployment) again should pass without this error.
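For illustration, a sketch of the rebase step, assuming the out-of-order file has the hypothetical version 20240305120000:
git pull origin master
atlas migrate rebase 20240305120000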
Migration Hooks (Atlas Pro)
Atlas supports executing custom SQL statements at two stages of the migration process: after the transaction has been started, and before it is committed or rolled back. A typical use case for migration hooks is to control the statement_timeout or lock_timeout in PostgreSQL to prevent migrations from blocking other ongoing operations.
hook "sql" "timeout" {
transaction {
after_begin = [
"SET statement_timeout TO '50ms'",
]
before_commit = [
// ...
]
}
}
env {
name = atlas.env
migration {
dir = "file://migrations"
}
}
Existing Databases
Baseline migration
If you have an existing database project and want to switch over to Atlas Versioned Migrations, you need to provide Atlas with a starting point. The first step is to create a migration file reflecting the current schema state. This can be easily done:
- MySQL
- MariaDB
- PostgreSQL
- SQLite
- SQL Server
- ClickHouse
- Redshift
atlas migrate diff my_baseline \
--dir "file://migrations" \
--dev-url "docker://mysql/8/my_schema" \
--to "mysql://root:pass@localhost:3306/my_schema"
atlas migrate diff my_baseline \
--dir "file://migrations" \
--dev-url "docker://mariadb/latest/my_schema" \
--to "maria://root:pass@localhost:3306/my_schema"
atlas migrate diff my_baseline \
--dir "file://migrations" \
--dev-url "docker://postgres/15" \
--to "postgres://postgres:pass@localhost:5432/database?search_path=public&sslmode=disable"
atlas migrate diff my_baseline \
--dir "file://migrations" \
--dev-url "sqlite://file?mode=memory" \
--to "sqlite://file.db"
atlas migrate diff my_baseline \
--dir "file://migrations" \
--dev-url "docker://sqlserver/2022-latest" \
--to "sqlserver://sa:pass@localhost:1433?database=master"
atlas migrate diff my_baseline \
--dir "file://migrations" \
--dev-url "docker://clickhouse/23.11/default" \
--to "clickhouse://user:pass@remote:9000/default"
atlas migrate diff my_baseline \
--dir "file://migrations" \
--dev-url "redshift://user:pass@redshift-cluster:5439/dev?search_path=public&sslmode=disable" \
--to "postgres://postgres:pass@localhost:5432/example?search_path=public&sslmode=disable"
Atlas will generate a "baseline" file from the database schema. For example:
CREATE TABLE `users` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`age` bigint(20) NOT NULL,
`name` varchar(255) COLLATE utf8mb4_bin NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `age` (`age`)
)
Regardless of whether you added more migration files after the baseline, you must specify the baseline version on your first migration execution. Atlas will mark this version as already applied and proceed with the next version after it. For example:
atlas migrate apply \
--url "mysql://root:pass@localhost:3306/example" \
--dir "file://migrations" \
--baseline "20220811074144"
Allow Dirty
If your database contains resources but no revision information yet, Atlas will refuse to execute migration files. One way to override that behavior is by using the --baseline flag. However, in cases where existing tables are not managed by Atlas at all and should not be part of a baseline file, you can run the first migration execution with the --allow-dirty flag to operate on a non-clean database.
atlas migrate apply \
--url "mysql://root:pass@localhost:3306/example" \
--dir "file://migrations" \
--allow-dirty
Dry Run
Atlas allows users to review and verify the safety of migration plans before applying them to the database. Using the --dry-run flag, Atlas prints the migration files and the SQL statements that are pending to be applied, without executing them. However, if the migration plan contains pre-migration checks, Atlas executes them against the database and reports the results.
atlas migrate apply \
--url "mysql://root:pass@localhost:3306/example" \
--dir "file://migrations" \
--dry-run
Down migrations
Migrations that "roll back" or reverse changes made to the database schema are called "down migrations". These are often used during local development to undo the changes made by corresponding "up migrations". Atlas follows a linear migration history model, in which all migration files are "roll-forward". However, it is still possible to clean or revert schema changes made by specific migration files using the atlas migrate down command. For full details, see the down migration documentation.
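As a sketch, reverting the most recently applied file on a local database might look like the following (placeholder credentials; the revert is planned against a dev database):
atlas migrate down \
--url "mysql://root:pass@localhost:3306/example" \
--dir "file://migrations" \
--dev-url "docker://mysql/8/dev"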
Migration status
In addition to the --dry-run flag, Atlas also provides the atlas migrate status command, which provides in-depth information about the migration status of the connected database.
Multi-Tenant environments
The Atlas configuration language provides built-in support for executing versioned workflows in multi-tenant environments. Using the for_each meta-argument, users can define a single env block that is expanded to N instances, one for each tenant:
env "prod" {
for_each = toset(var.tenants)
url = urlsetpath(var.url, each.value)
migration {
dir = "file://migrations"
}
format {
migrate {
apply = format(
"{{ json . | json_merge %q }}",
jsonencode({
Tenant : each.value
})
)
}
}
}
Read more about how to define versioned workflows using project files in multi-tenant environments.
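Assuming var.tenants and var.url are declared in the same project file (with values supplied there or at runtime), the environment is then applied once per tenant with a single command, for example:
atlas migrate apply --env "prod"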
Examples
First time apply with baseline on production environment:
atlas migrate apply \
--env "production" \
--baseline "20220811074144"
Execute 1 pending migration file, but don't actually run it; only print its SQL statements on screen:
atlas migrate apply 1 \
--env "production" \
--baseline "20220811074144" \
--dry-run
Specify revision table schema and custom migration directory path:
atlas migrate apply \
--url "mysql://root:pass@remote:3306/my_database" \
--revisions-schema "atlas_migration_history" \
--dir "file://custom/path/to/dir"
Ignore unclean database and run the first 3 migrations:
atlas migrate apply 3 \
--url "mysql://root:pass@remote:3306/my_database" \
--dir "file://custom/path/to/dir" \
--allow-dirty
Run all pending migrations, but do not use a transaction:
atlas migrate apply \
--url "mysql://root:pass@remote:3306/my_database" \
--tx-mode "none"
Show information about the migration status of a deployment:
atlas migrate status \
--url "mysql://root:pass@remote:3306/my_database" \
--dir "file://custom/path/to/dir" \
--revisions-schema "atlas_migration_history"