Declarative Schema Migrations
With Atlas's declarative schema migrations (sometimes called state-based migrations), users don't need to manually craft SQL migration scripts. Instead, Atlas automatically plans and applies schema changes, safely transitioning the database from its current state to the desired state.
The desired schema state can be defined using an HCL or SQL schema definition, a database URL, an external schema such as an ORM model, or a combination of these.
By default, the `atlas schema apply` command manages only schemas, tables, and their associated indexes and constraints such as foreign keys and checks.
Views, materialized views, functions, procedures, triggers, sequences, domains, extensions, and additional database features are available to Atlas Pro users. To include these resources in schema migrations, use the following command:
atlas login
Overview
- Atlas loads the desired state into the provided dev database (`--dev-url`)
- Atlas compares this against the target database's current state by inspecting its schema
- Atlas plans a migration to get from the current state to the desired state
- Atlas prompts the user to approve and apply the migration plan to the target database
Flags
When running `schema apply`, users may supply multiple parameters:
- `--url` or `-u` (required) - URL of the database to be inspected.
- `--to` (required) - URL(s) of the desired state. Can be defined using an HCL or SQL schema, a database URL, or an external ORM.
- `--dev-url` (required) - URL of the dev database.
- `--schema` (may be supplied multiple times) - Specify which schema to inspect within the target database.
- `--exclude` (may be supplied multiple times) - Filter out resources matching the given glob pattern.
- `--format` - Go template to use to format the output.
- `--edit` - Open the planned migration in the default editor, allowing the user to modify it.
- `--config` - Path to the Atlas configuration file. Defaults to `atlas.hcl` in the current directory if the `--env` flag is set.
- `--env` - Environment to use from the Atlas configuration file.
- `--dry-run` - Write and print the SQL statements of the migration plan without executing them on the target database.
- `--auto-approve` - Apply the migration plan to the target database without first prompting the user for approval.
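Rather than repeating these flags on every invocation, they can be collected into an `env` block in the project configuration. The sketch below is a minimal example; the URLs and schema file name are placeholders for your own setup:

```hcl
// atlas.hcl -- a minimal sketch; URLs and file names are placeholders.
env "local" {
  // Desired state (equivalent to --to).
  src = "file://schema.hcl"
  // Target database (equivalent to --url).
  url = "mysql://root:pass@localhost:3306/example"
  // Dev database used for planning (equivalent to --dev-url).
  dev = "docker://mysql/8/example"
}
```

With this file in place, the invocation reduces to `atlas schema apply --env local`.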
See detailed usage examples in the Examples section.
Approval Policy
The `schema apply` command requires user review and approval before executing the migration against the target database. Migrations can be approved manually, either locally or in CI, or automatically via the linting review policy. Approval can also be skipped entirely, which is not recommended in production environments.
Let's cover all the options:
- Manual Review (default): The `atlas schema apply` command will print the SQL statements it is going to run and prompt the user for approval. Users can review the migration plan and either approve or reject it. Atlas Pro users can set the `--dev-url` flag to run an analysis and simulation of the proposed changes on the dev database and get a detailed linting report when reviewing the migration.
- Using Review Policy: Atlas ships with an analysis engine that can detect the impact of proposed changes to the target database. For example, Atlas can detect irreversible destructive changes that can result in data loss or data-dependent changes that may fail due to data integrity constraints. Users can configure Atlas to automatically approve migrations that pass the analysis engine checks (for example, no destructive changes were detected) and require manual review for migrations that fail the checks. Read more on how to configure the Review Policy for your project below.
- Skip Review: The `--auto-approve` flag can be used to skip the review process and automatically apply the migration to the target database. Although this option is convenient for experimentation and development, it is not recommended for production environments, as it may lead to unexpected changes.
Review Policy
Atlas Pro users can define the cases in which their schema changes require manual review and approval. How does this work?
- During `atlas schema apply`, Atlas checks if there is an approved pre-planned migration for the given schema transition (State1 -> State2). If one exists, Atlas will apply it without requiring manual review.
- If no migration has been pre-planned, Atlas will lint the schema changes by running an analysis and simulation on the dev database to review their potential impact on the target database.
- Atlas then prints the linting report and, based on the review policy defined in the `atlas.hcl` file, decides whether to auto-approve the migration or require manual review.
The review policy can be set to one of the following values:
- `ERROR` - Atlas will require manual review and approval only if the linting report contains errors. See the Destructive Changes analyzer as an example.
- `WARNING` - Atlas will require manual review and approval if the linting report contains warnings (diagnostics) or errors. See the list of checks that can result in warnings.
- `ALWAYS` (default) - Atlas will always require manual review and approval, regardless of the linting report.
- Global Configuration
- Environment Configuration
lint {
review = ERROR // ERROR | WARNING | ALWAYS
destructive {
error = false
}
}
env "prod" {
lint {
review = ERROR // ERROR | WARNING | ALWAYS
destructive {
error = false
}
}
}
The typical workflow for applying schema changes without manual review is to run `atlas schema plan` to pre-plan the migration and use the review policy as a fallback for schema transitions that were not pre-planned but are still safe to apply. The `atlas schema plan` command can be integrated into the CI/CD pipeline to ensure that all schema changes are reviewed and approved before being applied to the database.
Diff Policy
Atlas allows configuring the schema diffing policy in project configuration to fine-tune or modify suggested changes before applying them to the database:
- Skip Destructive
- Concurrent Indexes
- Materialized Views
- CREATE / DROP TABLE
To instruct Atlas to skip destructive statements, specifically `DROP SCHEMA` or `DROP TABLE` in this example, via a CLI variable, define a variable in the project configuration and set its value dynamically when running the `schema apply` command:
variable "destructive" {
type = bool
default = false
}
env "local" {
diff {
skip {
drop_schema = !var.destructive
drop_table = !var.destructive
}
}
}
The usage is as follows:
atlas schema apply --env "local" --var "destructive=true"
To instruct Atlas to create and drop indexes concurrently, set the `concurrent_index` option in the `diff` block of the environment configuration. Note that such migrations are tagged with `atlas:txmode none` to ensure they do not run within a transaction.
env "local" {
diff {
// By default, indexes are not added or dropped concurrently.
concurrent_index {
add = true
drop = true
}
}
}
To instruct Atlas to create materialized views without populating them (using the `WITH NO DATA` clause), set the `with_no_data` option in the `materialized` block of the `diff` configuration:
diff {
materialized {
with_no_data = true
}
}
To control this behavior via a CLI variable:
variable "with_no_data" {
type = bool
default = false
}
env "local" {
diff {
materialized {
with_no_data = var.with_no_data
}
}
}
Run the command with the variable:
atlas schema apply --env local --var "with_no_data=true"
Atlas Pro users can control how Atlas generates `CREATE` and `DROP` table statements by configuring the `add_table` and `drop_table` blocks in the `diff` configuration, respectively:
diff {
add_table {
if_not_exists = true // default: false
}
drop_table {
cascade = true // default: false
if_exists = true // default: false
}
}
Transaction Configuration
Similar to the `migrate apply` command, the `schema apply` command allows configuring the transaction mode for declarative migrations using the `--tx-mode` flag. The following options are available:

- `--tx-mode file` (default) - Wraps the planned migration in its own transaction. If a statement fails, the transaction is rolled back and Atlas stops execution.
- `--tx-mode none` - Does not create any transaction.
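As an illustrative sketch, a plan containing statements that cannot run inside a transaction (such as concurrently created indexes) can be applied with the transaction wrapper disabled; the URLs and file name below are placeholders:

```shell
atlas schema apply \
  --url "postgres://postgres:pass@localhost:5432/database?sslmode=disable" \
  --to "file://schema.hcl" \
  --dev-url "docker://postgres/15" \
  --tx-mode none
```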
Examples
HCL Schema as Desired State
The following example demonstrates how to use Atlas DDL (HCL) as the desired state:
- MySQL
- MariaDB
- PostgreSQL
- SQLite
- SQL Server
- ClickHouse
- Redshift
atlas schema apply \
--url "mysql://root:pass@localhost:3306/example" \
--to "file://schema.hcl" \
--dev-url "docker://mysql/8/example"
atlas schema apply \
--url "maria://root:pass@:3306/example" \
--to "file://schema.hcl" \
--dev-url "docker://maria/latest/example"
atlas schema apply \
--url "postgres://postgres:pass@localhost:5432/database?search_path=public&sslmode=disable" \
--to "file://schema.hcl" \
--dev-url "docker://postgres/15"
atlas schema apply \
--url "sqlite://file.db" \
--to "file://schema.hcl" \
--dev-url "sqlite://file?mode=memory"
atlas schema apply \
--url "sqlserver://sa:P@ssw0rd0995@:1433?database=master" \
--to "file://schema.hcl" \
--dev-url "docker://sqlserver/2022-latest/dev"
atlas schema apply \
--url "clickhouse://localhost:9000/default" \
--to "file://schema.hcl" \
--dev-url "docker://clickhouse/23.11/dev"
atlas schema apply \
--url "redshift://user:pass@redshift-cluster:5439/example?search_path=public&sslmode=disable" \
--to "file://schema.hcl" \
--dev-url "redshift://user:pass@redshift-cluster:5439/dev?search_path=public&sslmode=disable"
SQL Schema as Desired State
The following example demonstrates how to use an SQL schema file as the desired state:
- MySQL
- MariaDB
- PostgreSQL
- SQLite
- SQL Server
- ClickHouse
- Redshift
atlas schema apply \
--url "mysql://root:pass@localhost:3306/example" \
--to "file://schema.sql" \
--dev-url "docker://mysql/8/example"
atlas schema apply \
--url "maria://root:pass@:3306/example" \
--to "file://schema.sql" \
--dev-url "docker://maria/latest/example"
atlas schema apply \
--url "postgres://postgres:pass@localhost:5432/database?search_path=public&sslmode=disable" \
--to "file://schema.sql" \
--dev-url "docker://postgres/15"
atlas schema apply \
--url "sqlite://file.db" \
--to "file://schema.sql" \
--dev-url "sqlite://file?mode=memory"
atlas schema apply \
--url "sqlserver://sa:P@ssw0rd0995@:1433?database=master" \
--to "file://schema.sql" \
--dev-url "docker://sqlserver/2022-latest/dev"
atlas schema apply \
--url "clickhouse://localhost:9000/default" \
--to "file://schema.sql" \
--dev-url "docker://clickhouse/23.11/dev"
atlas schema apply \
--url "redshift://user:pass@redshift-cluster:5439/example?search_path=public&sslmode=disable" \
--to "file://schema.sql" \
--dev-url "redshift://user:pass@redshift-cluster:5439/dev?search_path=public&sslmode=disable"
ORM Schema as Desired State
The following example demonstrates how to use Sequelize models as the desired state:
- JavaScript
- TypeScript
data "external_schema" "sequelize" {
program = [
"npx",
"@ariga/atlas-provider-sequelize",
"load",
"--path", "./path/to/models",
"--dialect", "postgres", // mariadb | mysql | sqlite | mssql
]
}
env "sequelize" {
src = data.external_schema.sequelize.url
dev = "docker://postgres/15"
migration {
dir = "file://migrations"
}
format {
migrate {
diff = "{{ sql . \" \" }}"
}
}
}
data "external_schema" "sequelize" {
program = [
"npx",
"@ariga/ts-atlas-provider-sequelize",
"load",
"--path", "./path/to/models",
"--dialect", "postgres", // mariadb | mysql | sqlite | mssql
]
}
env "sequelize" {
src = data.external_schema.sequelize.url
dev = "docker://postgres/15"
migration {
dir = "file://migrations"
}
format {
migrate {
diff = "{{ sql . \" \" }}"
}
}
}
atlas schema apply \
--env sequelize \
--url "postgres://postgres:pass@localhost:5432/database?search_path=public&sslmode=disable"
To see this in action, check out our Declarative Migrations for Sequelize.js video.
For more ORM examples, go to our ORM guides.
Migration Directory as Desired State
The following example demonstrates how to use the migration directory as the desired state:
- MySQL
- MariaDB
- PostgreSQL
- SQLite
- SQL Server
- ClickHouse
- Redshift
atlas schema apply \
--url "mysql://root:pass@localhost:3306/example" \
--to "file://migrations" \
--dev-url "docker://mysql/8/example"
atlas schema apply \
--url "maria://root:pass@:3306/example" \
--to "file://migrations" \
--dev-url "docker://maria/latest/example"
atlas schema apply \
--url "postgres://postgres:pass@localhost:5432/database?search_path=public&sslmode=disable" \
--to "file://migrations" \
--dev-url "docker://postgres/15"
atlas schema apply \
--url "sqlite://file.db" \
--to "file://migrations" \
--dev-url "sqlite://file?mode=memory"
atlas schema apply \
--url "sqlserver://sa:P@ssw0rd0995@:1433?database=master" \
--to "file://migrations" \
--dev-url "docker://sqlserver/2022-latest/dev"
atlas schema apply \
--url "clickhouse://localhost:9000/default" \
--to "file://migrations" \
--dev-url "docker://clickhouse/23.11/dev"
atlas schema apply \
--url "redshift://user:pass@redshift-cluster:5439/example?search_path=public&sslmode=disable" \
--to "file://migrations" \
--dev-url "redshift://user:pass@redshift-cluster:5439/dev?search_path=public&sslmode=disable"
The URL for the migration directory can also contain two optional query parameters:

- `format` - Migration directory format: `atlas` (default), `golang-migrate`, `goose`, `dbmate`, `flyway`, or `liquibase`.
- `version` - Version number of the last migration file to read. By default, all migrations are read.

For example: `file://migrations?format=golang-migrate&version=20250909104354`
Multi-Tenant Environments
The Atlas configuration language provides built-in support for executing declarative workflows in multi-tenant environments. Using the `for_each` meta-argument, users can define a single `env` block that is expanded to N instances – one for each tenant:
- atlas.hcl
- schema.hcl
env "prod" {
for_each = toset(var.tenants)
url = urlsetpath(var.url, each.value)
src = "schema.hcl"
format {
schema {
apply = format(
"{{ json . | json_merge %q }}",
jsonencode({
Tenant : each.value
})
)
}
}
// Inject custom variables to the schema.hcl defined below.
tenant = each.value
}
variable "tenant" {
type = string
description = "The schema we operate on"
}
schema "tenant" {
name = var.tenant
}
table "users" {
schema = schema.tenant
// ...
}
Read more about defining declarative workflows using project files in multi-tenant environments.