Declarative Schema Migrations
With Atlas's declarative schema migrations (sometimes called state-based migrations), users don't need to manually craft SQL migration scripts. Instead, Atlas automatically plans and applies schema changes, safely transitioning the database from its current state to the desired state.
The desired schema state can be defined using an HCL or SQL schema definition, a database URL, an external schema (such as an ORM), or a combination of these.
By default, the atlas schema apply command manages only schemas, tables, and their associated indexes and
constraints such as foreign keys and checks.
Views, materialized views, functions, procedures, triggers, sequences, domains, extensions, and additional database features are available to Atlas Pro users. To include these resources in schema migrations, log in to Atlas using the following command:
atlas login
Overview
- Atlas loads the desired state into the provided dev database (`--dev-url`)
- Atlas compares this against the target database's current state by inspecting its schema
- Atlas plans a migration to get from the current state to the desired state
- Atlas prompts the user to approve and apply the migration plan to the target database
Flags
When running schema apply, users may supply multiple parameters:
- `--url` or `-u` (required) - URL of the database to be inspected.
- `--to` (required) - URL(s) of the desired state. Can be defined using an HCL or SQL schema, a database URL, or an external ORM.
- `--dev-url` (required) - URL of the dev database.
- `--schema` (may be supplied multiple times) - Specify which schema to inspect within the target database.
- `--exclude` (may be supplied multiple times) - Filter out resources matching the given glob pattern.
- `--format` - Go template to use to format the output.
- `--edit` - Open the planned migration in the default editor, allowing the user to modify it.
- `--config` - Path to the Atlas configuration file. Defaults to `atlas.hcl` in the current directory, if the `--env` flag is set.
- `--env` - Environment to use from the Atlas configuration file.
- `--dry-run` - Write and print the SQL statements of the migration plan without executing them on the target database.
- `--auto-approve` - Apply the migration plan to the target database without first prompting the user for approval.
See detailed usage examples in the Examples section.
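Rather than passing every flag on the command line, these parameters can be grouped into an `env` block in `atlas.hcl` and selected with `--env`. A minimal sketch, where the URLs are placeholders rather than values from the examples on this page:

```hcl
env "local" {
  // Target database to apply changes to (maps to --url).
  url = "postgres://postgres:pass@localhost:5432/database?sslmode=disable"
  // Desired state (maps to --to).
  src = "file://schema.hcl"
  // Dev database used for planning and simulation (maps to --dev-url).
  dev = "docker://postgres/15"
}
```

With this in place, running `atlas schema apply --env local` is equivalent to supplying the three URLs directly.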
Approval Policy
The schema apply command requires user review and approval before executing the migration against the target database.
Approving the migration can occur manually either locally or in CI, or automatically with the linting review policy. It can also be completely skipped, which is not recommended in production environments.
Let's cover all the options:
- Manual Review (default): The `atlas schema apply` command will print the SQL statements it is going to run and prompt the user for approval. Users can review the migration plan and either approve or reject it. Atlas Pro users can set the `--dev-url` flag to run an analysis and simulation of the proposed changes on the dev database and get a detailed linting report when reviewing the migration.
- Using Review Policy: Atlas ships with an analysis engine that can detect the impact of proposed changes to the target database. For example, Atlas can detect irreversible destructive changes that can result in data loss, or data-dependent changes that may fail due to data integrity constraints. Users can configure Atlas to automatically approve migrations that pass the analysis engine's checks (for example, no destructive changes were detected) and require manual review for migrations that fail them. Read more on how to configure the Review Policy for your project below.
- Skip Review: The `--auto-approve` flag can be used to skip the review process and automatically apply the migration to the target database. Although this option is convenient for experimentation and development, it is not recommended for production environments, as it may lead to unexpected changes.
Review Policy
Atlas Pro users can define the cases in which their schema changes require manual review and approval. How does this work?
- During `atlas schema apply`, Atlas checks if there is an approved pre-planned migration for the given schema transition (State1 -> State2). If one exists, Atlas will apply it without requiring manual review.
- If no migration has been pre-planned, Atlas will lint the schema changes by running an analysis and simulation on the dev database to review their potential impact on the target database.
- Atlas then prints the linting report and, based on the review policy defined in the `atlas.hcl` file, decides whether to auto-approve the migration or require manual review.
The review policy can be set to one of the following values:
- `ERROR` - Atlas will require manual review and approval only if the linting report contains errors. See the Destructive Changes analyzer as an example.
- `WARNING` - Atlas will require manual review and approval if the linting report contains warnings (diagnostics) or errors. See the list of checks that can result in warnings.
- `ALWAYS` (default) - Atlas will always require manual review and approval, regardless of the linting report.
- Global Configuration
- Environment Configuration
lint {
review = ERROR // ERROR | WARNING | ALWAYS
destructive {
error = false
}
}
env "prod" {
lint {
review = ERROR // ERROR | WARNING | ALWAYS
destructive {
error = false
}
}
}
The typical workflow for applying schema changes without manual review is to run atlas schema plan to pre-plan the migration, and to use the review policy as a fallback for schema transitions that were not pre-planned but are still safe to apply.
The atlas schema plan command can be integrated into the CI/CD pipeline to ensure that
all schema changes are reviewed and approved before being applied to the database.
Diff Policy
Atlas allows configuring the schema diffing policy in project configuration to fine-tune or modify suggested changes before applying them to the database:
- Skip Destructive
- Concurrent Indexes
- Materialized Views
- CREATE / DROP TABLE
To instruct Atlas to skip destructive statements (in this example, `DROP SCHEMA` and `DROP TABLE`) via a CLI variable, define a variable in the project configuration and set its value dynamically when running the `atlas schema apply` command:
variable "destructive" {
type = bool
default = false
}
env "local" {
diff {
skip {
drop_schema = !var.destructive
drop_table = !var.destructive
}
}
}
The usage is as follows:
atlas schema apply --env "local" --var "destructive=true"
To instruct Atlas to create and drop indexes concurrently, set the `concurrent_index` option in the `diff` block of the environment configuration. Note that such migrations are tagged with the `atlas:txmode none` directive to ensure they do not run within a transaction.
env "local" {
diff {
// By default, indexes are not added or dropped concurrently.
concurrent_index {
add = true
drop = true
}
}
}
To instruct Atlas to create materialized views without populating them (using the WITH NO DATA clause), set the
with_no_data option in the materialized block of the diff configuration:
diff {
materialized {
with_no_data = true
}
}
To control this behavior via a CLI variable:
variable "with_no_data" {
type = bool
default = false
}
env "local" {
diff {
materialized {
with_no_data = var.with_no_data
}
}
}
Run the command with the variable:
atlas schema apply --env local --var "with_no_data=true"
Atlas Pro users can control how Atlas generates CREATE and DROP table statements by configuring the add_table
and drop_table blocks in the diff configuration, respectively:
diff {
add_table {
if_not_exists = true // default: false
}
drop_table {
cascade = true // default: false
if_exists = true // default: false
}
}
Transaction Configuration
Similar to the migrate apply command, the schema apply command allows
configuring the transaction mode for declarative migrations using the --tx-mode flag. The following options are available:
- `--tx-mode file` (default) - Wraps the planned migration in its own transaction. If a statement fails, the transaction is rolled back, and Atlas stops execution.
- `--tx-mode none` - Does not create any transaction; statements are executed directly.
Pre & Post Deployment Scripts
To run custom scripts before or after a migration (for example, taking snapshots, seeding lookup tables, or cleaning up after deployment), refer to the pre/post deployment hooks documentation.
Examples
HCL Schema as Desired State
The following example demonstrates how to use Atlas DDL (HCL) as the desired state:
- MySQL
- MariaDB
- PostgreSQL
- SQLite
- SQL Server
- ClickHouse
- Redshift
atlas schema apply \
--url "mysql://root:pass@localhost:3306/example" \
--to "file://schema.hcl" \
--dev-url "docker://mysql/8/example"
atlas schema apply \
--url "maria://root:pass@:3306/example" \
--to "file://schema.hcl" \
--dev-url "docker://maria/latest/example"
atlas schema apply \
--url "postgres://postgres:pass@localhost:5432/database?search_path=public&sslmode=disable" \
--to "file://schema.hcl" \
--dev-url "docker://postgres/15"
atlas schema apply \
--url "sqlite://file.db" \
--to "file://schema.hcl" \
--dev-url "sqlite://file?mode=memory"
atlas schema apply \
--url "sqlserver://sa:P@ssw0rd0995@:1433?database=master" \
--to "file://schema.hcl" \
--dev-url "docker://sqlserver/2022-latest/dev"
atlas schema apply \
--url "clickhouse://localhost:9000/default" \
--to "file://schema.hcl" \
--dev-url "docker://clickhouse/23.11/dev"
atlas schema apply \
--url "redshift://user:pass@redshift-cluster:5439/example?search_path=public&sslmode=disable" \
--to "file://schema.hcl" \
--dev-url "redshift://user:pass@redshift-cluster:5439/dev?search_path=public&sslmode=disable"
SQL Schema as Desired State
The following example demonstrates how to use an SQL schema file as the desired state:
- MySQL
- MariaDB
- PostgreSQL
- SQLite
- SQL Server
- ClickHouse
- Redshift
atlas schema apply \
--url "mysql://root:pass@localhost:3306/example" \
--to "file://schema.sql" \
--dev-url "docker://mysql/8/example"
atlas schema apply \
--url "maria://root:pass@:3306/example" \
--to "file://schema.sql" \
--dev-url "docker://maria/latest/example"
atlas schema apply \
--url "postgres://postgres:pass@localhost:5432/database?search_path=public&sslmode=disable" \
--to "file://schema.sql" \
--dev-url "docker://postgres/15"
atlas schema apply \
--url "sqlite://file.db" \
--to "file://schema.sql" \
--dev-url "sqlite://file?mode=memory"
atlas schema apply \
--url "sqlserver://sa:P@ssw0rd0995@:1433?database=master" \
--to "file://schema.sql" \
--dev-url "docker://sqlserver/2022-latest/dev"
atlas schema apply \
--url "clickhouse://localhost:9000/default" \
--to "file://schema.sql" \
--dev-url "docker://clickhouse/23.11/dev"
atlas schema apply \
--url "redshift://user:pass@redshift-cluster:5439/example?search_path=public&sslmode=disable" \
--to "file://schema.sql" \
--dev-url "redshift://user:pass@redshift-cluster:5439/dev?search_path=public&sslmode=disable"
ORM Schema as Desired State
The following example demonstrates how to use Sequelize models as the desired state:
- JavaScript
- TypeScript
data "external_schema" "sequelize" {
program = [
"npx",
"@ariga/atlas-provider-sequelize",
"load",
"--path", "./path/to/models",
"--dialect", "postgres", // mariadb | mysql | sqlite | mssql
]
}
env "sequelize" {
src = data.external_schema.sequelize.url
dev = "docker://postgres/15"
migration {
dir = "file://migrations"
}
format {
migrate {
diff = "{{ sql . \" \" }}"
}
}
}
data "external_schema" "sequelize" {
program = [
"npx",
"@ariga/ts-atlas-provider-sequelize",
"load",
"--path", "./path/to/models",
"--dialect", "postgres", // mariadb | mysql | sqlite | mssql
]
}
env "sequelize" {
src = data.external_schema.sequelize.url
dev = "docker://postgres/15"
migration {
dir = "file://migrations"
}
format {
migrate {
diff = "{{ sql . \" \" }}"
}
}
}
atlas schema apply \
--env sequelize \
--url "postgres://postgres:pass@localhost:5432/database?search_path=public&sslmode=disable"
To see this in action, check out our Declarative Migrations for Sequelize.js video.
For more ORM examples, go to our ORM guides.
Migration Directory as Desired State
The following example demonstrates how to use the migration directory as the desired state.
- MySQL
- MariaDB
- PostgreSQL
- SQLite
- SQL Server
- ClickHouse
- Redshift
atlas schema apply \
--url "mysql://root:pass@localhost:3306/example" \
--to "file://migrations" \
--dev-url "docker://mysql/8/example"
atlas schema apply \
--url "maria://root:pass@:3306/example" \
--to "file://migrations" \
--dev-url "docker://maria/latest/example"
atlas schema apply \
--url "postgres://postgres:pass@localhost:5432/database?search_path=public&sslmode=disable" \
--to "file://migrations" \
--dev-url "docker://postgres/15"
atlas schema apply \
--url "sqlite://file.db" \
--to "file://migrations" \
--dev-url "sqlite://file?mode=memory"
atlas schema apply \
--url "sqlserver://sa:P@ssw0rd0995@:1433?database=master" \
--to "file://migrations" \
--dev-url "docker://sqlserver/2022-latest/dev"
atlas schema apply \
--url "clickhouse://localhost:9000/default" \
--to "file://migrations" \
--dev-url "docker://clickhouse/23.11/dev"
atlas schema apply \
--url "redshift://user:pass@redshift-cluster:5439/example?search_path=public&sslmode=disable" \
--to "file://migrations" \
--dev-url "redshift://user:pass@redshift-cluster:5439/dev?search_path=public&sslmode=disable"
The URL for the migration directory can also contain two optional query parameters:
- `format` - Migration directory format: `atlas` (default), `golang-migrate`, `goose`, `dbmate`, `flyway`, or `liquibase`.
- `version` - Version number of the last migration file to read. By default, all migrations are read.

For example: `file://migrations?format=golang-migrate&version=20250909104354`
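These query parameters can also be used wherever a migration directory URL appears in the project configuration. A hypothetical sketch, where the URLs and the directory path are placeholders:

```hcl
env "local" {
  url = "postgres://postgres:pass@localhost:5432/database?sslmode=disable"
  // Read a golang-migrate formatted directory as the desired state.
  src = "file://migrations?format=golang-migrate"
  dev = "docker://postgres/15"
}
```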
Multi-Tenant Environments
The Atlas configuration language provides built-in support for executing declarative workflows in multi-tenant
environments. Using the for_each meta-argument, users can define a single env block that is expanded to N
instances – one for each tenant:
- atlas.hcl
- schema.hcl
env "prod" {
for_each = toset(var.tenants)
url = urlsetpath(var.url, each.value)
src = "schema.hcl"
format {
schema {
apply = format(
"{{ json . | json_merge %q }}",
jsonencode({
Tenant : each.value
})
)
}
}
// Inject custom variables to the schema.hcl defined below.
tenant = each.value
}
variable "tenant" {
type = string
description = "The schema we operate on"
}
schema "tenant" {
name = var.tenant
}
table "users" {
schema = schema.tenant
// ...
}
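The `atlas.hcl` example above references `var.tenants` and `var.url` without declaring them. A minimal sketch of these declarations, assuming the values are supplied at runtime with `--var`:

```hcl
variable "url" {
  type        = string
  description = "Base URL of the database server"
}

variable "tenants" {
  type        = list(string)
  description = "Tenant schema names to expand the env block over"
}
```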
Read more about defining declarative workflows using project files in multi-tenant environments.
Deployment Rollout Strategies
When applying schema changes to multiple tenants, you can use Atlas's deployment block to define staged rollout
strategies with fine-grained control over execution order, parallelism, and error handling:
deployment "staged" {
variable "name" {
type = string
}
variable "tier" {
type = string
}
// Stage 1: Deploy to canary tenants first to validate changes.
group "canary" {
match = startswith(var.name, "canary-")
}
// Stage 2: Roll out to free-tier tenants with high parallelism.
group "free" {
match = var.tier == "FREE"
parallel = 10
on_error = CONTINUE
depends_on = [group.canary]
}
// Stage 3: Deploy to paid customers with controlled parallelism.
group "paid" {
parallel = 3
depends_on = [group.free]
}
}
env "prod" {
for_each = toset(data.sql.tenants.values)
url = urlsetpath(var.url, each.value.schema)
src = "atlas://my-app"
rollout {
deployment = deployment.staged
vars = {
name = each.value.name
tier = each.value.tier
}
}
}
This configuration first applies schema changes to canary tenants, then to all free-tier tenants in parallel (up to 10 at a time), and finally to paid tenants (up to 3 at a time).
Read more about deployment rollout strategies for multi-tenant environments.
Applying to Multiple Databases with Shared Schema
A common deployment pattern involves the same schema replicated across multiple database servers (e.g., a regional deployment with identical databases in US, EU, and APAC). This section explains how the plan/approve/apply workflow operates in this scenario.
How Plan Matching Works
When atlas schema apply runs against a database, Atlas checks whether there is a pre-planned migration stored in
the Atlas Registry that matches the schema transition:
Plans are matched by schema state transition, not by database URL. If all your databases are in the same state (S1), a single approved plan (S1 → S2) works for all of them.
Handling State Divergence
Assuming your CI/CD pipeline is already configured with schema/plan
and schema/plan/approve actions, your typical
workflow creates and approves plans during the PR/merge process. However, databases may occasionally diverge due to
manual changes, partial failures, or different timing of previous migrations.
When a database is in a different state than expected and no matching pre-approved plan exists, Atlas computes the migration, runs analysis (linting), and handles it based on your Review Policy:
| Review Policy | Behavior |
|---|---|
| `ERROR` | Auto-approve if no lint errors; otherwise, create an ad-hoc plan and wait for approval |
| `WARNING` | Auto-approve if no warnings/errors; otherwise, create an ad-hoc plan and wait for approval |
| `ALWAYS` | Always create an ad-hoc plan and wait for approval |
When auto-approval doesn't pass, Atlas can use ad-hoc approvals: it automatically creates a plan, provides you a link to review it in Atlas Registry, and waits for your approval before continuing. This ensures no unexpected changes are applied without explicit review.
For more information on setting up ad-hoc approvals in your CI/CD pipeline: