Declarative schema migrations
With Atlas, users do not need to plan database schema changes themselves. Instead of requiring users to figure out the correct SQL statements to bring their database to the desired state, Atlas supports a workflow we call declarative schema migration: the user provides a URL (connection string) to the target database along with the desired schema, and Atlas does the planning.
Read more about declarative workflows
By default, the atlas schema apply command manages only schemas, tables, and their associated indexes and constraints, such as foreign keys and checks.
Views, materialized views, functions, procedures, triggers, sequences, domains, extensions, and additional database features are available to Atlas Pro users. To include these resources in schema migrations, use the following command:
atlas login
Flags
The schema apply command auto-generates a migration plan and applies it to the database to bring it to the desired state. The desired state can be defined using an HCL or SQL schema definition, a database URL, or external schemas, such as ORMs. The command accepts the following flags (a combined example follows the list):
- --url (-u accepted as well) - the URL of the database to be inspected.
- --to - a list of URLs to the desired state: can be a database URL, an HCL or SQL schema, or a migration directory.
- --dev-url - a URL to the Dev-Database.
- --schema (optional, may be supplied multiple times) - schemas to inspect within the target database.
- --exclude (optional, may be supplied multiple times) - filter out resources matching the given glob pattern.
- --format (optional) - Go template to use to format the output.
- --edit (optional) - open the planned migration in the default editor, allowing the user to modify it.
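For example, a minimal sketch combining several of these flags (the connection details and glob pattern below are hypothetical):
atlas schema apply \
  --url "mysql://root:pass@localhost:3306/example" \
  --to "file://schema.hcl" \
  --dev-url "docker://mysql/8/example" \
  --exclude "tmp_*"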
Approval Policy
The schema apply command requires user review and approval before executing the migration against the target database. The approval process can occur during migration planning (locally or in CI), automatically in the applying stage using the linting review policy, or be completely skipped, which is not recommended in production environments. Let's cover all options:
- Interactive Review (default): The atlas schema apply command prints the SQL statements it is going to run and prompts the user for approval. Users can review the migration plan and either approve or reject it. Atlas Pro users can set the --dev-url flag to run analysis and simulation of the proposed changes on the Dev-Database and get a detailed linting report when reviewing the migration.
- Skip review (auto-approval): The --auto-approve flag can be used to skip the review process and automatically apply the migration to the target database. Although this option is convenient for experimentation and development, it is not recommended for production environments, as it may lead to unexpected changes, such as destructive operations.
- Approve using Review Policy: Atlas ships with an analysis engine that can detect the impact of proposed changes to the target database. For example, Atlas can detect irreversible destructive changes that will result in data loss, or data-dependent changes that may fail due to data integrity constraints. Users can configure Atlas to automatically approve migrations that pass the analysis engine checks (for example, no destructive changes were detected) and to require manual review for migrations that fail them. Read more on how to configure the Review Policy for your project.
- Pre-plan (and Approve) Migrations: The atlas schema plan command allows users to pre-plan, review, and approve migrations before executing atlas schema apply on the database. This enables users to preview and modify SQL changes, involve team members in the review process, and ensure that no human intervention is required during the atlas schema apply phase. Read more about atlas schema plan and how to integrate it into your CI/CD pipeline.
Review Policy
Users can define in which cases their schema changes require manual review and approval, provided that no migration has already been planned and approved for the given schema transition (State1 -> State2). How does it work?
- During atlas schema apply, Atlas checks if there is an approved pre-planned migration for the given schema transition (State1 -> State2). If there is one, Atlas applies it without requiring manual review.
- If no migration has been pre-planned, Atlas runs analysis and simulation on the Dev-Database to detect the impact of the proposed changes.
- Atlas then prints the linting report and, based on the review policy defined in the atlas.hcl file, decides whether to auto-approve the migration or require manual review.
The review policy can be set to one of the following values:
- ERROR - Atlas requires manual review and approval only if the linting report contains errors, i.e., one of the analyzers is configured to return an error. See the destructive-changes analyzer as an example.
- WARNING - Atlas requires manual review and approval if the linting report contains warnings (diagnostics) or errors. See the list of checks that can be detected by the analyzers.
- ALWAYS (default) - Atlas always requires manual review and approval, regardless of the linting report.
- Global Configuration
- Env Configuration
lint {
  review = ERROR // ERROR | WARNING | ALWAYS
  destructive {
    error = false
  }
}
env "prod" {
  lint {
    review = ERROR // ERROR | WARNING | ALWAYS
    destructive {
      error = false
    }
  }
}
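With such a policy in place, a minimal sketch of a run that picks up the "prod" environment (the connection details below are hypothetical and may also be defined inside the env block):
atlas schema apply \
  --env "prod" \
  --url "mysql://root:pass@localhost:3306/example" \
  --to "file://schema.hcl" \
  --dev-url "docker://mysql/8/example"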
The typical workflow for applying schema changes without manual review is to use atlas schema plan to pre-plan the migration and to fall back to the review policy for schema transitions that were not pre-planned but are still safe to apply. The atlas schema plan command can be integrated into the CI/CD pipeline to ensure that all schema changes are reviewed and approved before being applied to the database, as sketched below.
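For instance, a minimal sketch of the pre-plan step (the URLs are hypothetical; see the atlas schema plan documentation for the full flow, including pushing and approving plans):
atlas schema plan \
  --from "mysql://root:pass@localhost:3306/example" \
  --to "file://schema.hcl" \
  --dev-url "docker://mysql/8/example"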
Auto-approval
Before executing the migration against the target database, Atlas prints the SQL statements that it is going to run and prompts the user for approval. Users who wish to automatically approve may run the schema apply command with the --auto-approve flag.
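For example (connection details hypothetical):
atlas schema apply \
  --url "mysql://root:pass@localhost:3306/example" \
  --to "file://schema.hcl" \
  --dev-url "docker://mysql/8/example" \
  --auto-approve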
Dry-runs
To skip the execution of the SQL queries against the target database, users may provide the --dry-run flag. When invoked with this flag, Atlas connects to the target database, inspects its current state, calculates the diff between the inspected and desired schemas, and prints out a series of SQL statements that would reconcile the gap between them.
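For example, the following sketch prints the planned statements without executing them (connection details hypothetical):
atlas schema apply \
  --url "mysql://root:pass@localhost:3306/example" \
  --to "file://schema.hcl" \
  --dev-url "docker://mysql/8/example" \
  --dry-run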
Dev-database
When storing schema definitions, many database engines perform some form of normalization: even though we provide a specific definition of some aspect of the schema, the database stores it in another, equivalent form. In certain situations, it may therefore appear to Atlas as if some diff exists between the desired and inspected schemas, whereas in reality there is none.
To overcome these situations, users may use the --dev-url flag to provide Atlas with a connection string to a Dev-Database. This database is used to normalize the schema prior to planning migrations and to simulate changes, ensuring their applicability before execution.
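As a concrete illustration of such normalization (using MySQL as an example), the hypothetical table below declares a column as BOOL, yet MySQL stores it differently; diffing the literal definition against the inspected schema would report a spurious change, which normalizing both sides on the Dev-Database avoids:
CREATE TABLE flags (
  -- Declared as BOOL; MySQL normalizes and stores this as tinyint(1).
  active BOOL
);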
Diff Policy
Atlas allows configuring the schema diffing policy in project configuration to fine-tune or modify suggested changes before applying them to the database:
- Skip Destructive
- Concurrent Indexes
variable "destructive" {
type = bool
default = false
}
env "local" {
diff {
skip {
drop_schema = !var.destructive
drop_table = !var.destructive
}
}
}
The usage is as follows:
atlas schema apply --env "local" --var "destructive=true"
env "local" {
diff {
// By default, indexes are not added or dropped concurrently.
concurrent_index {
add = true
drop = true
}
}
}
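The usage is similar; a sketch with hypothetical connection details (concurrent index builds apply to engines that support them, such as PostgreSQL):
atlas schema apply \
  --env "local" \
  --url "postgres://postgres:pass@localhost:5432/database?search_path=public&sslmode=disable" \
  --to "file://schema.hcl" \
  --dev-url "docker://postgres/15"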
Examples
HCL schema
The following example demonstrates how to use Atlas DDL (HCL) as the desired state and update the database schema to match it:
- MySQL
- MariaDB
- PostgreSQL
- SQLite
- SQL Server
- ClickHouse
- Redshift
atlas schema apply \
--url "mysql://root:pass@localhost:3306/example" \
--to "file://schema.hcl" \
--dev-url "docker://mysql/8/example"
atlas schema apply \
--url "maria://root:pass@:3306/example" \
--to "file://schema.hcl" \
--dev-url "docker://maria/latest/example"
atlas schema apply \
--url "postgres://postgres:pass@localhost:5432/database?search_path=public&sslmode=disable" \
--to "file://schema.hcl" \
--dev-url "docker://postgres/15"
atlas schema apply \
--url "sqlite://file.db" \
--to "file://schema.hcl" \
--dev-url "sqlite://file?mode=memory"
atlas schema apply \
--url "sqlserver://sa:P@ssw0rd0995@:1433?database=master" \
--to "file://schema.hcl" \
--dev-url "docker://sqlserver/2022-latest/dev"
atlas schema apply \
--url "clickhouse://localhost:9000/default" \
--to "file://schema.hcl" \
--dev-url "docker://clickhouse/23.11/dev"
atlas schema apply \
--url "redshift://user:pass@redshift-cluster:5439/example?search_path=public&sslmode=disable" \
--to "file://schema.hcl" \
--dev-url "redshift://user:pass@redshift-cluster:5439/dev?search_path=public&sslmode=disable"
SQL schema
The following example demonstrates how to use an SQL schema file as the desired state and update the database schema to match it:
- MySQL
- MariaDB
- PostgreSQL
- SQLite
- SQL Server
- ClickHouse
- Redshift
atlas schema apply \
--url "mysql://root:pass@localhost:3306/example" \
--to "file://schema.sql" \
--dev-url "docker://mysql/8/example"
atlas schema apply \
--url "maria://root:pass@:3306/example" \
--to "file://schema.sql" \
--dev-url "docker://maria/latest/example"
atlas schema apply \
--url "postgres://postgres:pass@localhost:5432/database?search_path=public&sslmode=disable" \
--to "file://schema.sql" \
--dev-url "docker://postgres/15"
atlas schema apply \
--url "sqlite://file.db" \
--to "file://schema.sql" \
--dev-url "sqlite://file?mode=memory"
atlas schema apply \
--url "sqlserver://sa:P@ssw0rd0995@:1433?database=master" \
--to "file://schema.sql" \
--dev-url "docker://sqlserver/2022-latest/dev"
atlas schema apply \
--url "clickhouse://localhost:9000/default" \
--to "file://schema.sql" \
--dev-url "docker://clickhouse/23.11/dev"
atlas schema apply \
--url "redshift://user:pass@redshift-cluster:5439/example?search_path=public&sslmode=disable" \
--to "file://schema.sql" \
--dev-url "redshift://user:pass@redshift-cluster:5439/dev?search_path=public&sslmode=disable"
Atlas loads the desired state by executing the SQL files on the provided dev database, compares it against the database's current state by inspecting its schema, and writes a migration plan for moving from the current state to the desired state.
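For reference, a minimal sketch of what such a schema.sql file might contain (the table is hypothetical):
CREATE TABLE users (
  id int NOT NULL,
  PRIMARY KEY (id)
);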
Migration directory
The following example demonstrates how to use the migration directory as the desired state and update the database schema to match it. The URL for the migration directory can contain two optional query parameters (an example using them follows the per-database commands below):
- format - migration directory format: atlas (default), golang-migrate, goose, dbmate, flyway, liquibase.
- version - the version of migration files up to which migrations are read. By default, all migrations are read.
- MySQL
- MariaDB
- PostgreSQL
- SQLite
- SQL Server
- ClickHouse
- Redshift
atlas schema apply \
--url "mysql://root:pass@localhost:3306/example" \
--to "file://migrations" \
--dev-url "docker://mysql/8/example"
atlas schema apply \
--url "maria://root:pass@:3306/example" \
--to "file://migrations" \
--dev-url "docker://maria/latest/example"
atlas schema apply \
--url "postgres://postgres:pass@localhost:5432/database?search_path=public&sslmode=disable" \
--to "file://migrations" \
--dev-url "docker://postgres/15"
atlas schema apply \
--url "sqlite://file.db" \
--to "file://migrations" \
--dev-url "sqlite://file?mode=memory"
atlas schema apply \
--url "sqlserver://sa:P@ssw0rd0995@:1433?database=master" \
--to "file://migrations" \
--dev-url "docker://sqlserver/2022-latest/dev"
atlas schema apply \
--url "clickhouse://localhost:9000/default" \
--to "file://migrations" \
--dev-url "docker://clickhouse/23.11/dev"
atlas schema apply \
--url "redshift://user:pass@redshift-cluster:5439/example?search_path=public&sslmode=disable" \
--to "file://migrations" \
--dev-url "redshift://user:pass@redshift-cluster:5439/dev?search_path=public&sslmode=disable"
Multi-Tenant environments
The Atlas configuration language provides built-in support for executing declarative workflows in multi-tenant environments. Using the for_each meta-argument, users can define a single env block that is expanded to N instances, one for each tenant:
- atlas.hcl
- schema.hcl
env "prod" {
for_each = toset(var.tenants)
url = urlsetpath(var.url, each.value)
src = "schema.hcl"
format {
schema {
apply = format(
"{{ json . | json_merge %q }}",
jsonencode({
Tenant : each.value
})
)
}
}
// Inject custom variables to the schema.hcl defined below.
tenant = each.value
}
variable "tenant" {
type = string
description = "The schema we operate on"
}
schema "tenant" {
name = var.tenant
}
table "users" {
schema = schema.tenant
// ...
}
Read more about how to define declarative workflows using project files in multi-tenant environments.
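For completeness, a sketch of how the variables referenced above might be declared in atlas.hcl (the default values are hypothetical; in practice they are often provided per deployment):
variable "url" {
  type    = string
  default = "mysql://root:pass@localhost:3306/"
}

variable "tenants" {
  type    = list(string)
  default = ["tenant_a", "tenant_b"]
}
With these in place, running atlas schema apply --env "prod" expands the env block once per tenant and applies the schema to each one.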