Verify Atlas and Export Your Schema
After successfully connecting to your database with Atlas, the next step is to verify that Atlas can inspect your database schema and normalize it correctly, such that re-applying the exported schema to the database produces no diff.
Step 1: Export your database URL as an env var
To make the examples in this section brief, start by setting a local environment variable containing the URL you determined for your database:
export DB_URL="<url>"
Step 2: Export Database to Code
Use the Atlas schema inspect (or schema export) command to connect to your database and export
its schema as code.
Atlas supports two output formats:
- HCL: A declarative syntax similar to Terraform. Supports IDE features like jump-to-definition and auto-completion. Resources can be defined in any order across files.
- SQL: Standard DDL statements (CREATE TABLE, etc.). Atlas automatically generates atlas:import directives to resolve dependencies between objects.
- HCL File
- SQL File
- HCL Directory
- SQL Directory
atlas schema inspect --url "$DB_URL" > schema.hcl
atlas schema inspect --url "$DB_URL" --format '{{ sql . }}' > schema.sql
atlas schema inspect --url "$DB_URL" --format '{{ hcl . | split | write "schema" }}'
The split function breaks the schema into one file per object (tables, views, functions, etc.).
The write function writes them to the schema/ directory. Resources can be defined in any order
across files, and Atlas resolves dependencies automatically.
atlas schema inspect --url "$DB_URL" --format '{{ sql . | split | write "schema" }}'
The split function breaks the schema into one file per object. The write function writes them
to the schema/ directory with a main.sql entry point and subdirectories per object type. Each
file includes atlas:import directives for its dependencies.
For the full reference, see Export Database Schema to Code.
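As a quick sanity check, you can re-inspect the exported file itself rather than the live database. Note that inspecting a file (as opposed to a database) requires the Dev Database described in the next step; the dev-url below is illustrative and assumes PostgreSQL.

# Re-inspect the exported schema from the file (requires a dev database,
# covered in the next step). The dev-url shown is illustrative.
atlas schema inspect --url "file://schema.hcl" --dev-url "docker://postgres/15/dev?search_path=public"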
Step 3: Determine your Dev Database URL
Notice that when we discuss a Dev Database in the Atlas documentation, we DO NOT refer to the local database you use for development, but to a different concept explained below.
To operate correctly, Atlas utilizes a Dev Database to normalize and verify schemas and migrations. Essentially, a dev-database is an empty database of the same type and version as the one you use in production. Atlas may run some operations against this database, and it is responsible for cleaning up afterward, leaving the database empty.
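The normalization flow can be sketched as follows (a conceptual outline of what Atlas does with the dev database, not literal commands it runs):

# 1. Start from an empty dev database (e.g., an ephemeral Docker container).
# 2. Apply the desired schema (your HCL/SQL files) to it.
# 3. Inspect the result to obtain the database's normalized form of the schema.
# 4. Clean up, leaving the dev database empty again.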
While you may provide your own dev database, most Atlas users prefer the built-in docker:// driver,
which spins up a local, ephemeral Docker container and disposes of it for you afterward.
As we mentioned in the previous section, Atlas operates differently depending on the connection scope. Be sure to use the same scope for your dev-database as for your target database to avoid incorrect diff calculations.
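For example, if your target URL is scoped to a single schema, scope the dev-url the same way (the URLs below are illustrative and assume PostgreSQL):

# Target scoped to one schema -> dev database scoped to one schema.
--url "postgres://user:pass@localhost:5432/app?search_path=public"
--dev-url "docker://postgres/15/dev?search_path=public"
# Target scoped to the entire database -> dev database scoped the same way.
--url "postgres://user:pass@localhost:5432/app"
--dev-url "docker://postgres/15/dev"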
Select your database below to get the corresponding Docker Dev Database URL:
- PostgreSQL
- MySQL
- MariaDB
- SQLite
- SQL Server
- ClickHouse
- Redshift
- Oracle
- Spanner
- Snowflake
- Databricks
- Aurora DSQL
- CockroachDB
# When working on a single database schema, use the auto-created
# "public" schema as the search path.
--dev-url "docker://postgres/15/dev?search_path=public"
# When working on multiple database schemas.
--dev-url "docker://postgres/15/dev"
The docker driver also supports the postgis and pgvector images in its URL format:
--dev-url "docker://postgis/latest/dev"
--dev-url "docker://pgvector/pg17/dev"
To work with a custom Docker image, use one of the following formats:
# When working on a single database schema.
docker+postgres://org/image/dev?search_path=public
docker+postgres://ghcr.io/namespace/image:tag/dev?search_path=public
# For local/official images, leave host empty or use "_".
docker+postgres://_/local/dev?search_path=public
docker+postgres://_/official:latest/dev?search_path=public
# When working on multiple database schemas.
docker+postgres://org/image/dev
# Default database is "postgres".
docker+postgres://org/image:tag
# When working on a single database schema.
--dev-url "docker://mysql/8/dev"
# When working on multiple database schemas.
--dev-url "docker://mysql/8"
To work with a custom Docker image, use one of the following formats:
# When working on a single database schema.
docker+mysql://org/image/dev
docker+mysql://user/image:tag/dev
# For local/official images, leave host empty or use "_".
docker+mysql:///local/dev
docker+mysql://_/mysql:latest/dev
# When working on multiple database schemas.
docker+mysql://local
docker+mysql://org/image
docker+mysql://user/image:tag
docker+mysql://_/mysql:latest
# When working on a single database schema.
--dev-url "docker://maria/latest/schema"
# When working on multiple database schemas.
--dev-url "docker://maria/latest"
To work with a custom Docker image, use one of the following formats:
# When working on a single database schema.
docker+maria://org/image/dev
docker+maria://user/image:tag/dev
# For local/official images, leave host empty or use "_".
docker+maria:///local/dev
docker+maria://_/mariadb:latest/dev
# When working on multiple database schemas.
docker+maria://local
docker+maria://org/image
docker+maria://user/image:tag
docker+maria://_/mariadb:latest
# Atlas supports working with an in-memory database in SQLite.
--dev-url "sqlite://dev?mode=memory"
# When working on a single database schema.
--dev-url "docker://sqlserver/2022-latest/dev?mode=schema"
# When working on multiple database schemas.
--dev-url "docker://sqlserver/2022-latest/dev?mode=database"
To work with an Azure SQL Edge Docker image or a specific SQL Server version, use one of the following formats:
# Run SQLServer 2017-latest in schema mode.
docker://sqlserver/2017-latest?mode=schema
docker://sqlserver/2019-latest?mode=schema
docker://sqlserver/2022-latest?mode=schema
# Run Azure SQL Edge 1.0.7 in schema mode.
docker+sqlserver://mcr.microsoft.com/azure-sql-edge:1.0.7?mode=schema
# When working on a single database schema
--dev-url "docker://clickhouse/23.11/dev"
# When working on multiple database schemas.
--dev-url "docker://clickhouse/23.11"
To work with a custom Docker image, use one of the following formats:
# When working on a single database schema.
docker+clickhouse://org/image/dev
docker+clickhouse://user/image:tag/dev
# For local/official images, leave host empty or use "_".
docker+clickhouse:///local/dev
docker+clickhouse://_/clickhouse:latest/dev
# When working on multiple database schemas.
docker+clickhouse://local
docker+clickhouse://org/image
docker+clickhouse://user/image:tag
docker+clickhouse://_/clickhousedb:latest
For complex setups involving a cluster, you need to configure the dev database manually. Refer to the ClickHouse Docker Compose Recipes for example configurations.
If you are using ClickHouse Cloud, note that some of its advanced features are not supported in the Docker image. We recommend using another ClickHouse Cloud instance as the dev-database.
Redshift is a managed service and does not support running in a Docker container. To use a dev-database with Redshift, we recommend using a real Redshift cluster with a separate database.
# When working on a single database schema, use the auto-created
# "public" schema as the search path.
--dev-url "redshift://user:pass@redshift-cluster:5439/dev?search_path=public"
# When working on multiple database schemas.
--dev-url "redshift://user:pass@redshift-cluster:5439/dev"
# When working on a single database schema.
--dev-url "docker://oracle/free:latest?mode=schema"
# When working on multiple database schemas.
--dev-url "docker://oracle/free:latest?mode=database"
To use a different Oracle version, use one of the following formats:
# All free Oracle images are available at
# https://container-registry.oracle.com/ords/ocr/ba/database/free
--dev-url "docker://oracle/free:latest"
--dev-url "docker://oracle/free:latest-lite"
--dev-url "docker://oracle/free:23.8.0.0-lite-amd64"
# For older versions on ARM devices (Apple Silicon), use the `-arm64` suffix.
--dev-url "docker://oracle/free:23.7.0.0-arm64"
--dev-url "docker://oracle/free:23.7.0.0-lite-arm64"
# All express Oracle images are available at
# https://container-registry.oracle.com/ords/ocr/ba/database/express
--dev-url "docker://oracle/express:latest"
# For enterprise Oracle images, you need to login to the Oracle Container Registry
# and accept their terms of use.
# https://container-registry.oracle.com/ords/ocr/ba/database/enterprise
--dev-url "docker://oracle/enterprise:latest"
# The enterprise image allows you to change the PDB name.
--dev-url "docker://oracle/enterprise:21.3.0.0/ORCLPDB1"
The Oracle Docker images are big and can take a while to download. It's recommended to run the docker pull command
before using the --dev-url option, to avoid waiting for the image to download during the schema apply or migrate diff commands.
The enterprise Oracle images take a while to start up, so it's recommended to add the timeout=10m parameter to the --dev-url option to avoid timeout errors.
--dev-url "docker://oracle/enterprise:21.3.0.0/ORCLPDB1?timeout=10m"
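For example, to pre-pull the free image before running Atlas (the exact pull path is an assumption based on the Oracle Container Registry and may change; check the registry pages linked above):

docker pull container-registry.oracle.com/database/free:latest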
# GoogleSQL dialect (default)
--dev-url "docker://spanner/latest"
# PostgreSQL dialect
--dev-url "docker://spannerpg/latest"
Snowflake is a managed service and does not support running in a Docker container. To use a dev-database with Snowflake, we recommend using another Snowflake database as the dev-database.
# When working on a single database schema.
--dev-url "snowflake://user:pass@account_identifier/database/schema"
# When working on multiple database schemas.
--dev-url "snowflake://user:pass@account_identifier/database"
Databricks is a managed service and does not support running in a Docker container. To use the dev-database with Databricks, we recommend using an additional Databricks catalog with a separate schema.
# When working on a single database schema
--dev-url "databricks://token@host:443/warehouse?catalog=dev&schema=default"
# When working on multiple database schemas
# catalog is optional and defaults to workspace
--dev-url "databricks://token@host:443/warehouse?catalog=dev"
The docker://dsql driver uses a PostgreSQL container but generates DSQL-compatible SQL (e.g., CREATE INDEX ASYNC).
# DSQL dev database (uses PostgreSQL under the hood)
--dev-url "docker://dsql/16"
# With specific schema scope
--dev-url "docker://dsql/16/postgres?search_path=public"
The Docker container runs PostgreSQL, not actual DSQL. Some DSQL limitations (e.g., no foreign keys, no triggers) are not enforced during local development. Ensure your schema only uses DSQL-supported features.
# When working on a single database schema.
--dev-url "docker://crdb/v25.1.1/dev?search_path=public"
# When working on multiple database schemas.
--dev-url "docker://crdb/v25.1.1/dev"
For more details, see the Dev Database guide.
Step 4: Verify Zero Diff
Next, use Atlas's schema apply --dry-run command to check that Atlas sees no difference between
your exported schema and the actual database:
- HCL File
- SQL File
- HCL Directory
- SQL Directory
atlas schema apply \
--url "$DB_URL" \
--to file://schema.hcl \
--dev-url <dev_db_url> \
--dry-run
atlas schema apply \
--url "$DB_URL" \
--to file://schema.sql \
--dev-url <dev_db_url> \
--dry-run
atlas schema apply \
--url "$DB_URL" \
--to file://schema/ \
--dev-url <dev_db_url> \
--dry-run
atlas schema apply \
--url "$DB_URL" \
--to file://schema/ \
--dev-url <dev_db_url> \
--dry-run
Be sure to replace <dev_db_url> with the Dev Database URL you determined in step 3.
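For example, with the PostgreSQL Docker dev database from step 3 and the HCL file export, the full command would look like this (adjust the dev-url to match your own database type and scope):

atlas schema apply \
  --url "$DB_URL" \
  --to file://schema.hcl \
  --dev-url "docker://postgres/15/dev?search_path=public" \
  --dry-run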
If everything works correctly, Atlas should print out a message similar to:
Schemas are synced, no changes to be made.
If you encounter any issues during this step, reach out via your shared Slack Connect channel (commercial PoC) or use the Community Support channels.