Getting Started with Atlas

Atlas is a language-independent tool for managing and migrating database schemas using modern DevOps principles. Define your desired schema, and Atlas will plan, lint, test, and safely apply the changes - like Terraform, but for databases. This guide takes about 5 minutes.

Using an AI Coding Agent?

If you're using Claude Code, Cursor, GitHub Copilot, Codex, or any AI agent that can run CLI commands - let it handle the setup. After installing Atlas (Step 1 below), tell your agent:

Set up Atlas for database schema management in this project.
Read https://atlasgo.io/guides/ai-tools for setup instructions.

Your agent will read the guide, install the Atlas skill, and configure your project.

Step 1: Install Atlas

To download and install the latest release of the Atlas CLI, simply run the following in your terminal:

curl -sSf https://atlasgo.sh | sh

Step 2: Log In to Atlas

Log in to unlock Atlas Pro features such as migration linting, testing, schema visualization, and CI/CD integration:

atlas login

This opens your browser to sign up or log in. You can continue this guide without logging in, but Pro features will be unavailable.

Step 3: Define Your Schema

Create a file called schema.sql. This is your desired database schema - Atlas reads it and figures out what needs to change.

schema.sql
CREATE TABLE users (
  id serial PRIMARY KEY,
  name varchar(255) NOT NULL,
  email varchar(255) UNIQUE NOT NULL,
  created_at timestamptz DEFAULT now()
);

This guide uses PostgreSQL, but Atlas also supports MariaDB, DSQL, Redshift, Snowflake, Oracle, Spanner, CockroachDB, Databricks, and more.

Step 4: Set Up Your Database

Set your database URL. If you already have a database running, point to it directly. Otherwise, Atlas can spin one up for you using Docker:

# Spin up a local PostgreSQL database (scoped to the public schema)
export DATABASE_URL=$(atlas tool docker --url "docker://postgres/17/myapp?search_path=public" --name my-db)

# Or use an existing database
# export DATABASE_URL="postgres://user:password@localhost:5432/myapp?search_path=public&sslmode=disable"
Cleaning up

When you're done, stop the container with atlas tool docker kill --name my-db.
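Before moving on, you can sanity-check that the URL is reachable. This step is optional; atlas schema inspect (covered in Step 5) simply prints the current schema, which is empty on a fresh database:

```shell
# Optional sanity check: inspect the current schema.
# On a freshly created database this prints an empty result.
atlas schema inspect --url "$DATABASE_URL"
```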

Step 5: Declarative Workflow

The declarative workflow lets you define the desired state of your database schema, and Atlas figures out what needs to change.

Apply the schema

Run atlas schema apply to push your schema.sql to the database. The --dev-url flag points to a disposable database that Atlas uses to compute the diff between your desired and current schema:

atlas schema apply \
  --url "$DATABASE_URL" \
  --to file://schema.sql \
  --dev-url "docker://postgres/17/dev?search_path=public"

Atlas compares the desired state (schema.sql) with the current state of the database, plans the migration, and asks for confirmation:

Planning migration statements (1 in total):

-- create "users" table:
-> CREATE TABLE "users" (
     "id" serial NOT NULL,
     "name" character varying(255) NOT NULL,
     "email" character varying(255) NOT NULL,
     "created_at" timestamptz NULL DEFAULT now(),
     PRIMARY KEY ("id"),
     CONSTRAINT "users_email_key" UNIQUE ("email")
   );

-------------------------------------------

? Approve or abort the plan:
▸ Approve and apply
Abort

Approve, and the table is created. Verify it by inspecting the database:

atlas schema inspect --url "$DATABASE_URL" --format '{{ sql . }}'
-- Create "users" table
CREATE TABLE "users" (
  "id" serial NOT NULL,
  "name" character varying(255) NOT NULL,
  "email" character varying(255) NOT NULL,
  "created_at" timestamptz NULL DEFAULT now(),
  PRIMARY KEY ("id"),
  CONSTRAINT "users_email_key" UNIQUE ("email")
);

The table exists. That's the core idea behind Atlas's automatic migration planning - you define the desired state, and Atlas handles the rest.
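In scripts or CI, the approval prompt can be skipped with --auto-approve (the same flag used by atlas schema clean later in this guide). A minimal sketch, assuming the plan has been reviewed elsewhere:

```shell
# Non-interactive apply: skips the approval prompt.
# Use with care - the plan is applied without manual review.
atlas schema apply \
  --url "$DATABASE_URL" \
  --to file://schema.sql \
  --dev-url "docker://postgres/17/dev?search_path=public" \
  --auto-approve
```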

Make a schema change

Now edit schema.sql to add a blog_posts table:

schema.sql
CREATE TABLE users (
  id serial PRIMARY KEY,
  name varchar(255) NOT NULL,
  email varchar(255) UNIQUE NOT NULL,
  created_at timestamptz DEFAULT now()
);

CREATE TABLE blog_posts (
  id serial PRIMARY KEY,
  title varchar(255) NOT NULL,
  body text,
  author_id int REFERENCES users(id),
  created_at timestamptz DEFAULT now()
);

Apply schema changes

Run atlas schema apply again (same command as before). Atlas detects that users already exists and only creates what's new - the blog_posts table:

Planning migration statements (1 in total):

-- create "blog_posts" table:
-> CREATE TABLE "blog_posts" (
     "id" serial NOT NULL,
     "title" character varying(255) NOT NULL,
     "body" text NULL,
     "author_id" integer NULL,
     "created_at" timestamptz NULL DEFAULT now(),
     PRIMARY KEY ("id"),
     CONSTRAINT "blog_posts_author_id_fkey" FOREIGN KEY ("author_id") REFERENCES "users" ("id") ON UPDATE NO ACTION ON DELETE NO ACTION
   );

-------------------------------------------

? Approve or abort the plan:
▸ Approve and apply
Abort

Atlas automatically computes the diff between your desired and current schema - you edit the schema, Atlas generates the migration.
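To preview the planned statements without the prompt, and without touching the database, atlas schema apply also accepts a --dry-run flag:

```shell
# Print the planned migration statements and exit without applying.
atlas schema apply \
  --url "$DATABASE_URL" \
  --to file://schema.sql \
  --dev-url "docker://postgres/17/dev?search_path=public" \
  --dry-run
```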

Read more about the Declarative Workflow and Pre-planning Migrations for production deployments.

Step 6: Versioned Workflow

For teams that want migration files checked into version control, reviewed in PRs, and applied in CI/CD, Atlas can generate them automatically from your schema.

tip

The commands below assume the default migration directory (file://migrations). If your project uses a different directory, pass --dir "file://<your-migrations-dir>" to the commands below.
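For example, if your migrations live under db/migrations (a hypothetical path chosen for illustration), the diff command from the next section would look like this:

```shell
# Same diff command, pointed at a non-default migration directory.
atlas migrate diff initial \
  --dir "file://db/migrations" \
  --to file://schema.sql \
  --dev-url "docker://postgres/17/dev?search_path=public"
```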

Generate the first migration

Your schema.sql already has both tables from Step 5. Generate the initial migration:

atlas migrate diff initial \
  --to file://schema.sql \
  --dev-url "docker://postgres/17/dev?search_path=public"

Atlas creates a migrations/ directory with the generated SQL file and a checksum file:

migrations/
├── 20260215120000_initial.sql
└── atlas.sum
migrations/20260215120000_initial.sql
-- Create "users" table
CREATE TABLE "users" (
  "id" serial NOT NULL,
  "name" character varying(255) NOT NULL,
  "email" character varying(255) NOT NULL,
  "created_at" timestamptz NULL DEFAULT now(),
  PRIMARY KEY ("id"),
  CONSTRAINT "users_email_key" UNIQUE ("email")
);
-- Create "blog_posts" table
CREATE TABLE "blog_posts" (
  "id" serial NOT NULL,
  "title" character varying(255) NOT NULL,
  "body" text NULL,
  "author_id" integer NULL,
  "created_at" timestamptz NULL DEFAULT now(),
  PRIMARY KEY ("id"),
  CONSTRAINT "blog_posts_author_id_fkey" FOREIGN KEY ("author_id") REFERENCES "users" ("id") ON UPDATE NO ACTION ON DELETE NO ACTION
);

Lint the migration

Before applying, validate the migration for safety issues (destructive changes, data loss, etc.):

atlas migrate lint \
  --dev-url "docker://postgres/17/dev?search_path=public" \
  --latest 1
Analyzing changes until version 20260215120000 (1 migration in total):

-- analyzing version 20260215120000
-- non-optimal columns alignment:
-- L2: Table "users" has 4 redundant bytes of padding per row. To reduce disk space, the
optimal order of the columns is as follows: "created_at", "id", "name", "email"
https://atlasgo.io/lint/analyzers#PG110
-- ok (563.708µs)

-------------------------
-- 115.851ms
-- 1 version with warnings
-- 2 schema changes
-- 1 diagnostic

No critical issues - just optimization suggestions for PostgreSQL.

Apply the migration

Since the database already has tables from the declarative workflow in Step 5, clean it first:

atlas schema clean --url "$DATABASE_URL" --auto-approve

Then apply the migration:

atlas migrate apply --url "$DATABASE_URL"
Migrating to version 20260215120000 (1 migrations in total):

-- migrating version 20260215120000
-> CREATE TABLE "users" (
     "id" serial NOT NULL,
     "name" character varying(255) NOT NULL,
     "email" character varying(255) NOT NULL,
     "created_at" timestamptz NULL DEFAULT now(),
     PRIMARY KEY ("id"),
     CONSTRAINT "users_email_key" UNIQUE ("email")
   );
-> CREATE TABLE "blog_posts" (
     "id" serial NOT NULL,
     "title" character varying(255) NOT NULL,
     "body" text NULL,
     "author_id" integer NULL,
     "created_at" timestamptz NULL DEFAULT now(),
     PRIMARY KEY ("id"),
     CONSTRAINT "blog_posts_author_id_fkey" FOREIGN KEY ("author_id") REFERENCES "users" ("id") ON UPDATE NO ACTION ON DELETE NO ACTION
   );
-- ok (2.896ms)

-------------------------
-- 1 migration
-- 2 sql statements

Add a second migration

Add an updated_at column to blog_posts in your schema.sql:

schema.sql
CREATE TABLE users (
  id serial PRIMARY KEY,
  name varchar(255) NOT NULL,
  email varchar(255) UNIQUE NOT NULL,
  created_at timestamptz DEFAULT now()
);

CREATE TABLE blog_posts (
  id serial PRIMARY KEY,
  title varchar(255) NOT NULL,
  body text,
  author_id int REFERENCES users(id),
  created_at timestamptz DEFAULT now(),
  updated_at timestamptz DEFAULT now()
);

Generate the migration:

atlas migrate diff add_updated_at \
  --to file://schema.sql \
  --dev-url "docker://postgres/17/dev?search_path=public"
migrations/20260215120100_add_updated_at.sql
-- Modify "blog_posts" table
ALTER TABLE "blog_posts" ADD COLUMN "updated_at" timestamptz NULL DEFAULT now();

Atlas compares the migration directory against the desired schema and generates only the diff - a single ALTER TABLE.

Lint and apply:

atlas migrate lint \
  --dev-url "docker://postgres/17/dev?search_path=public" \
  --latest 1
Analyzing changes from version 20260215120000 to 20260215120100 (1 migration in total):

-- analyzing version 20260215120100
-- no diagnostics found
-- ok (562.917µs)

-------------------------
-- 1 version ok
-- 1 schema change
atlas migrate apply --url "$DATABASE_URL"
Migrating to version 20260215120100 from 20260215120000 (1 migrations in total):

-- migrating version 20260215120100
-> ALTER TABLE "blog_posts" ADD COLUMN "updated_at" timestamptz NULL DEFAULT now();
-- ok (3.224ms)

-------------------------
-- 1 migration
-- 1 sql statement

Your migrations/ directory now contains the full history:

migrations/
├── 20260215120000_initial.sql
├── 20260215120100_add_updated_at.sql
└── atlas.sum

The workflow is: edit schema -> migrate diff -> migrate lint -> migrate apply. These files can be checked into version control, reviewed in PRs, and applied in CI/CD. Read more about Declarative vs Versioned workflows to decide which is right for your project.
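The loop above can be sketched as a small script. The migration name (my_change) is a placeholder to adapt per change; the dev URL matches the one used throughout this guide:

```shell
#!/bin/sh
set -e  # stop on the first failing step

DEV_URL="docker://postgres/17/dev?search_path=public"

# 1. Edit schema.sql, then generate a migration from the diff.
atlas migrate diff my_change --to file://schema.sql --dev-url "$DEV_URL"

# 2. Lint the newly generated migration for safety issues.
atlas migrate lint --dev-url "$DEV_URL" --latest 1

# 3. Apply pending migrations to the target database.
atlas migrate apply --url "$DATABASE_URL"
```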

Visualize Your Schema

Open an interactive ERD of your schema in the browser using the -w flag:

atlas schema inspect --url "$DATABASE_URL" -w

Next Step: Set Up CI/CD

After getting familiar with the basics of Atlas, the next step is to integrate it into your development workflow. Open a free trial account to unlock Pro features, then log in and set up your CI/CD pipeline:

atlas login


Need help?

We have a super friendly #getting-started channel on our community chat on Discord.