Setting Up Your Project
After successfully verifying Atlas can connect to your database and understand its schema, it's time to set up your project.
Choose a workflow
Atlas supports two workflows for managing database schemas. In both, you work declaratively by defining the desired state of your schema as code. The difference is how changes reach your database.
- Versioned workflow: You define the desired state, and Atlas generates versioned migration files (migrations/*.sql) that are checked into source control, reviewed in PRs, and applied in order. This gives you an explicit, auditable history of every change.
- Declarative workflow: Atlas compares the desired state directly against the target database and computes the diff at apply time. No migration files are stored. This is simpler, but offers less control over what exactly runs in production.
Most teams use the versioned workflow for shared environments (staging, production) and the declarative workflow for quick local iteration. For a detailed comparison, see Declarative vs Versioned Migrations.
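In terms of day-to-day commands, the two workflows differ roughly as follows (a sketch, assuming an env block named local in your atlas.hcl):

```shell
# Versioned: generate a migration file from the schema diff,
# then apply the pending migrations in order.
atlas migrate diff --env local
atlas migrate apply --env local

# Declarative: compute the diff and apply it directly
# against the target database, with no migration files.
atlas schema apply --env local
```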
Project layout
- Versioned
- Declarative (State-based)
.
├── atlas.hcl # Project configuration
├── schema/ # Desired state (HCL or SQL files)
└── migrations/ # Migration files applied in order
├── 20240520182315.sql
├── 20240520182336.sql
└── atlas.sum # Integrity file
- atlas.hcl: Project configuration file. Defines environments, dev database URLs, schema sources, and migration settings. One per project (or one per Atlas project in a monorepo).
- schema/: The desired state of your database, as code. You edit these files to make schema changes, then run atlas migrate diff to generate a migration.
- migrations/: SQL migration files generated by atlas migrate diff. Each file represents a single change, checked into source control and reviewed in PRs. The atlas.sum integrity file protects against conflicts between developers working in parallel.
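The edit-diff-apply loop for the versioned layout looks roughly like this (the migration name add_users is a hypothetical example; pick a name that describes your change):

```shell
# 1. Edit the desired state in schema/.
# 2. Generate a migration file capturing the diff.
atlas migrate diff add_users --env local
# 3. Commit the new migrations/*.sql file and atlas.sum,
#    then apply pending migrations in order.
atlas migrate apply --env local
```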
.
├── atlas.hcl # Project configuration
└── schema/ # Desired state (HCL or SQL files)
- atlas.hcl: Project configuration file. Defines environments, dev database URLs, and schema sources. One per project (or one per Atlas project in a monorepo).
- schema/: The desired state of your database, as code. You edit these files to make schema changes, then run atlas schema apply to apply them directly. Atlas computes the diff against the target database at apply time. No migration directory is needed.
Step 1: Create your project config file
Create a file named atlas.hcl at your project root.
In it, define an env block for local development:
env "local" {
  url = getenv("DB_URL")
  dev = "docker://postgres/16/dev" # <-- Replace with your dev database URL.
  schema {
    src = "file://schema"
  }
}
Be sure to replace the dev URL with the URL you determined in
Step 3 of the previous section.
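Because the config reads the target URL with getenv("DB_URL"), export that variable in your shell before running Atlas. The connection string below is a hypothetical placeholder; substitute your own credentials and host:

```shell
# Placeholder credentials -- replace with your actual connection string.
export DB_URL="postgres://app:secret@localhost:5432/app?sslmode=disable"
```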
Roles, users, and permissions are excluded by default from inspection and schema management.
To enable them, configure schema.mode in your atlas.hcl:
env "local" {
  url = getenv("DB_URL")
  dev = "docker://postgres/16/dev"
  schema {
    src = "file://schema"
    mode {
      roles       = true # Enable role inspection. Defaults to false.
      permissions = true # Enable permission inspection. Defaults to false.
    }
  }
}
For more details, see Database Security as Code.
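With roles and permissions enabled, one way to confirm they are picked up is to inspect the schema through the same env (a sketch; the exact output depends on your database driver):

```shell
# Inspect the schema as seen through the "local" env;
# role and permission objects should now appear in the output.
atlas schema inspect --env local
```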
Step 2: Verify your configuration works
Let's repeat our verification from the previous section, this time using the --env flag instead
of spelling out URLs explicitly:
atlas schema apply --env local --dry-run
This command will calculate the diff between the schema in schema/ (defined by the schema.src field)
and the current state of the database (defined by url).
Unless your database schema changed in the interim, you should see the same output you got in the verification step of the previous section:
Schemas are synced, no changes to be made.
In the next section, we will explore how to define the desired schema of your database, as code.