Claude Code with Atlas
Claude Code is Anthropic's agentic coding tool that runs in your terminal. Atlas offers three levels of integration with Claude Code, from a simple prompt file to subagents and slash commands.
Option 1: CLAUDE.md File (Basic)
The simplest way to integrate Atlas with Claude Code is to create a CLAUDE.md file in your project root
(or ~/.claude/CLAUDE.md for global use) that teaches Claude Code about Atlas concepts, commands, and workflows.
Full CLAUDE.md content:
# Atlas Database Schema Management
Atlas is a language-independent tool for managing and migrating database schemas using modern DevOps principles.
## Quick Reference
```bash
atlas schema inspect --env <name>
atlas schema validate --env <name>
atlas migrate status --env <name>
atlas migrate diff --env <name>
atlas migrate lint --env <name> --latest 1
atlas migrate apply --env <name>
atlas whoami
```
## Core Concepts and Configurations
### Configuration File Structure
Atlas uses `atlas.hcl` configuration files with the following structure:
```hcl
env "<name>" {
  url = getenv("DATABASE_URL")
  dev = "docker://postgres/15/dev?search_path=public"
  migration {
    dir = "file://migrations"
  }
  schema {
    src = "file://schema.hcl"
  }
}
```
### Dev Database
Atlas uses a temporary "dev-database" to process and validate schemas. The URL format depends on whether the project uses a single schema or multiple schemas:
```
# Schema-scoped (single schema — most common)
--dev-url "docker://mysql/8/dev"
--dev-url "docker://postgres/15/dev?search_path=public"
--dev-url "sqlite://dev?mode=memory"
--dev-url "docker://sqlserver/2022-latest/dev?mode=schema"
# Database-scoped (multiple schemas, extensions, or event triggers)
--dev-url "docker://mysql/8"
--dev-url "docker://postgres/15/dev"
--dev-url "docker://sqlserver/2022-latest/dev?mode=database"
# PostGIS / pgvector
--dev-url "docker://postgis/latest/dev?search_path=public"
--dev-url "docker://pgvector/pg17/dev?search_path=public"
```
**Important:** Using the wrong scope causes errors (`ModifySchema is not allowed`) or silently drops database-level objects from migrations. Match the dev URL scope to the project's target database URL.
See https://atlasgo.io/concepts/dev-database for additional drivers and options.
### Environment Variables and Security
**DO**: Use secure configuration patterns
```hcl
// Using environment variables (recommended)
env "<name>" {
  url = getenv("DATABASE_URL")
}

// Using external data sources
data "external" "envfile" {
  program = ["npm", "run", "envfile.js"]
}

locals {
  envfile = jsondecode(data.external.envfile)
}

env "<name>" {
  url = local.envfile.DATABASE_URL
}
```
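Whatever program the `external` data source runs must print a JSON object to stdout, which `jsondecode()` then parses. A minimal stand-in sketch (the `envfile.js` name above is just an example; any command that emits JSON works):

```shell
# Hypothetical stand-in for the external program: it must write a flat
# JSON object to stdout; jsondecode() turns it into a map of locals.
printf '{"DATABASE_URL":"postgres://localhost:5432/app"}\n'
```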
**DON'T**: Hardcode sensitive values
```hcl
// Never do this
env "prod" {
  url = "postgres://user:password123@prod-host:5432/database"
}
```
### Schema Sources
#### HCL Schema
```hcl
data "hcl_schema" "<name>" {
  path = "schema.hcl"
}

env "<name>" {
  schema {
    src = data.hcl_schema.<name>.url
  }
}
```
#### External Schema (ORM Integration)
The `external_schema` data source imports SQL schema from an external program.
```hcl
data "external_schema" "drizzle" {
  program = ["npx", "drizzle-kit", "export"]
}

data "external_schema" "django" {
  program = ["python", "manage.py", "atlas-provider-django", "--dialect", "postgresql"]
}

env "<name>" {
  schema {
    src = data.external_schema.django.url
  }
}
```
**Important:** The output must be a complete SQL schema (not a diff). If errors occur, run the program directly to isolate the issue.
#### Composite Schema (Pro)
Combine multiple schemas into one. Requires `atlas login`.
```hcl
data "composite_schema" "app" {
  schema "users" {
    url = data.external_schema.auth_service.url
  }
  schema "graph" {
    url = "ent://ent/schema"
  }
}

env "<name>" {
  schema {
    src = data.composite_schema.app.url
  }
}
```
## Common Workflows
### 1. Schema Inspection / Visualization
1. Start by listing tables — don't inspect the entire schema at once for large databases.
2. Default output is HCL. Use `--format "{{ json . }}"` for JSON or `--format "{{ sql . }}"` for SQL.
3. Use `--include`/`--exclude` to filter specific tables or objects.
**Inspect the environment's schema source (`env://src`):**
```bash
atlas schema inspect --env <name> --url "env://src" --format "{{ sql . }}"
atlas schema inspect --env <name> --url "env://src" --format "{{ json . }}" | jq ".schemas[].tables[].name"
```
**Inspect the environment's target database:**
```bash
atlas schema inspect --env <name> --format "{{ sql . }}"
atlas schema inspect --env <name> --include "users" --format "{{ sql . }}"
```
**Inspect migration directory:**
```bash
atlas schema inspect --env <name> --url file://migrations --format "{{ sql . }}"
```
Add `-w` to open a web-based ERD visualization (requires `atlas login`).
### 2. Migration Status
Compare applied migrations against the migrations directory. Only use when you know the target database.
```bash
atlas migrate status --env <name>
atlas migrate status --dir file://migrations --url <url>
```
### 3. Migration Generation / Diffing
```bash
atlas migrate diff --env <name> "add_user_table"
atlas migrate diff \
  --dir file://migrations \
  --dev-url docker://postgres/15/dev \
  --to file://schema.hcl \
  "add_user_table"
```
**Configuration for migration generation:**
```hcl
env "<name>" {
  dev = "docker://postgres/15/dev?search_path=public"
  migration {
    dir = "file://migrations"
  }
  schema {
    src = "file://schema.hcl"
    # Or: src = data.external_schema.<name>.url
    # Or: src = getenv("DATABASE_URL")
  }
}
```
### 4. Migration Linting
```bash
atlas migrate lint --env <name> --latest 1
atlas migrate lint --env <name> --latest 3
atlas migrate lint --env ci
```
**Linting configuration:**
```hcl
lint {
  destructive {
    error = false // Allow destructive changes with warnings
  }
}

env "ci" {
  lint {
    git {
      base = "main"
    }
  }
}
```
To suppress a specific lint error, add `-- atlas:nolint` before the SQL statement.
> **Important:** When fixing migration issues:
> - **Unapplied migrations:** Edit the file, then run `atlas migrate hash --env "<name>"`
> - **Applied migrations:** Never edit directly. Create a new corrective migration instead.
> - **Never use `-- atlas:nolint` without properly fixing the issue or getting user approval.**
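As a sketch, the directive sits on the line directly above the statement it suppresses. The `legacy_sessions` table here is hypothetical, and the analyzer name after `atlas:nolint` is optional (omitting it suppresses all checks for that statement):

```sql
-- A corrective migration where the destructive-change check is
-- intentionally suppressed, with user approval, for one statement.
-- atlas:nolint destructive
DROP TABLE legacy_sessions;
```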
### 5. Applying Changes
**Versioned (migration files):**
```bash
atlas migrate apply --env <name> --dry-run # Always preview first
atlas migrate apply --env <name>
```
**Declarative (direct apply — fast local iteration):**
```bash
atlas schema apply --env <name> --dry-run # Preview changes
atlas schema apply --env <name> # Apply directly to database
```
Use `schema apply` for fast edit-apply cycles on a local database without generating migration files. Add `--auto-approve` to skip the confirmation prompt during development.
## Troubleshooting
```bash
atlas version
atlas whoami
atlas migrate hash --env <name>
```
**Missing driver error**: Either `--url` or `--dev-url` is missing or incorrect.
## Key Reminders
1. **Always read `atlas.hcl` first** — use environment names from config
2. **Never hardcode database URLs** — use `getenv()` or secure data sources
3. **Run `atlas schema validate`** after editing schema files
4. **Run `atlas migrate hash`** after manually editing migration files
5. **Use `atlas migrate lint`** to validate migrations before applying
6. **Always use `--dry-run`** before applying migrations
7. **Use `--include`/`--exclude`** to filter tables in schema inspection
8. **Never ask for sensitive information** such as passwords or database URLs
9. **Never ignore linting errors** — fix them or get user approval
10. **Inspect schemas at high level first** — schemas might be very large
11. **Only use atlas commands listed here** — other commands may not be supported
12. **Prefer `atlas schema inspect`** over reading migration files directly
Option 2: Agent Skill (Recommended)
Agent Skills are a modern standard for packaging domain expertise for AI agents. Unlike a CLAUDE.md that loads on every conversation, skills activate only when relevant — keeping your context window clean.
Create a skill directory and add the SKILL.md file:
mkdir -p ~/.claude/skills/atlas/references
---
name: atlas
description: "Database schema management and migrations with Atlas CLI. Use when: generating migrations, diffing schemas, linting or testing migrations, applying schema changes, inspecting databases, working with atlas.hcl, schema.hcl, or ORM schemas (GORM, Drizzle, SQLAlchemy, Django, Ent, Sequelize, TypeORM), or validating schema definitions."
---
# Atlas Schema Migrations
## Security
Never hardcode credentials. Use environment variables:
```hcl
env "prod" {
  url = getenv("DATABASE_URL")
}
```
## Quick Reference
Use `--help` on any command for comprehensive docs and examples:
```bash
atlas migrate diff --help
```
Always use `--env` to reference configurations from `atlas.hcl` — this avoids passing
database credentials to the LLM context.
```bash
# Common
atlas schema inspect --env <name> # Inspect schema
atlas schema validate --env <name> # Validate schema syntax/semantics
atlas schema diff --env <name> # Compare schemas
atlas schema lint --env <name> # Check schema policies
atlas schema test --env <name> # Test schema
# Declarative workflow
atlas schema plan --env <name> # Plan schema changes
atlas schema apply --env <name> --dry-run # Preview changes
atlas schema apply --env <name> # Apply schema changes
# Versioned workflow
atlas migrate diff --env <name> "migration_name" # Generate migration
atlas migrate lint --env <name> --latest 1 # Validate migration
atlas migrate test --env <name> # Test migration
atlas migrate apply --env <name> --dry-run # Preview changes
atlas migrate apply --env <name> # Apply migration
atlas migrate status --env <name> # Check status
```
## Choosing a Workflow
```
Schema change needed
├─ Project has migrations/ dir or migration config in atlas.hcl?
│   ├─ Yes → Versioned: migrate diff → lint → test → apply
│   └─ No  → Declarative: schema apply --dry-run → apply
├─ Iterating on local database?
│   └─ Use schema apply --auto-approve for fast edit-apply cycles
└─ Not sure → Read atlas.hcl first
```
**Tip:** `atlas schema apply` applies schema changes directly to a local database without generating migration files. This is useful for fast iteration during development — edit the schema, run `schema apply`, and see the result immediately.
## Example
```
User: Add an email column to the users table
Agent steps:
1. atlas schema inspect --env dev # understand current state
2. Edit schema source file # add email column
3. atlas schema validate --env dev # verify syntax
4. atlas migrate diff --env dev "add_email" # generate migration
5. atlas migrate lint --env dev --latest 1 # check for issues
6. atlas migrate apply --env dev --dry-run # preview before applying
```
## Core Concepts
### Configuration File (atlas.hcl)
Always read the project's `atlas.hcl` first — it contains environment configurations:
```hcl
env "<name>" {
  url = getenv("DATABASE_URL")
  dev = "docker://postgres/15/dev?search_path=public"
  migration {
    dir = "file://migrations"
  }
  schema {
    src = "file://schema.hcl"
  }
}
```
### Dev Database
Atlas uses a temporary "dev-database" to process and validate schemas. The URL format depends on whether you work with a **single schema** or **multiple schemas**:
```bash
# Schema-scoped (single schema — most common)
--dev-url "docker://mysql/8/dev"
--dev-url "docker://postgres/15/dev?search_path=public"
--dev-url "sqlite://dev?mode=memory"
--dev-url "docker://sqlserver/2022-latest/dev?mode=schema"
# Database-scoped (multiple schemas, extensions, or event triggers)
--dev-url "docker://mysql/8"
--dev-url "docker://postgres/15/dev"
--dev-url "docker://sqlserver/2022-latest/dev?mode=database"
```
**Important:** Using the wrong scope causes errors (`ModifySchema is not allowed`) or silently drops database-level objects (extensions, event triggers) from migrations. Match the dev URL scope to the project's target database URL. For PostGIS or pgvector schemas, use `docker://postgis/latest/dev` or `docker://pgvector/pg17/dev`.
If the schema depends on extensions or external objects, use a `docker` block with a `baseline`:
```hcl
docker "postgres" "dev" {
  image  = "postgres:15"
  schema = "public"
  baseline = <<SQL
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
SQL
}

env "local" {
  src = "file://schema.hcl"
  dev = docker.postgres.dev.url
}
```
## Workflows
### 1. Schema Inspection
Start with a high-level overview before diving into details. The default output is HCL.
Use `--format "{{ json . }}"` for JSON or `--format "{{ sql . }}"` for SQL.
```bash
# List tables (overview first, JSON output)
atlas schema inspect --env <name> --format "{{ json . }}" | jq ".schemas[].tables[].name"
# Full SQL schema
atlas schema inspect --env <name> --format "{{ sql . }}"
# Filter with --include/--exclude (useful for large schemas)
atlas schema inspect --env <name> --include "users_*" # Only matching tables
atlas schema inspect --env <name> --exclude "*_backup" # Skip matching tables
atlas schema inspect --env <name> --exclude "*[type=trigger]" # Skip triggers
# Open visual ERD in browser (requires atlas login)
atlas schema inspect --env <name> -w
```
### 2. Schema Comparison (Diff)
Compare any two schema states:
```bash
# Compare current state to desired schema
atlas schema diff --env <name>
# Compare specific sources
atlas schema diff --env <name> --from file://migrations --to file://schema.hcl
```
### 3. Migration Generation
Generate migrations from schema changes:
```bash
# Generate migration from schema diff
atlas migrate diff --env <name> "add_users_table"
# With explicit parameters
atlas migrate diff \
  --dir file://migrations \
  --dev-url docker://postgres/15/dev \
  --to file://schema.hcl \
  "add_users_table"
```
### 4. Schema Validation
Validate schema definitions before generating migrations:
```bash
# Validate schema syntax and semantics
atlas schema validate --env <name>
# Validate against dev database
atlas schema validate --dev-url docker://postgres/15/dev --url file://schema.hcl
```
If valid, exits successfully. If invalid, prints detailed error (unresolved references, syntax issues, unsupported attributes).
### 5. Migration Linting
```bash
atlas migrate lint --env <name> --latest 1 # Lint latest migration
atlas migrate lint --env ci # Lint since git branch
atlas schema lint --env <name> # Check schema policies
```
Fixing lint issues:
- Unapplied migrations: Edit file, then `atlas migrate hash --env <name>`
- Applied migrations: Create corrective migration (never edit directly)
### 6. Migration Testing
```bash
atlas migrate test --env <name> # Requires atlas login
atlas whoami # Check login status first
```
### 7. Applying Migrations
```bash
atlas migrate apply --env <name> --dry-run # Always preview first
atlas migrate apply --env <name> # Apply
atlas migrate status --env <name> # Verify
```
## Standard Workflow
1. `atlas schema inspect --env <name>` — Understand current state
2. Edit schema files
3. `atlas schema validate --env <name>` — Check syntax
4. `atlas migrate diff --env <name> "change_name"` — Generate migration
5. `atlas migrate lint --env <name> --latest 1` — Validate
6. `atlas migrate test --env <name>` — Test (requires login)
7. If issues: edit migration, then `atlas migrate hash`
8. `atlas migrate apply --env <name> --dry-run` then apply
## Schema Sources
For HCL schemas, ORM integrations (GORM, Drizzle, SQLAlchemy, Django, Ent, Sequelize, TypeORM),
composite schemas, and dev-database dialect URLs, see `references/schema-sources.md`.
## Onboarding an Existing Project
### Baseline an existing database
To start managing an existing database with versioned migrations:
```bash
# 1. Export current schema to code
atlas schema inspect -u '<database-url>' --format '{{ sql . | split | write "src" }}'
# 2. Generate a baseline migration from the exported schema
atlas migrate diff "baseline" --to "file://src" --dev-url '<dev-url>'
# 3. Mark baseline as applied on existing databases (use version from filename)
atlas migrate apply --url '<database-url>' --baseline '<version>'
```
The baseline migration captures the current state without executing it on existing databases.
On new databases, it runs in full to create the initial schema.
## Troubleshooting
```bash
# Check installation and login
atlas version
atlas whoami
# Repair migration integrity after manual edits
atlas migrate hash --env <name>
```
**Missing driver error**: Ensure `--url` or `--dev-url` is correctly specified.
## Key Rules
1. Read `atlas.hcl` first — use environment names from config
2. Never hardcode credentials — use `getenv()`
3. Run `atlas schema validate` after schema edits
4. Always lint before applying migrations
5. Always dry-run before applying
6. Run `atlas migrate hash` after editing migration files
7. Use `atlas login` to unlock views, triggers, functions, ERD, and migration testing
8. Write migration tests for data migrations
9. Never ignore lint errors — fix them or get user approval
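The migration tests mentioned in rule 8 are HCL files run by `atlas migrate test`. A minimal sketch, assuming a hypothetical `users` table and made-up version numbers (check the Migration Testing docs for the exact syntax supported by your Atlas version):

```hcl
test "migrate" "users_seed_survives" {
  # Apply migrations up to (and including) this hypothetical version
  migrate {
    to = "20240101000000"
  }
  # Seed data against the intermediate schema
  exec {
    sql = "INSERT INTO users (email) VALUES ('a@example.com')"
  }
  # Apply the remaining migrations and assert the data survived
  migrate {
    to = "20240102000000"
  }
  exec {
    sql    = "SELECT email FROM users"
    output = "a@example.com"
  }
}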
## Documentation
- [CLI Reference](https://atlasgo.io/cli-reference)
- [Versioned Migrations](https://atlasgo.io/versioned/diff)
- [Declarative Workflow](https://atlasgo.io/declarative/apply)
- [Migration Linting](https://atlasgo.io/versioned/lint)
- [Migration Testing](https://atlasgo.io/testing/migrate)
- [Onboard Existing Database](https://atlasgo.io/versioned/import)
- [ORM Integrations](https://atlasgo.io/guides/orms)
- [Dev Database](https://atlasgo.io/concepts/dev-database)
# Schema Sources Reference
## HCL Schema
```hcl
data "hcl_schema" "<name>" {
  path = "schema.hcl"
}

env "<name>" {
  schema {
    src = data.hcl_schema.<name>.url
  }
}
```
## External Schema (ORM Integration)
The `external_schema` data source imports SQL schema from an ORM or external program.
```hcl
# GORM (Go)
data "external_schema" "gorm" {
  program = ["go", "run", "-mod=mod", "ariga.io/atlas-provider-gorm", "load", "--path", "./models", "--dialect", "postgres"]
}

# Drizzle (TypeScript)
data "external_schema" "drizzle" {
  program = ["npx", "drizzle-kit", "export"]
}

# SQLAlchemy (Python)
data "external_schema" "sqlalchemy" {
  program = ["python", "-m", "atlas_provider_sqlalchemy", "--path", "./models", "--dialect", "postgresql"]
}

# Django (Python)
data "external_schema" "django" {
  program = ["python", "manage.py", "atlas-provider-django", "--dialect", "postgresql"]
}

# Ent (Go)
env "<name>" {
  schema {
    src = "ent://ent/schema"
  }
}

# Sequelize (Node.js)
data "external_schema" "sequelize" {
  program = ["npx", "@ariga/atlas-provider-sequelize", "load", "--path", "./models", "--dialect", "postgres"]
}

# TypeORM (TypeScript)
data "external_schema" "typeorm" {
  program = ["npx", "@ariga/atlas-provider-typeorm", "load", "--path", "./entities", "--dialect", "postgres"]
}
```
Wire into an environment:
```hcl
env "<name>" {
  schema {
    src = data.external_schema.<orm>.url
  }
}
```
## Composite Schema (Pro)
Combine multiple schema sources into one:
```hcl
data "composite_schema" "app" {
  schema "users" {
    url = data.external_schema.auth_service.url
  }
  schema "graph" {
    url = "ent://ent/schema"
  }
  schema "shared" {
    url = "file://schema/shared.hcl"
  }
}
```
## Dev-Database Dialects
The dev URL format depends on whether your project uses **schema-scoped** or **database-scoped** migrations. Getting this wrong causes errors like `ModifySchema is not allowed` or silently drops database-level objects (extensions, event triggers) from migrations.
**Schema-scoped** (single schema — most common): include the database name and schema scope so Atlas creates objects in the correct schema. Use this when all tables live in one schema (e.g., `public`).
| Dialect | Dev URL (schema-scoped) |
|------------|------------------------------------------------------|
| MySQL | `docker://mysql/8/dev` |
| MariaDB | `docker://maria/latest/dev` |
| PostgreSQL | `docker://postgres/17/dev?search_path=public` |
| SQLite | `sqlite://dev?mode=memory` |
| SQL Server | `docker://sqlserver/2022-latest/dev?mode=schema` |
| ClickHouse | `docker://clickhouse/23.11/dev` |
**Database-scoped** (multiple schemas or database-level objects): omit the schema scope so Atlas can manage multiple schemas and detect database-level objects like extensions and event triggers.
| Dialect | Dev URL (database-scoped) |
|------------|------------------------------------------------------|
| MySQL | `docker://mysql/8` |
| MariaDB | `docker://maria/latest` |
| PostgreSQL | `docker://postgres/17/dev` |
| SQL Server | `docker://sqlserver/2022-latest/dev?mode=database` |
| ClickHouse | `docker://clickhouse/23.11` |
**PostgreSQL with extensions** — use PostGIS or pgvector images when the schema uses those extensions:
```
docker://postgis/latest/dev?search_path=public
docker://pgvector/pg17/dev?search_path=public
```
**How to choose:** Check the project's `atlas.hcl` or target database URL. If it includes `search_path=public` (Postgres) or a specific database name (MySQL), use schema-scoped. If the project manages multiple schemas, extensions, or event triggers, use database-scoped.
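As an illustration of matching scopes, a hypothetical Postgres project whose target URL pins a single schema would pair it with a schema-scoped dev URL (names and ports here are examples, not real values):

```hcl
env "local" {
  # Target URL pins a single schema via search_path → use a
  # schema-scoped dev URL with the same search_path.
  url = getenv("DATABASE_URL") # e.g. postgres://app@localhost:5432/app?search_path=public
  dev = "docker://postgres/17/dev?search_path=public"
}
```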
See https://atlasgo.io/concepts/dev-database for additional drivers and options.
Claude Code will automatically load the skill when database operations are requested.
The skill includes:
- Decision tree for choosing Declarative vs Versioned workflows
- Step-by-step workflows for inspect → diff → lint → validate → test → apply
- `atlas schema validate` for verifying schema correctness after AI-generated edits
- ORM integration references (GORM, Drizzle, SQLAlchemy, Django, Ent, Sequelize, TypeORM)
- Security best practices (never hardcode credentials)
- Troubleshooting guides
See the Agent Skills page for more details.
Option 3: Subagent (Advanced)
Claude Code subagents are isolated Claude instances
with their own context window. The Atlas subagent is specialized for schema change workflows —
it reads your atlas.hcl, inspects the current schema, generates migrations, lints them,
and runs tests, all without polluting your main conversation context.
Create a subagent definition at .claude/agents/atlas-migration.md:
Subagent definition:
---
name: atlas-migration
description: >
Specialized agent for Atlas database schema changes. Use when making
schema modifications, generating migrations, or debugging migration issues.
Reads atlas.hcl, inspects schemas, generates diffs, lints, validates, tests, and applies.
tools: Bash, Read, Write, Glob, Grep
---
You are an Atlas database migration specialist. Your job is to safely execute
schema changes using Atlas CLI. Atlas supports two workflows:
**Declarative** (Terraform-like): Define desired state, Atlas computes and applies the diff.
**Versioned** (migration files): Atlas generates migration files checked into source control.
## Versioned Workflow (Most Common)
For every schema change request using versioned migrations, follow this exact sequence:
1. **Read config**: `cat atlas.hcl` to understand environments and schema sources
2. **Login**: Run `atlas login` if not logged in (required for views, triggers, functions, migration testing)
3. **Inspect current schema**: `atlas schema inspect --env <env> --format "{{ sql . }}" | head -100`
4. **Make schema changes**: Edit the schema source files as requested
5. **Validate schema**: `atlas schema validate --env <env>` to verify schema correctness
6. **Generate migration**: `atlas migrate diff --env <env> "<descriptive_name>"`
7. **Lint**: `atlas migrate lint --env <env> --latest 1`
8. **Fix issues**: If lint reports errors, edit the migration file, then run `atlas migrate hash --env <env>`
9. **Test**: Run `atlas migrate test --env <env>` (requires login)
10. **Report**: Summarize what changed and the lint results
## Declarative Workflow
For direct schema application (no migration files):
1. **Read config**: `cat atlas.hcl`
2. **Inspect current state**: `atlas schema inspect --env <env>`
3. **Edit desired state**: Modify schema source files (HCL, SQL, or ORM)
4. **Validate**: `atlas schema validate --env <env>`
5. **Preview**: `atlas schema apply --env <env> --dry-run`
6. **Apply**: `atlas schema apply --env <env>` (with user approval)
## Rules
- NEVER hardcode database URLs — use `getenv("DATABASE_URL")` or secret managers
- NEVER edit applied migrations — create corrective migrations instead
- ALWAYS run lint before declaring a migration ready
- ALWAYS use `--dry-run` before `atlas migrate apply` or `atlas schema apply`
- ALWAYS run `atlas schema validate` after editing schema files
- If a linting error cannot be fixed, explain it to the user; do NOT add `-- atlas:nolint` without approval
Use it:
@atlas-migration Add a priority column to the todos table with a default value of 'medium'
Slash Commands
Create shortcuts for common Atlas workflows. Save these as .claude/commands/ files:
.claude/commands/atlas-diff.md — Generate a migration:
Read atlas.hcl to find the active environment. Then run:
1. `atlas schema validate --env <env>` to verify schema files are valid
2. `atlas migrate diff --env <env> "$ARGUMENTS"`
3. `atlas migrate lint --env <env> --latest 1`
4. If lint passes, show the generated SQL. If it fails, fix and re-lint.
.claude/commands/atlas-lint.md — Lint recent migrations:
Read atlas.hcl to find the active environment. Then run:
1. `atlas migrate lint --env <env> --latest 1`
2. For each issue, explain what it means and suggest a fix.
3. If there are no issues, confirm the migration is clean.
.claude/commands/atlas-status.md — Check migration status:
Read atlas.hcl to find the active environment. Then run:
1. `atlas version`
2. `atlas whoami`
3. `atlas migrate status --env <env>`
4. Summarize: login status, and how many migrations are pending.
5. If not logged in, suggest `atlas login` to unlock additional features.
.claude/commands/atlas-validate.md — Validate schema after edits:
Read atlas.hcl to find the active environment. Then run:
1. `atlas schema validate --env <env>`
2. If valid, confirm "Schema is valid."
3. If invalid, show the error and suggest fixes.
Usage:
/atlas-diff add_priority_column
/atlas-lint
/atlas-status
/atlas-validate
Hooks (Optional)
Use hooks to automatically lint migrations after Claude generates them.
Add to .claude/settings.json:
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write(migrations/*.sql)",
        "hooks": [
          {
            "type": "command",
            "command": "atlas migrate hash --env local 2>/dev/null; atlas migrate lint --env local --latest 1 2>&1 | head -20"
          }
        ]
      }
    ]
  }
}
This ensures every migration file Claude writes is immediately linted, catching issues before they accumulate.