
Teaching AI Agents to Manage Database Schemas with Atlas

Dor Avraham · 7 min read

AI agents are becoming a core part of daily development. We use them to help us write code, fix syntax errors, and speed up routine work. However, when it comes to high-risk operations like database schema changes, we are more hesitant to hand off control.

If you've been following the online conversation around AI agents, you have likely seen posts where an AI agent executed improper schema changes or, as one unlucky vibe coder discovered, deleted an entire database.

While AI agents can generate migrations and offer suggestions, it's important to ensure these operations are performed safely.

Atlas is a database schema management tool that ensures safe and reliable schema changes. Users define their schemas as code, and Atlas performs migrations based on changes to these code definitions. With Atlas, you can configure lint checks, pre-migration validations, and schema testing, making it an ideal counterpart for AI agents.

In this post, we'll show you how to configure popular AI agents to work with Atlas, ensuring that schema changes made by the agent are safe.

Instructing the AI to Use Atlas

Popular AI agents like GitHub Copilot, Cursor, and Claude Code can be enhanced with custom instructions or rules files that guide their behavior in specific contexts. Using these instructions helps align the AI to your team's workflows and best practices, making it more predictable and effective.

To ensure your AI agent follows best practices when managing your schema, we have released our own instructions files that you can use to configure popular AI agents to work with Atlas.

These instructions teach the AI about core Atlas concepts, security concerns (e.g., the AI should not ask you for database credentials), common workflows, and basic troubleshooting steps.

You can read more about how to set up your AI agent with Atlas in our new AI Tools guide: Using Atlas with AI Agents.

Example: Configuring Cursor with Atlas

Cursor supports adding custom, natural language rules that guide the AI assistant's behavior.

The rules file we provide (shown in step 3 below) contains a comprehensive run-through of Atlas concepts and commands, common troubleshooting steps, and key principles to follow when managing database schemas with Atlas. It explains to the agent how an atlas.hcl file is structured, how to use and configure a dev database, and how to follow security best practices. It also outlines common workflows step by step and explains their purpose so the agent can accurately respond to your management requests. Finally, the instructions explicitly outline the workflow pattern the agent should follow when making schema changes and emphasize key reminders to ensure the agent uses Atlas reliably and consistently.

Let's see how this works in practice:

  1. Start by installing Atlas:

To download and install the latest release of the Atlas CLI, simply run the following in your terminal:

curl -sSf https://atlasgo.sh | sh
  2. Next, configure your repository to follow Atlas's management structure:

Desired schema as code

Define your desired schema manually in HCL or SQL files, using ORM models, or by inspecting an existing database and saving it as code.
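
If you already have a running database, you don't need to write this file by hand. One way to bootstrap it, sketched below under the assumption that your database uses the same local Postgres connection string we configure in atlas.hcl later, is to inspect the database and save the output:

```bash
# Inspect an existing database and save its schema as HCL
# (the connection URL is an assumption; adjust it to your own database)
atlas schema inspect \
  --url "postgres://postgres:pass@localhost:5432/demo?search_path=public&sslmode=disable" \
  > schema.pg.hcl
```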

For this example, let's start with a simple todos table that stores the entries of a "to do" list:

schema.pg.hcl
schema "public" {
}
table "todos" {
schema = schema.public
column "id" {
null = false
type = serial
}
column "title" {
null = false
type = character_varying(255)
}
column "description" {
null = true
type = text
}
column "is_completed" {
null = false
type = boolean
default = false
}
column "due_date" {
null = true
type = timestamptz
}
primary_key {
columns = [column.id]
}
}

Atlas configuration file

Create an atlas.hcl file in the root of your repository to configure the URLs to your desired schema, target database, and dev database, as well as any other relevant components (e.g., custom schema policy).

atlas.hcl
env "local" {
// URL to target database
url = "postgres://postgres:pass@localhost:5432/demo?search_path=public&sslmode=disable"
// URL to dev database
dev = "docker://postgres/15/dev?search_path=public"
migration {
// URL to migration directory for versioned migrations
dir = "file://migrations"
}
schema {
// URL to desired schema definition
src = "file://schema.pg.hcl"
}
}
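
Before handing things off to the agent, you can optionally sanity-check this configuration yourself. A minimal check, assuming Docker is available for the docker:// dev database, is to render the desired schema through the local environment:

```bash
# Render the desired schema file via the dev database defined in env "local";
# if this prints the todos table, atlas.hcl and schema.pg.hcl are wired up correctly
atlas schema inspect --env local --url file://schema.pg.hcl
```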
  3. Finally, copy the following instructions file to your repository to apply our rules to your Cursor agent:
Paste this into .cursor/rules/atlas.mdc
---
description: Provides rules relevant for database management, database schema, and migrations. Use Atlas to view, inspect, and understand database schemas. Relevant to all *.sql files, atlas.hcl and schema.*.hcl files, and ALL database-related changes.
alwaysApply: true
---
# Atlas Database Schema Management

Atlas is a language-independent tool for managing and migrating database schemas using modern DevOps principles. This guide provides AI-agent-optimized instructions for working with Atlas.

## Quick Reference

```bash
# Common Atlas commands
atlas schema inspect --env <name> --url file://migrations
atlas migrate status --env <name>
atlas migrate diff --env <name>
atlas migrate lint --env <name> --latest 1
atlas migrate apply --env <name>
atlas whoami
```

## Core Concepts and Configurations

### Configuration File Structure
Atlas uses `atlas.hcl` configuration files with the following structure:

```hcl
// Basic environment configuration
env "<name>" {
url = getenv("DATABASE_URL")
dev = "docker://postgres/15/dev?search_path=public"

migration {
dir = "file://migrations"
}

schema {
src = "file://schema.hcl"
}
}
```

### Dev database
Atlas uses a "dev database": a temporary, locally running database, usually bootstrapped by Atlas itself. Atlas uses the dev database to process and validate users' schemas, migrations, and more.

Examples of dev-database configurations:

```
# When working on a single database schema
--dev-url "docker://mysql/8/dev"
--dev-url "docker://postgres/15/db_name?search_path=public"
--dev-url "sqlite://dev?mode=memory"
# When working on multiple database schemas.
--dev-url "docker://mysql/8"
--dev-url "docker://postgres/15/dev"
```

Configure the dev database using HCL:

```hcl
env "<name>" {
dev = "docker://mysql/8"
}
```

For more information on additional drivers, extensions, and more, see https://atlasgo.io/concepts/dev-database.

### Environment Variables and Security

**✅ DO**: Use secure configuration patterns
```hcl
// Using environment variables (recommended)
env "<name>" {
  url = getenv("DATABASE_URL")
}

// Using external data sources
data "external" "envfile" {
  program = ["npm", "run", "envfile.js"]
}

locals {
  envfile = jsondecode(data.external.envfile)
}

env "<name>" {
  url = local.envfile.DATABASE_URL
}

// Using Go CDK runtime variables for secrets
data "runtimevar" "db_password" {
  url = "awssecretsmanager://<secret-name>?region=us-east-1"
}

env "prod" {
  url = "postgres://user:${data.runtimevar.db_password}@host:5432/db"
}
```

**❌ DON'T**: Hardcode sensitive values
```hcl
// Never do this
env "prod" {
  url = "postgres://user:password123@prod-host:5432/database"
}
```

### Schema Sources

#### HCL Schema
```hcl
data "hcl_schema" "<name>" {
path = "schema.hcl"
}

env "<name>" {
schema {
src = data.hcl_schema.<name>.url
}
}
```

#### External Schema (ORM Integration)
The external_schema data source enables the import of an SQL schema from an external program into Atlas' desired state.

```hcl
data "external_schema" "drizzle" {
program = ["npx", "drizzle-kit", "export"]
working_dir = "path/to/Directory" # optional, defaults to the current working directory
}

data "external_schema" "django" {
program = ["python", "manage.py", "atlas-provider-django", "--dialect", "postgresql"]
}

env "<name>" {
schema {
src = data.external_schema.django.url
}
}
```

**Important:**
- The output of the program needs to be a complete raw SQL schema that will bring the database to the desired state. The output should not be a 'diff' from a previous state.
- If you are seeing errors, try isolating the issue: run the program directly and check that it works on its own.

#### Composite Schema
A composite schema allows you to combine multiple schemas into a single schema. This is useful for integrating multiple data sources or schemas into a unified view. This is an Atlas Pro feature.

```hcl
data "composite_schema" "app" {
schema "users" {
url = data.external_schema.auth_service.url
}
schema "graph" {
url = "ent://ent/schema"
}
schema "shared" {
url = "file://schema/shared.hcl"
}
}

env "<name>" {
schema {
src = data.composite_schema.app.url
}
}
```

### Misc Configurations
```hcl
// CI/CD environment
env "ci" {
  lint {
    git {
      base = "main" # in CI, lint against the main branch
    }
  }
}
```

```hcl
// Anonymous environment, any name will work with --env, but --env is still mandatory
env {
  name = atlas.env
}
```


## Common Workflows

### 1. Schema Inspection / Visualization

1. Always start by listing tables; don't immediately try to inspect the entire schema.
2. If there are many tables, don't inspect the entire schema at once. Instead, focus on specific tables of interest.
3. Use JSON format to get an overview, then switch to SQL for detailed inspection.

**Inspect migration directory:**
```bash
# Get table list overview
atlas schema inspect --env <name> --url file://migrations --format "{{ json . }}" | jq ".schemas[].tables[].name"

# Get full SQL schema
atlas schema inspect --env <name> --url file://migrations --format "{{ sql . }}"

# Inspect specific tables
atlas schema inspect --env <name> --url file://migrations --format "{{ sql . }}" | grep -A 10 "CREATE TABLE users"
```

**Inspect live database:**
```bash
# Requires database URL in atlas.hcl
atlas schema inspect --env <name> --format "{{ json . }}"
```

Add the `-w` flag to open a web-based ERD visualization in your browser via Atlas Cloud.
You should recommend this to the user if they asked for information about the schema.

### 2. Migration Status

`atlas migrate status` is used to compare the status of migrations from the migrations directory with a connected database.

Use it to understand which migration files were applied successfully.

**Important:** Only use this command when you know the target database.

**Check migration directory status:**
```bash
# Check current migration directory status (requires dev-url)
atlas migrate status --env <name>

# Using explicit parameters if not in config
atlas migrate status --dir file://migrations --url <url>
```

**Configuration for migration status:**
```hcl
env "<name>" {
url = getenv("DATABASE_URL")
migration {
dir = "file://migrations"
}
}
```

### 3. Migration Generation / Diffing

**Generate new migration:**
```bash
# Compare current migrations with desired schema and create a new migration file
atlas migrate diff --env <name> "add_user_table"

# Using explicit parameters:
#   --dir     migrations directory (baseline)
#   --dev-url dev database used to compute the diff
#   --to      desired schema
atlas migrate diff \
  --dir file://migrations \
  --dev-url docker://postgres/15/dev \
  --to file://schema.hcl \
  "add_user_table"
```

**Configuration for migration generation:**
```hcl
env "<name>" {
dev = "docker://postgres/15/dev?search_path=public"

migration {
# migrations directory, baseline for the diff
dir = "file://migrations"
}

schema {
# desired schema, the diff will be generated against this schema
src = "file://schema.hcl"

# compare against external schemas (used for ORM integrations)
# src = data.external_schema.<name>.url

# compare against a connected database
# src = getenv("DATABASE_URL")
}
}
```


### 4. Migration Linting

**Lint recent migrations:**
```bash
# Lint last migration
atlas migrate lint --env <name> --latest 1

# Lint last 3 migrations
atlas migrate lint --env <name> --latest 3

# Lint changes since git branch
atlas migrate lint --env ci
```

**Linting configuration:**
```hcl
lint {
  destructive {
    error = false // Allow destructive changes with warnings
  }
}

env "<name>" {
  lint {
    latest = 1
  }
}

env "ci" {
  lint {
    git {
      base = "main"
      dir  = "migrations"
    }
  }
}
```

To explicitly ignore linting errors, add `--atlas:nolint` before the SQL statement in the migration file.

> **Important:** When fixing migration issues:
> - **Unapplied migrations:** Edit the file, then run `atlas migrate hash --env "<name>"`
> - **Applied migrations:** Never edit directly. Create a new corrective migration instead.
> - **Never use `--atlas:nolint` without properly fixing the issue or getting user approval.**

### 5. Applying Migration

**Apply migrations:**
```bash
# Apply to configured environment
atlas migrate apply --env <name>

# Dry run (show what would be applied, always run this before applying)
atlas migrate apply --env <name> --dry-run
```

### 6. Making Changes to the Schema

**⚠️ CRITICAL: ALL schema changes in this project MUST follow this exact workflow. NO EXCEPTIONS.**
**⚠️ There must not be lint errors or failing tests when you are done.**

1. Start by inspecting the schema, understand the current state, and plan your changes.
2. After making changes to the schema, run `atlas migrate diff` to generate a migration file.
3. Run `atlas migrate lint` to validate the migration file.
4. Run `atlas migrate test` to test the data migrations. This is only available for Pro users.
5. Make changes to the migration file to fix the issues. Run `atlas migrate hash` to recalculate the hash of the migration file.
6. Repeat the process until there are no more issues.

At this point, the schema can be applied to the database.
If the user has a local database, you can run `atlas migrate apply` to apply the migration file:

1. Start by understanding the status of the database by running `atlas migrate status`
2. Run `atlas migrate apply` to apply the migration file, always start with a dry run first.
3. Run `atlas migrate status` to verify the migration file was applied successfully.

> **Important for data migrations:** make sure to check if the user is logged in using `atlas whoami` before running `atlas migrate test`.
> If the user is not logged in, mention that they need to login to use migration testing and move on.
> If the user is logged in and they have migration tests, add a new test, run it and validate it until it works.
> If the user is logged in and they don't have migration tests, mention that migration testing is highly recommended for data migrations and move on.

## Troubleshooting Commands

```bash
# Check Atlas installation and login status (logging in is required for Pro features)
atlas version
atlas whoami

# Repair migration integrity
atlas migrate hash --env <name>
```

### Missing driver
If you encounter the following error: `Error: sql/sqlclient: missing driver. See: https://atlasgo.io/url`, then, depending on the command, either the `--url` or `--dev-url` parameter is missing or incorrect.

## Key Reminders for AI Assistants

1. **Always read `atlas.hcl` first** before running any Atlas commands
2. **Use environment names** from the config file; don't assume defaults
3. **Never hardcode database URLs** - use environment variables or secure data sources
4. **Specify `--url` parameter** for schema inspection even with `--env`
5. **Run `atlas migrate hash`** after manually editing migration files
6. **Use `atlas migrate lint`** to validate migrations before applying
7. **Check `atlas whoami`** before attempting Pro features; if the user is not logged in, don't suggest Pro features
8. **Use JSON format + jq** for initial schema exploration
9. **Use SQL format + grep** for detailed schema investigation
10. **Suggest visualizations** if the user asked for information about the schema
11. **Never ask for sensitive information** such as passwords or database URLs
12. **Always use `--dry-run`** with `atlas migrate apply` before applying migrations
13. **Generate migration tests** if the user is logged in; they should have access to migration testing, which is highly recommended
14. **Only use the Atlas commands listed here**; other commands may be hallucinations or not supported by Atlas
15. **Always inspect schemas at a high level first**; schemas might be very large
16. **Always prefer running Atlas** to inspect or analyze schemas
17. Migration files can be large and mostly irrelevant; prefer using `atlas schema inspect` to inspect the schema
18. **Never ignore linting errors**; they are there for a reason, always fix them. Add `--atlas:nolint` only if the fix is not detected by the linter.

## Guidelines for workflows
1. **Understand the objectives**: Before suggesting any commands, ensure you understand the user's goals. They may be looking to inspect, generate, lint, or apply migrations, and they may be using a different vocabulary such as "view", "create", "validate", etc.
2. **Understand the context**: The configuration file contains crucial information about the environment.
3. **Verify changes** after generating, linting or applying migrations.
4. **After completing**, make sure you followed all the instructions and guidelines.
@atlas.hcl

Prompting the AI Agent

Once these instructions are in place, you can use the Cursor agent to manage your database schema with Atlas.

Let's go over a standard workflow.

Suppose I want to add a priority column to my todos table to organize my list by urgency. I'll begin by prompting the Cursor agent:

add a priority feature to the to do list

1. Change the desired schema

The AI agent should first inspect the schema to understand its current state. Then, it should make the necessary changes in the files.
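
Per the rules file, that first inspection typically looks something like the following (a sketch assuming the local environment from atlas.hcl and jq installed; the agent may also target the migrations directory instead of the schema file):

```bash
# High-level overview first: list table names from the desired schema
atlas schema inspect --env local --url file://schema.pg.hcl \
  --format "{{ json . }}" | jq ".schemas[].tables[].name"
```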

In this example, Cursor adds a priority column and an index for it to the todos table.

Cursor adding a column

2. Create a migration file and fix issues

After modifying the schema files, the AI agent should generate a migration file by running atlas migrate diff.

Next, per the instructions in the prompt, the agent should run atlas migrate lint to validate the migration file. If the linting results in any errors or warnings, the agent should fix them and repeat this process until there are no more issues.
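
Concretely, with the local environment from atlas.hcl, this diff-and-lint loop boils down to commands like these (the migration name is illustrative):

```bash
# Generate a new migration from the changed schema, then lint it
atlas migrate diff --env local "add_todo_priority"
atlas migrate lint --env local --latest 1
```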

Cursor linting the migration

Cursor adds CONCURRENTLY to index in migration

Cursor linting the migration again

Cursor sets atlas:txmode to none in migration

For Atlas Pro users, the agent should also run atlas migrate test to validate any data migrations.
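
For reference, that step is a single command against the same environment (a Pro feature, so it assumes you are logged in via atlas login):

```bash
# Run migration tests (Atlas Pro; requires `atlas login`)
atlas migrate test --env local
```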

3. Apply to the database

Once the migration file is validated, the AI agent should apply it to the database by running atlas migrate apply, starting with a dry run. With the --dry-run flag, the command only prints the pending migration statements without executing them, allowing you to review the changes before they are applied. The dry run also triggers any pre-migration checks so you can confirm the migration is safe to apply.

After the dry run, the agent should run atlas migrate apply without this flag to actually apply the migration to the target database. It should then run atlas migrate status to verify the migration was applied successfully.
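
With the local environment from our atlas.hcl, those two invocations look like this:

```bash
# Review the pending statements and pre-migration checks first, then apply for real
atlas migrate apply --env local --dry-run
atlas migrate apply --env local
```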

$ atlas migrate status --env local
Migration Status: OK
-- Current Version: 20250818121430
-- Next Version: Already at latest version
-- Executed Files: 2
-- Pending Files: 0

Next Steps

Atlas brings safety and reliability to schema changes by establishing guardrails that help you and your team move fast with confidence. By configuring your AI agents to work with Atlas, you can ensure that schema changes are made safely and consistently.

To set up your AI agent with Atlas, check out our AI Tools guide.

With the instructions set up, some example prompts you can use with your AI agent to test it with Atlas include:

  • "Add an age column to the users table"
  • "Build Feature X in my application"
  • "Make sure that all tables must have a non-nullable created_at timestamp column with a default value of the current timestamp"
  • "Reorganize my tables so all user data is in one schema and all product data is in another schema"

To continue this conversation and get direct assistance with your setup, join our Discord server.