
Automatic Aurora DSQL Schema Migrations with Atlas

Amazon Aurora DSQL is a serverless, distributed relational database service optimized for transactional workloads. Aurora DSQL is PostgreSQL-compatible (version 16), offering ACID transactions with strong consistency and snapshot isolation. It provides 99.99% single-Region and 99.999% multi-Region availability with automatic scaling and self-healing architecture.

Support for Aurora DSQL is available exclusively to Pro users. To use this feature, run:

atlas login

Enter: Atlas

Atlas helps developers manage their database schema as code - abstracting away the intricacies of database schema management. With Atlas, users provide the desired state of the database schema and Atlas automatically plans the required migrations.

In this guide, we will dive into setting up Atlas for Aurora DSQL schema migrations, and introduce the different workflows available.

Prerequisites

  1. An AWS account with the necessary permissions to create an Aurora DSQL cluster.
  2. Atlas installed on your machine:

To download and install the latest release of the Atlas CLI, simply run the following in your terminal:

curl -sSf https://atlasgo.sh | sh

Logging in to Atlas

To use Aurora DSQL with Atlas, you'll need to log in to Atlas. If it's your first time, you will be prompted to create both an account and a workspace (organization):

atlas login

Connecting to Aurora DSQL

Atlas uses the dsql:// scheme for connecting to Aurora DSQL clusters:

Connect to a database (all schemas):

dsql://admin:<password>@<cluster-endpoint>/?sslmode=require

Example:

dsql://admin:mypassword@abc123xyz.dsql.us-east-1.on.aws/?sslmode=require
note
  • Aurora DSQL requires SSL connections (sslmode=require)
  • The admin user is the default DSQL user
  • Password is obtained from AWS DSQL token generation
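For example, you can generate a short-lived password for the admin role with the AWS CLI (a sketch, assuming AWS CLI v2 with the dsql commands and valid AWS credentials; the endpoint is the placeholder from the example above):

# Generate a temporary admin authentication token to use as the password
aws dsql generate-db-connect-admin-auth-token \
  --hostname abc123xyz.dsql.us-east-1.on.aws \
  --region us-east-1

Because the token contains special characters, URL-escape it before embedding it in a connection URL (the atlas.hcl example later in this guide does this with urlescape).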

Inspecting the Schema

The atlas schema inspect command reads the schema of the database at a given URL and outputs it in different formats, including Atlas DDL (the default), SQL, and JSON. In this guide, we will demonstrate the flow using the Atlas DDL and SQL formats; the JSON format is most often used for processing the output with jq.

To inspect your Aurora DSQL cluster, use the -u flag and write the output to a file named schema.hcl:

atlas schema inspect \
  -u "dsql://admin:pass@cluster.dsql.us-east-1.on.aws/?sslmode=require" > schema.hcl

Open the schema.hcl file to view the Atlas schema that describes your database.

schema.hcl
table "users" {
schema = schema.public
column "id" {
null = false
type = bigint
}
column "name" {
null = false
type = character_varying(255)
}
primary_key {
columns = [column.id]
}
}

schema "public" {
}
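If you prefer plain SQL, the same inspection can be rendered in SQL format by passing a Go template to the --format flag (a sketch; the output file name is arbitrary):

atlas schema inspect \
  -u "dsql://admin:pass@cluster.dsql.us-east-1.on.aws/?sslmode=require" \
  --format '{{ sql . }}' > schema.sql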
info

For in-depth details on the atlas schema inspect command, covering aspects like inspecting specific schemas, handling multiple schemas concurrently, excluding tables, and more, refer to our documentation here.

Declarative Migrations

The declarative approach, sometimes called "state-based migrations", lets users manage schemas by defining the desired state of the database as code. Atlas then inspects the target database and calculates an execution plan to reconcile the difference between the desired and actual states. Let's see this in action.

We will start off by making a change to our schema file, such as adding a repos table:

table "users" {
schema = schema.public
column "id" {
null = false
type = bigint
}
column "name" {
null = false
type = character_varying(255)
}
primary_key {
columns = [column.id]
}
}
table "repos" {
schema = schema.public
column "id" {
type = bigint
null = false
}
column "name" {
type = character_varying(255)
null = false
}
column "owner_id" {
type = bigint
null = false
}
primary_key {
columns = [column.id]
}
}
schema "public" {
}

Now that our desired state has changed, we can ask Atlas to plan and apply the required changes to our database by running the atlas schema apply command:

atlas schema apply \
  -u "dsql://admin:pass@cluster.dsql.us-east-1.on.aws/?sslmode=require" \
  --to file://schema.hcl \
  --dev-url "docker://dsql/16"

Approve the proposed changes, and that's it! You have successfully run a declarative migration.

info

For a more detailed description of the atlas schema apply command refer to our documentation here.

Pushing schemas to Atlas

Similar to how Docker images are pushed to Docker Hub, you can push your schema to Atlas Cloud for versioning, collaboration, and deployment:

atlas schema push app \
  --dev-url "docker://dsql/16"

Once pushed, Atlas prints a URL to the schema. You can then apply it to any database using the schema URL:

atlas schema apply \
  -u "dsql://admin:pass@cluster.dsql.us-east-1.on.aws/?sslmode=require" \
  --to "atlas://app" \
  --dev-url "docker://dsql/16"

This workflow allows you to manage your schema centrally and deploy it to multiple environments without having the schema files locally.

For more advanced workflows, you can use atlas schema plan to pre-plan and review migrations before applying them. This enables teams to plan, lint, and review changes during the PR stage, edit generated SQL if needed, and ensure no human intervention is required during deployment.
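For example, a plan capturing the difference between the pushed schema and your local schema file might look roughly like this (a sketch; the exact flags depend on your project setup):

atlas schema plan \
  --from "atlas://app" \
  --to file://schema.hcl \
  --dev-url "docker://dsql/16"

The resulting plan can then be reviewed, and edited if needed, before it is applied with atlas schema apply.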

Versioned Migrations

Alternatively, the versioned migration workflow, sometimes called "change-based migrations", allows each change to the database schema to be checked in to source control and reviewed during code review. Users still benefit from Atlas intelligently planning migrations for them; however, the migrations are not applied automatically.

Creating the first migration

In the versioned migration workflow, our database state is managed by a migration directory. The migration directory holds all of the migration files created by Atlas, and the sum of all files in lexicographical order represents the current state of the database.

To create our first migration file, we will run the atlas migrate diff command, and we will provide the necessary parameters:

  • --dir the URL to the migration directory, by default it is file://migrations.
  • --to the URL of the desired state. A state can be specified using a database URL, HCL or SQL schema, or another migration directory.
  • --dev-url a URL to a Dev Database that will be used to compute the diff.

atlas migrate diff initial \
  --to file://schema.hcl \
  --dev-url "docker://dsql/16"

Run ls migrations, and you'll notice that Atlas has automatically created a migration directory for us containing two files: the initial migration file (shown below) and an atlas.sum file.

-- Create "users" table
CREATE TABLE "public"."users" (
"id" bigint NOT NULL,
"name" character varying(255) NOT NULL,
PRIMARY KEY ("id")
);
-- Create "repos" table
CREATE TABLE "public"."repos" (
"id" bigint NOT NULL,
"name" character varying(255) NOT NULL,
"owner_id" bigint NOT NULL,
PRIMARY KEY ("id")
);

The migration file represents the current state of our database, and the sum file is used by Atlas to maintain the integrity of the migration directory. To learn more about the sum file, read the documentation.
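If you ever edit migration files by hand, recompute the directory's checksum so the sum file stays consistent (a standard Atlas command):

atlas migrate hash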

Pushing migration directories to Atlas

Now that we have our first migration, we can apply it to a database. There are multiple ways to accomplish this, with most methods covered in the guides section. In this example, we'll demonstrate how to push migrations to Atlas Cloud, much like how Docker images are pushed to Docker Hub.

Let's name our new migration project app and run atlas migrate push:

atlas migrate push app \
  --dev-url "docker://dsql/16"

Once the migration directory is pushed, Atlas prints a URL to the created directory.

Applying migrations

Once our app migration directory has been pushed, we can apply it to a database from any CD platform without necessarily having our directory there.

We'll create a simple Atlas configuration file (atlas.hcl) to store the settings for our environment:

atlas.hcl
variable "dsql_password" {
type = string
default = getenv("DSQL_PASSWORD")
}

locals {
dsql_pass = urlescape(var.dsql_password)
}

env "dsql" {
url = "dsql://admin:${local.dsql_pass}@cluster.dsql.us-east-1.on.aws/?sslmode=require"
migration {
dir = "atlas://app"
}
}
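Before applying, populate the DSQL_PASSWORD environment variable with a fresh token, for example (a sketch, using the same token-generation command shown in the connection section; urlescape in the config handles any special characters):

export DSQL_PASSWORD="$(aws dsql generate-db-connect-admin-auth-token \
  --hostname cluster.dsql.us-east-1.on.aws \
  --region us-east-1)"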

The final step is to apply the migrations to the database. Let's run atlas migrate apply with the --env flag to instruct Atlas to select the environment configuration from the atlas.hcl file:

atlas migrate apply --env dsql
note

Migrations generated with docker://dsql or a DSQL cluster as a dev-database automatically include the -- atlas:txmode none directive. For manually written migration files, add this directive at the top or use --tx-mode none when applying.

After applying the migration, you should receive a link to the deployment and the database where the migration was applied.

Next Step: Setup CI/CD

Once you have your migration directory set up, the next step is to integrate Atlas into your CI/CD pipeline. Atlas provides native integrations for popular platforms such as GitHub Actions and GitLab CI.

DSQL Limitations

Aurora DSQL has some limitations compared to standard PostgreSQL that you should be aware of:

Unsupported Schema Objects and Commands

  • Advisory Locks
  • Procedures
  • Triggers
  • Sequences
  • Extensions
  • JSON/JSONB
  • Foreign Key Constraints
  • Partitions
  • Temporary Tables
  • TRUNCATE
  • ON DELETE CASCADE

For complete compatibility information, see the Aurora DSQL SQL Feature Compatibility documentation.

No Parallel Atlas Execution

DSQL does not support PostgreSQL advisory lock functions (pg_try_advisory_lock), so Atlas cannot acquire locks to prevent concurrent migrations. Ensure only one Atlas process runs against a DSQL database at a time. We recommend following the Modern Database CI/CD guide to set up proper serialization of migration deployments.

Dev Database

Atlas uses a dev database to normalize schemas, validate them, and simulate migrations. This temporary database allows Atlas to detect errors early and generate accurate migration plans.

Recommended approach: For full compatibility, use a real DSQL cluster as your dev database:

  • Database scope: When your schema project spans multiple schemas, use a dedicated DSQL cluster for development, as DSQL doesn't support creating multiple databases within a cluster.
  • Schema scope: When working with a single schema, use a separate schema (e.g., dev) in the same cluster, as shown below.

env "dsql" {
  src = "file://schema.sql"
  url = "dsql://admin:${local.dsql_pass}@cluster.dsql.us-east-1.on.aws/?sslmode=require"

  # Separate dev schema in the same cluster
  dev = "dsql://admin:${local.dsql_pass}@cluster.dsql.us-east-1.on.aws/?sslmode=require&search_path=dev"
}

Alternative: Docker-based dev database

For local development without provisioning additional infrastructure, Atlas offers docker://dsql. This uses a PostgreSQL container but generates DSQL-compatible SQL (e.g., CREATE INDEX ASYNC):

env "dsql" {
src = "file://schema.sql"
url = "dsql://admin:${local.dsql_pass}@cluster.dsql.us-east-1.on.aws/?sslmode=require"

// For schema-scope, use: "docker://dsql/16/postgres?search_path=public"
dev = "docker://dsql/16"
}
note

The Docker container runs PostgreSQL, not actual DSQL. Some DSQL limitations (e.g., no foreign keys, no triggers) are not enforced during local development. Ensure your schema only uses DSQL-supported features.

Transaction Mode

DSQL does not support DDL and DML in the same transaction, and a transaction can include only one DDL statement. When planning migrations against a DSQL database, Atlas automatically adds the -- atlas:txmode none directive to migration files.

For manually written migration files, you can either add the directive at the top of each file:

-- atlas:txmode none

CREATE TABLE users (
  id INT PRIMARY KEY,
  name VARCHAR(255)
);

Or use the --tx-mode flag when applying migrations:

atlas migrate apply --url "dsql://..." --tx-mode none

Index Creation

DSQL requires async index creation. Atlas automatically uses CREATE INDEX ASYNC instead of CREATE INDEX CONCURRENTLY for DSQL connections.

Wrapping Up

In this guide, we demonstrated how to set up Atlas to manage your Aurora DSQL database schema. We covered both declarative and versioned migration workflows, and showed how to generate migrations, push them to an Atlas workspace, and apply them to your databases. Atlas has many more features to explore. To learn more, check out the Atlas documentation.

As always, we would love to hear your feedback and suggestions on our Discord server.

Additional Resources

To learn more about Aurora DSQL, check out the official AWS documentation.