Schema as Code: HCL Syntax
Atlas enables you to define your database schema as code, whether in SQL, through ORMs, custom programs, or the Atlas HCL language. The HCL-based language allows developers to describe database schemas in a declarative manner, and it supports all SQL features supported by Atlas. The main advantages of using HCL are that it enables developers to manage their database schemas like regular code, makes it easy to share and reuse files between projects, allows variable injection, and provides the ability to attach annotations to objects, such as marking them as PII or sensitive data.
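For instance, a minimal sketch of variable injection (the tenant variable name and the --var value below are illustrative): a variable block declares an input, and var.tenant injects it into the schema definition.
variable "tenant" {
  type = string
}
schema "tenant" {
  # Injected at run time, e.g. with --var tenant=acme.
  name = var.tenant
}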
Editor Support
The Atlas HCL language provides plugins for popular editors like VSCode and JetBrains to enhance your day-to-day editing, navigation, formatting, and testing experience.
File Naming for Editor Support
To enable your editor to recognize Atlas HCL files and provide syntax support, use the following file extensions:
| Database | File Suffix |
|---|---|
| MySQL | .my.hcl |
| MariaDB | .ma.hcl |
| PostgreSQL | .pg.hcl |
| SQLite | .lt.hcl |
| ClickHouse | .ch.hcl |
| SQL Server | .ms.hcl |
| Redshift | .rs.hcl |
| Oracle | .oc.hcl |
| Spanner | .sp.hcl |
| Snowflake | .sf.hcl |
| Databricks | .dbx.hcl |
Name your files accordingly, for example: warehouse_schema.pg.hcl for PostgreSQL, test_users.test.hcl for a test file.
Schema
The schema object describes a database schema: a DATABASE in MySQL and SQLite, or a SCHEMA in PostgreSQL.
An HCL file can contain one or more schema objects.
- MySQL
- PostgreSQL
- SQLite
- SQL Server
- ClickHouse
- Redshift
In MySQL and MariaDB, the schema resource can contain the charset and collate attributes. Read more about them
on the MySQL or
MariaDB websites.
# Schema with attributes.
schema "market" {
charset = "utf8mb4"
collate = "utf8mb4_0900_ai_ci"
comment = "A schema comment"
}
# Schema without attributes.
schema "orders" {}
schema "public" {
comment = "A schema comment"
}
schema "private" {}
Atlas does not support attached databases, and supports only the default
database (i.e. main).
schema "main" {}
schema "dbo" {
comment = "A schema comment"
}
schema "private" {}
In ClickHouse, the schema resource can contain the engine attribute. If not specified, the default engine depends on ClickHouse settings.
Use sql() to specify the engine in advanced cases.
Read more about database engines in ClickHouse documentation.
schema "default" {
engine = sql("Replicated('/clickhouse/databases/default', '{shard}', '{replica}')")
}
schema "atomic" {
engine = Atomic
}
schema "public" {
comment = "A schema comment"
}
schema "private" {}
Table
A table describes a table in a SQL database. A table holds its columns, indexes, constraints, and additional attributes
that are supported by the different drivers.
table "users" {
schema = schema.public
column "id" {
type = int
}
column "name" {
type = varchar(255)
}
column "manager_id" {
type = int
}
primary_key {
columns = [
column.id
]
}
index "idx_name" {
columns = [
column.name
]
unique = true
}
foreign_key "manager_fk" {
columns = [column.manager_id]
ref_columns = [column.id]
on_delete = CASCADE
on_update = NO_ACTION
}
}
Check
A check is a child resource of a table that describes a CHECK constraint.
table "products" {
column "price" {
type = float
}
check "positive price" {
expr = "price > 0"
}
}
Partitions
The partition option is a PostgreSQL-specific option that allows defining partitioned tables. Table partitioning refers
to splitting large logical tables into smaller physical ones.
table "logs" {
schema = schema.public
column "date" {
type = date
}
column "text" {
type = text
}
partition {
type = RANGE
columns = [column.date]
}
}
table "metrics" {
schema = schema.public
column "x" {
type = integer
}
column "y" {
type = integer
}
partition {
type = RANGE
by {
column = column.x
}
by {
expr = "floor(y)"
}
}
}
Defining Partitions
Partitions are available only to Atlas Pro users. To use this feature, run:
atlas login
The top-level partition block allows defining partitions. The of attribute specifies the parent table
that the partition belongs to, and the range, list, and hash blocks define the partition boundaries.
- LIST
- HASH
- RANGE
table "traffic" {
schema = schema.public
column "id" {
null = false
type = serial
}
column "region" {
null = false
type = text
}
column "ts" {
null = false
type = timestamp
}
primary_key {
columns = [column.region]
}
partition {
type = LIST
columns = [column.region]
}
}
partition "traffic_default" {
schema = schema.public
of = table.traffic
}
partition "traffic_eu" {
schema = schema.public
of = table.traffic
list {
in = ["'de'", "'fr'", "'it'"]
}
}
partition "traffic_us" {
schema = schema.public
of = table.traffic
list {
in = ["'us-east'", "'us-west'"]
}
}
// ...
table "metrics" {
schema = schema.public
column "device_id" {
null = false
type = integer
}
column "ts" {
null = false
type = timestamp
}
column "value" {
null = true
type = double_precision
}
primary_key {
columns = [column.device_id]
}
partition {
type = HASH
columns = [column.device_id]
}
}
partition "metrics_0" {
schema = schema.public
of = table.metrics
hash {
modulus = 4
remainder = 0
}
}
partition "metrics_1" {
schema = schema.public
of = table.metrics
hash {
modulus = 4
remainder = 1
}
}
// ...
table "invoices" {
schema = schema.public
column "id" {
null = false
type = serial
}
column "issued_at" {
null = false
type = date
default = sql("CURRENT_DATE")
}
column "customer_id" {
null = false
type = integer
}
column "amount" {
null = false
type = numeric
default = 0
}
primary_key {
columns = [column.id, column.issued_at]
}
partition {
type = RANGE
columns = [column.issued_at]
}
}
partition "invoices_2025_06" {
schema = schema.public
of = table.invoices
range {
from = ["'2025-06-01'"]
to = ["'2025-07-01'"]
}
}
partition "invoices_2025_08" {
schema = schema.public
of = table.invoices
range {
from = ["'2025-08-01'"]
to = ["'2025-09-01'"]
}
}
Computed Partitions
The HCL language allows defining computed partitions using the for_each meta-argument. This is useful for creating
partitions based on dynamic values, such as dates or other computed values. For example, given the logs table below,
the following HCL code maintains partitions for the next 7 days, creating a new partition each day and dropping old ones.
table "logs" {
schema = schema.public
column "log_time" {
null = false
type = timestamptz
}
// ...
partition {
type = RANGE
columns = [column.log_time]
}
}
locals {
now = timestamp()
days_ahead = [0, 1, 2, 3, 4, 5, 6, 7]
partition_dates = [for d in local.days_ahead : formatdate("YYYY-MM-DD", timeadd(local.now, "${d * 24}h"))]
}
partition {
for_each = [
for i in range(0, length(local.partition_dates) - 1) : {
name = format("logs_%s", replace(local.partition_dates[i], "-", ""))
from = local.partition_dates[i]
to = local.partition_dates[i + 1]
}
]
name = each.value.name // logs_20250720, ..., logs_20250726.
schema = schema.public
of = table.logs
range {
from = ["'${each.value.from}'"]
to = ["'${each.value.to}'"]
}
}
Foreign Tables
The foreign_table option is a PostgreSQL-specific block that allows defining a foreign table.
A foreign table is a table that is not local to the current database but is defined in another database.
foreign_table "table_1" {
schema = schema.public
column "id" {
type = integer
}
// If the server is defined in the
// same schema, we can reference it.
server = server.test_server
options = {
schema_name = "public"
table_name = "tablex_1"
}
}
foreign_table "table_2" {
schema = schema.public
column "id" {
type = integer
}
column "name" {
type = integer
options = {
column_name = "other_name"
}
}
// If the server is not defined in the same
// schema, we can define its name as a string.
server = "external_server"
}
server "test_server" {
fdw = extension.postgres_fdw
comment = "test server"
options = {
dbname = "postgres"
host = "localhost"
port = "5429"
}
}
Row Level Security
The row_security option is a PostgreSQL-specific option that allows enabling row-level security policies for a table.
table "users" {
schema = schema.public
column "id" {
type = int
}
row_security {
enabled = true // ENABLE ROW LEVEL SECURITY
enforced = true // FORCE ROW LEVEL SECURITY
}
}
To define row-level security policies for a table, refer to the policy example.
Full-Text Index
- SQL Server
- MySQL
The fulltext option is a SQL Server-specific option that allows defining a full-text index on a table.
table "t1" {
schema = schema.dbo
column "c1" {
null = false
type = bigint
identity {
seed = 701
increment = 1000
}
}
column "c2" {
null = false
type = varbinary(-1)
}
column "c3" {
null = false
type = char(3)
}
index "uq_t1" {
unique = true
columns = [column.c1]
}
fulltext {
unique_key = index.uq_t1
filegroup = "PRIMARY"
catalog = "FT_CD"
on {
column = column.c2
type = column.c3
language = "English"
}
}
}
schema "dbo" {
}
To be able to use this feature, you need to enable the Full-Text Search feature in SQL Server, both on the production server and on the dev database. Below is an example of enabling the feature on the dev database using a custom Dockerfile.
- atlas.hcl
- Dockerfile
- schema.sql
docker "sqlserver" "dev" {
image = "mssql:2017-fts"
build {
context = "."
platform = ["linux/amd64"] # SQL Server 2017 only supports amd64 architecture.
}
database = "awesome" # Can't use full-text search in master database.
collate = "Vietnamese_CI_AS" # Collation for the database.
# We need to create the default catalog for full-text search on the database.
baseline = <<SQL
CREATE FULLTEXT CATALOG [FT_CD] ON FILEGROUP [PRIMARY] AS DEFAULT;
SQL
}
env {
name = atlas.env
dev = docker.sqlserver.dev.url
src = "file://schema.sql"
}
The content is taken from this example on the official Microsoft repository.
# mssql-agent-fts-ha-tools
# Maintainers: Microsoft Corporation (twright-msft on GitHub)
# GitRepo: https://github.com/Microsoft/mssql-docker
# Base OS layer: Latest Ubuntu LTS
FROM ubuntu:16.04
# Install prerequistes since it is needed to get repo config for SQL server
RUN export DEBIAN_FRONTEND=noninteractive && \
apt-get update && \
apt-get install -yq curl apt-transport-https && \
# Get official Microsoft repository configuration
curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - && \
curl https://packages.microsoft.com/config/ubuntu/16.04/mssql-server-2017.list | tee /etc/apt/sources.list.d/mssql-server.list && \
apt-get update && \
# Install SQL Server from apt
apt-get install -y mssql-server && \
# Install optional packages
apt-get install -y mssql-server-ha && \
apt-get install -y mssql-server-fts && \
# Cleanup the Dockerfile
apt-get clean && \
rm -rf /var/lib/apt/lists
# Run SQL Server process
CMD /opt/mssql/bin/sqlservr
-- Create "t1" table
CREATE TABLE [t1] (
[c1] bigint IDENTITY (701, 1000) NOT NULL,
[c2] varbinary(MAX) NOT NULL,
[c3] char(3) NOT NULL
);
-- Create index "uq_t1" to table: "t1"
CREATE UNIQUE CLUSTERED INDEX [uq_t1] ON [t1] ([c1] ASC);
-- Add full-text index on table "t1"
CREATE FULLTEXT INDEX ON [t1] (
[c2] TYPE COLUMN [c3] LANGUAGE 'English'
) KEY INDEX [uq_t1] ON ([FT_CD], FILEGROUP [PRIMARY]);
Then we can run atlas schema inspect --env local --url "env://src" to get the schema in HCL format.
To use full-text search in MySQL, define a FULLTEXT index on the table.
table "users" {
schema = schema.myschema
column "text" {
null = false
type = text
}
index "users_text" {
type = FULLTEXT
columns = [column.text]
}
index "ngram_text" {
type = FULLTEXT
parser = ngram
columns = [column.text]
}
}
schema "myschema" {}
Table Qualification
In some cases, an Atlas DDL document may contain multiple tables of the same name. This usually happens
when the same table name appears in two different schemas. In these cases, the table names must be
disambiguated by using resource qualifiers. The following document describes a
database that contains two schemas named a and b, and both of them contain a table named users.
schema "a" {}
schema "b" {}
table "a" "users" {
schema = schema.a
// .. columns
}
table "b" "users" {
schema = schema.b
// .. columns
}
Storage Engine
The engine attribute allows overriding the default storage engine of the table. Supported by MySQL, MariaDB and ClickHouse.
- MySQL & MariaDB
- ClickHouse
table "users" {
schema = schema.public
engine = MyISAM
}
table "posts" {
schema = schema.public
engine = InnoDB
}
table "orders" {
schema = schema.public
engine = "MyRocks"
}
In ClickHouse, the engine can be specified using either enumerated values or string literals.
table "users" {
schema = schema.public
engine = MergeTree
}
table "posts" {
schema = schema.public
engine = "ReplacingMergeTree"
}
For complex engine definitions, use the sql() function to specify the engine as raw SQL.
table "users" {
schema = schema.public
engine = sql("Replicated('/clickhouse/tables/orders', '{shard}', '{replica}')")
}
TimeSeries
The TimeSeries engine is designed to optimize storage and query performance for time-series data.
To create a table with the TimeSeries engine, use the following syntax:
table "orders" {
schema = schema.public
engine = TimeSeries
}
Use the engine block to specify the TimeSeries engine with
External Target Tables.
table "orders" {
schema = schema.public
engine "TimeSeries" {
data = public.data_table
tags = public.tags_table
metrics = public.metrics_table
}
}
To adjust column types, specify them explicitly when defining the TimeSeries table.
table "orders" {
schema = schema.public
engine = TimeSeries
column "timestamp" {
type = DateTime64(6)
}
}
Atlas does not currently support the inner UUID of the TimeSeries engine.
System-Versioned Tables
System-Versioned tables are available only to Atlas Pro users. To use this feature, run:
atlas login
- SQL Server
- MariaDB
The system_versioned block allows marking a table as a system-versioned temporal table.
Supported by SQL Server. This block can be used to define the history table, retention period, and the period unit.
The period block defines the period columns for the system-versioned table. It requires two columns, start and end,
which define the period of validity for each row. These columns must be of type datetime2, be non-nullable, and
carry a generated_always block with ROW_START and ROW_END respectively.
schema "dbo" {}
table "t1" {
schema = schema.dbo
column "c1" {
type = int
null = false
}
column "c2" {
type = money
null = false
}
column "c3" {
type = datetime2(7)
null = false
generated_always {
as = ROW_START
}
}
column "c4" {
type = datetime2(7)
null = false
generated_always {
as = ROW_END
}
}
primary_key {
on {
column = column.c1
desc = true
}
}
period "system_time" {
type = SYSTEM_TIME
start = column.c3
end = column.c4
}
system_versioned {
history_table = "dbo.t1_History"
retention = 3
retention_unit = MONTH
}
}
The system_versioned attribute allows marking a table as a system-versioned table.
Supported by MariaDB.
table "f" {
schema = schema.public
column "id" {
type = int
}
// ...
system_versioned = true
}
Distribution
The distribution block is a Redshift-specific option that allows specifying the distribution method of the table.
table "users" {
schema = schema.public
column "id" {
type = int
}
distribution {
style = KEY // EVEN | ALL | AUTO
key = column.id // only for KEY style
}
}
Sorting
The sort block is a Redshift-specific option that allows specifying the sorting method of the table.
table "users" {
schema = schema.public
column "id" {
type = int
}
sort {
style = COMPOUND // INTERLEAVED | COMPOUND | AUTO
columns = [column.id]
}
}
AUTO
Redshift restricts user access to certain external tables which are used to inspect the sort style.
Therefore, Atlas will ignore differences when changing the style to AUTO.
You will need to manually adjust the sort style on your target Redshift database after modifying it in the Atlas schema.
To change the sort style to AUTO, run the following SQL command:
ALTER TABLE "my_table" ALTER SORTKEY AUTO;
View
A view is a virtual table in the database, defined by a statement that queries rows from one or more existing
tables or views.
Views are available only to Atlas Pro users. To use this feature, run:
atlas login
view "clean_users" {
schema = schema.public
column "id" {
type = int
}
column "name" {
type = text
}
as = <<-SQL
SELECT u.id, u.name
FROM ${table.users.name} AS u
JOIN ${view.active_users.name} AS au USING (id)
SQL
depends_on = [table.users, view.active_users]
comment = "A view to active users without sensitive data"
}
view "comedies" {
schema = schema.public
column "id" {
type = int
}
column "name" {
type = text
}
as = "SELECT id, name FROM films WHERE kind = 'Comedy'"
depends_on = [table.films]
check_option = CASCADED
security = INVOKER // DEFINER | INVOKER (MySQL/MariaDB only).
}
Atlas's testing framework allows you to write unit tests for your views. The following example demonstrates how
to write tests for the clean_users view defined above. For more detail, read the schema testing docs
or see the full example.
- Simple Test
- Table-driven Test
test "schema" "view" {
# Seeding to test view.
exec {
sql = "INSERT INTO users (id, name) VALUES (1, 'Alice'), (2, 'Bob'), (3, 'Charlie');"
}
log {
message = "Seeded the database"
}
# Expected exec to pass.
exec {
sql = <<SQL
SELECT id, name
FROM clean_users;
SQL
}
log {
message = "Tested the view"
}
# Validates data.
exec {
sql = "SELECT id, name FROM clean_users;"
format = table
output = <<TAB
id | name
----+---------
1 | Alice
2 | Bob
3 | Charlie
TAB
}
log {
message = "Table is as expected"
}
}
test "schema" "view" {
for_each = [
{id = 1, name = "Alice"},
{id = 2, name = "Bob"},
{id = 3, name = "Charlie"}
]
# Seed the `users` table.
exec {
sql = "INSERT INTO users (id, name) VALUES (1, 'Alice'), (2, 'Bob'), (3, 'Charlie');"
}
# Query the `clean_users` view.
exec {
sql = "SELECT id, name FROM clean_users WHERE id IN (1, 2, 3);"
}
# Check each ID returns the right user.
log {
message = "Testing ${each.value.id} -> ${each.value.name}"
}
}
Materialized View
A materialized view is a table-like structure that holds the results of a query. Unlike a regular view, the results of
a materialized view are stored in the database and can be refreshed periodically to reflect changes in the underlying data.
Materialized views are available only to Atlas Pro users. To use this feature, run:
atlas login
- PostgreSQL
- Clickhouse
- Redshift
materialized "mat_view" {
schema = schema.public
column "total" {
null = true
type = numeric
}
index "idx_expr" {
unique = true
on {
expr = "((total > (0)::numeric))"
}
}
index "idx_pred" {
unique = true
columns = [column.total]
where = "(total < (0)::numeric)"
}
as = <<-SQL
SELECT sum(total) AS total
FROM m1;
SQL
depends_on = [materialized.m1]
}
When creating materialized views with TO [db.]table,
the view will be created with the same structure as the table or view specified in the TO clause.
materialized "mat_view" {
schema = schema.public
to = table.dest
as = "SELECT * FROM table.src"
depends_on = [table.src]
}
The engine and primary_key attributes are required if the TO clause is not specified.
In this syntax, populate can be used to fill the materialized view with existing data upon creation.
materialized "mat_view" {
schema = schema.public
engine = MergeTree
column "id" {
type = UInt32
}
column "name" {
type = String
}
primary_key {
columns = [column.id]
}
as = "SELECT * FROM table.src"
populate = true
depends_on = [table.src]
}
materialized "mat_view" {
schema = schema.public
column "c1" {
null = true
type = smallint
}
as = "SELECT * FROM t1;"
comment = "example materialized view"
distribution {
style = EVEN
}
sort {
style = AUTO
}
depends_on = [table.t1]
}
Column
A column is a child resource of a table.
column "name" {
type = text
null = false
}
column "age" {
type = integer
default = 42
}
column "active" {
type = tinyint(1)
default = true
}
Properties
| Name | Kind | Type | Description |
|---|---|---|---|
| null | attribute | bool | Defines whether the column is nullable. |
| type | attribute | *schemahcl.Type | Defines the type of data that can be stored in the column. |
| default | attribute | *schemahcl.LiteralValue | Defines the default value of the column. |
Generated Columns
Generated columns are columns whose their values are computed using other columns or by deterministic expressions.
- MySQL
- PostgreSQL
- SQLite
- SQL Server
table "users" {
schema = schema.test
column "a" {
type = int
}
column "b" {
type = int
# In MySQL, generated columns are VIRTUAL by default.
as = "a * 2"
}
column "c" {
type = int
as {
expr = "a * b"
type = STORED
}
}
}
table "users" {
schema = schema.test
column "a" {
type = int
}
column "b" {
type = int
# In PostgreSQL, generated columns are STORED by default.
as = "a * 2"
}
column "c" {
type = int
as {
expr = "a * b"
type = STORED
}
}
}
table "users" {
schema = schema.test
column "a" {
type = int
}
column "b" {
type = int
# In SQLite, generated columns are VIRTUAL by default.
as = "a * 2"
}
column "c" {
type = int
as {
expr = "a * b"
type = STORED
}
}
}
table "users" {
schema = schema.test
column "a" {
type = int
}
column "b" {
type = int
as = "a * 2"
}
column "c" {
type = int
as {
expr = "a * b"
# In SQL Server, computed columns are non-PERSISTED by default.
type = PERSISTED
}
}
}
Note, it is recommended to use the --dev-url option when generated columns are used.
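For instance, a typical invocation passes a dev database alongside the target (the URLs and file name below are placeholders):
atlas schema apply \
  --url "mysql://root:pass@localhost:3306/example" \
  --to "file://schema.my.hcl" \
  --dev-url "docker://mysql/8/example"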
Encodings
Encodings are used to define the compression algorithm for the column data. Supported by ClickHouse and Redshift.
- Redshift
table "users" {
schema = schema.public
column "name" {
type = text
encode = LZ4 // AZ64 | RAW | LZ4 | ZSTD
}
}
Column Types
The SQL dialects supported by Atlas (Postgres, MySQL, MariaDB, and SQLite) vary in the types they support. At this point, the Atlas DDL does not attempt to abstract away the differences between various databases. This means that the schema documents are tied to a specific database engine and version. This may change in a future version of Atlas as we plan to add "Virtual Types" support. This section lists the various types that are supported in each database.
For a full list of supported column types, click here.
Primary Key
A primary_key is a child resource of a table, and it defines the table's
primary key.
Example
- Columns
- Expressions
primary_key {
columns = [column.id]
}
Properties
| Name | Kind | Type | Description |
|---|---|---|---|
| columns | attribute | reference (list) | A list of references to columns that comprise the primary key. |
Note, primary key expressions are supported by ClickHouse.
primary_key {
on {
column = column.id
}
on {
expr = "c1 + c2"
}
}
Properties
| Name | Kind | Type | Description |
|---|---|---|---|
| on | resource | schema.IndexPart (list) | The index parts that comprise the index. |
Foreign Key
A foreign_key is a child resource of a table, and it defines some columns in the table
as references to columns in other tables.
Example
table "users" {
schema = schema.public
column "id" {
type = integer
}
primary_key {
columns = [column.id]
}
}
table "orders" {
schema = schema.market
// ...
column "owner_id" {
type = integer
}
foreign_key "owner_id" {
columns = [column.owner_id]
ref_columns = [table.users.column.id]
on_update = NO_ACTION
on_delete = NO_ACTION
}
}
Referencing Qualified Tables
If a foreign key references a column in a qualified table, it is referenced
using table.<qualifier>.<table_name>.column.<column_name>:
table "public" "users" {
schema = schema.public
column "id" {
type = integer
}
primary_key {
columns = [column.id]
}
}
table "admin" "users" {
schema = schema.admin
// ...
column "external_id" {
type = integer
}
foreign_key "external_id" {
columns = [column.external_id]
ref_columns = [table.public.users.column.id]
on_update = NO_ACTION
on_delete = NO_ACTION
}
}
Properties
| Name | Kind | Type | Description |
|---|---|---|---|
| columns | attribute | reference (list) | The columns that reference other columns. |
| ref_columns | attribute | reference (list) | The referenced columns. |
| on_update | attribute | schema.ReferenceOption | Defines what to do on update. |
| on_delete | attribute | schema.ReferenceOption | Defines what to do on delete. |
Index
An index is a child resource of a table, and it defines an index on the table.
Example
# Columns only.
index "idx_name" {
unique = true
columns = [column.name]
}
# Columns and order.
index "idx_name" {
unique = true
on {
column = column.rank
}
on {
column = column.score
desc = true
}
}
# Custom index type.
index "idx_name" {
type = GIN
columns = [column.data]
}
# Include non-key columns.
index "idx_include" {
columns = [column.range]
include = [column.version]
}
# Define operator class.
index "idx_operator_class" {
type = GIN
on {
column = column.j
ops = jsonb_path_ops
}
}
# Full-text index with ngram parser.
index "index_parser" {
type = FULLTEXT
columns = [column.text]
parser = ngram
}
# Postgres-specific NULLS [NOT] DISTINCT option.
index "index_nulls_not_distinct" {
unique = true
columns = [column.text]
nulls_distinct = false
}
Properties
| Name | Kind | Type | Description |
|---|---|---|---|
| unique | attribute | boolean | Defines whether a uniqueness constraint is set on the index. |
| type | attribute | IndexType (enum) | Defines the index type, e.g. HASH, GIN, FULLTEXT. |
| columns | attribute | reference (list) | The columns that comprise the index. |
| on | resource | schema.IndexPart (list) | The index parts that comprise the index. |
| options | attribute | schema.Attr | Additional driver-specific attributes, e.g. storage_params. |
Index Expressions
Index expressions allow setting indexes over functions or computed expressions. Supported by PostgreSQL, SQLite, and MySQL 8.
table "t" {
schema = schema.test
column "c1" {
type = int
}
column "c2" {
type = int
}
index "i" {
on {
expr = "c1 - c2"
}
on {
expr = "c2 - c1"
}
}
}
Note, it is recommended to use the --dev-url option when index expressions are used.
Partial Indexes
Partial indexes allow setting indexes over a subset of the table. Supported by PostgreSQL and SQLite.
table "t" {
schema = schema.public
column "b" {
type = bool
}
column "c" {
type = int
}
index "i" {
columns = [column.c]
where = "b AND c > 0"
}
}
Note, it is recommended to use the --dev-url option when partial indexes are used.
Index Prefixes
Index prefixes allow setting an index
on the first N characters of string columns. Supported by MySQL and MariaDB.
table "users" {
schema = schema.test
column "name" {
type = varchar(255)
}
index "user_name" {
on {
column = column.name
prefix = 128
}
}
}
Unique Constraints
The unique block allows defining a unique constraint
supported by PostgreSQL:
# Columns only.
unique "name" {
columns = [column.name]
}
# Include non-key columns.
unique "name_include_version" {
columns = [column.name]
include = [column.version]
}
In order to add a unique constraint in non-blocking mode, the index supporting the constraint needs to be created concurrently first and then converted to a unique constraint. To achieve this, follow the steps below (a sketch of the two states is shown after this list):
- Define a unique index block on the desired table.
- Ensure a Diff Policy is used to instruct Atlas to create the index concurrently.
- Apply the migration and ensure the index was created.
- Replace the index block with a unique block to create a new unique constraint using the existing index.
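A minimal sketch of the two states, with illustrative table and column names; the first snippet is applied with a concurrent-index diff policy, and the second converts it by replacing the index block with a unique block of the same name:
# Steps 1-3: a unique index, created concurrently through the diff policy.
table "users" {
  schema = schema.public
  column "email" {
    type = text
  }
  index "users_email_key" {
    unique  = true
    columns = [column.email]
  }
}
# Step 4: the index block replaced by a unique block backed by the existing index.
table "users" {
  schema = schema.public
  column "email" {
    type = text
  }
  unique "users_email_key" {
    columns = [column.email]
  }
}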
Exclude Constraints
The exclude block allows defining an exclusion constraint
supported by PostgreSQL:
exclude "excl_speaker_during" {
type = GIST
on {
column = column.speaker
op = "="
}
on {
column = column.during
op = "&&"
}
}
# Include non-key columns.
exclude "excl_speaker_during" {
type = GIST
on {
column = column.speaker
op = "="
}
on {
column = column.during
op = "&&"
}
include = [column.another]
}
Storage Parameters
The storage_params block allows configuring the storage parameters of the index. Supported by PostgreSQL.
- IVFFlat (pgvector)
- HNSW (pgvector)
- BRIN
- SPGIST
- GIST
- BTREE
Atlas supports defining IVFFlat indexes when the vector extension is defined in the schema, enabling efficient
similarity searches by organizing vectors into lists and searching only those closest to the query vector. (Atlas Pro)
index "index_name" {
type = "IVFFlat"
on {
column = column.embedding
ops = "vector_l2_ops"
}
storage_params {
lists = 100
}
}
extension "vector" {
schema = schema.public
}
Atlas supports defining HNSW indexes when the vector extension is defined in the schema. (Atlas Pro)
index "index_name" {
type = "HNSW"
on {
column = column.embedding
ops = "vector_l2_ops"
}
storage_params {
m = 16
ef_construction = 64
}
}
extension "vector" {
schema = schema.public
}
index "index_name" {
type = BRIN
columns = [column.range]
storage_params {
pages_per_range = 128
}
}
index "index_name" {
columns = [column.point]
type = SPGIST
storage_params {
fillfactor = 100
}
}
index "index_name" {
columns = [column.point]
type = GIST
storage_params {
buffering = ON
fillfactor = 50
}
}
index "index_name" {
columns = [column.id]
storage_params {
deduplicate_items = false
}
}
Trigger
Triggers are available only to Atlas Pro users. To use this feature, run:
atlas login
The trigger block allows defining SQL triggers in HCL format.
- PostgreSQL
- MySQL
- SQLite
- SQL Server
function "audit_orders" {
schema = schema.public
lang = PLpgSQL
return = trigger
as = <<-SQL
BEGIN
INSERT INTO orders_audit(order_id, operation) VALUES (NEW.order_id, TG_OP);
RETURN NEW;
END;
SQL
}
trigger "trigger_orders_audit" {
on = table.orders
after {
insert = true
update_of = [table.orders.column.amount]
}
execute {
function = function.audit_orders
}
}
trigger "after_orders_insert" {
on = table.orders
after {
insert = true
}
as = <<-SQL
BEGIN
INSERT INTO orders_audit(order_id, changed_at, operation)
VALUES (NEW.order_id, NOW(), 'INSERT');
END
SQL
}
trigger "after_orders_update" {
on = table.orders
after {
update = true
}
as = <<-SQL
BEGIN
INSERT INTO orders_audit(order_id, changed_at, operation)
VALUES (NEW.order_id, NOW(), 'UPDATE');
END
SQL
}
trigger "after_orders_insert" {
on = table.orders
after {
insert = true
}
as = <<-SQL
BEGIN
INSERT INTO orders_audit(order_id, operation) VALUES (NEW.order_id, 'INSERT');
END
SQL
}
trigger "after_orders_update" {
on = table.orders
after {
update_of = [table.orders.column.amount]
}
as = <<-SQL
BEGIN
INSERT INTO orders_audit(order_id, operation) VALUES (NEW.order_id, 'UPDATE');
END
SQL
}
trigger "t1_trg" {
on = table.orders
after {
insert = true
update = true
delete = true
}
as = <<-SQL
BEGIN
SET NOCOUNT ON;
DECLARE @c INT;
SELECT @c = COUNT(*) FROM [dbo].[orders];
IF @c > 1000
RAISERROR('Too many rows in orders', 16, 1);
END
SQL
}
trigger "t2_trg" {
on = table.customers
instead_of {
insert = true
}
as = <<-SQL
BEGIN
SET NOCOUNT ON;
INSERT INTO [dbo].[customers] ([name])
SELECT [ins].[name]
FROM [inserted] [ins]
WHERE [ins].[name] NOT IN (
SELECT [name] FROM [dbo].[blacklist_customers]
);
END
SQL
}
Computed Triggers
To configure the same trigger for multiple tables/views, users can utilize the for_each meta-argument. By setting it
up, a trigger block will be computed for each item in the provided value. Note that for_each accepts either a map
or a set of references.
trigger "audit_log_trigger" {
for_each = [table.users, table.orders, table.payments]
on = each.value
after {
insert = true
update = true
delete = true
}
execute {
function = function.audit_log_table
}
}
Event Trigger
Event Triggers are available only to Atlas Pro users. To use this feature, run:
atlas login
The event_trigger block allows defining PostgreSQL event trigger functions
that automatically execute in response to specific events within the database system, like table creation or schema modifications.
# Block table rewrites.
event_trigger "block_table_rewrite" {
on = table_rewrite
execute = function.no_rewrite_allowed
}
# Filter specific events.
event_trigger "record_table_creation" {
on = ddl_command_start
tags = ["CREATE TABLE"]
execute = function.record_table_creation
}
Function
Functions are available only to Atlas Pro users. To use this feature, run:
atlas login
The function block allows defining functions in HCL format. The lang attribute specifies the language of the
function. For example, PLpgSQL, SQL, CLR, etc.
- PostgreSQL
- MySQL
- SQL Server
function "positive" {
schema = schema.public
lang = SQL
arg "v" {
type = integer
}
return = boolean
as = "SELECT v > 0"
}
function "sql_body1" {
schema = schema.public
lang = SQL
arg "v" {
type = integer
}
return = integer
as = <<-SQL
BEGIN ATOMIC
SELECT v;
END
SQL
}
function "sql_body2" {
schema = schema.public
lang = SQL
arg {
type = integer
}
return = integer
as = "RETURN $1"
volatility = IMMUTABLE // STABLE | VOLATILE
leakproof = true // NOT LEAKPROOF | LEAKPROOF
strict = true // (CALLED | RETURNS NULL) ON NULL INPUT
security = INVOKER // DEFINER | INVOKER
}
function "add2" {
schema = schema.public
arg "a" {
type = int
}
arg "b" {
type = int
}
return = int
as = "return a + b"
deterministic = true // NOT DETERMINISTIC | DETERMINISTIC
data_access = NO_SQL // CONTAINS_SQL | NO_SQL | READS_SQL_DATA | MODIFIES_SQL_DATA
security = INVOKER // DEFINER | INVOKER
}
function "f1" {
schema = schema.public
arg "x" {
type = int
}
return = int
as = <<-SQL
BEGIN
INSERT INTO t1 VALUES (RAND(x));
RETURN x+2;
END
SQL
}
function "fn_return_scalar" {
schema = schema.dbo
lang = SQL
arg "@a" {
type = int
}
arg "@b" {
type = int
default = 1
}
return = int
as = <<-SQL
BEGIN
RETURN @a * @a + @b * @b
END
SQL
schema_bound = true // SCHEMABINDING
null_call = RETURNS_NULL // (RETURNS NULL | CALLED) ON NULL INPUT
inline = true // INLINE = { (OFF | ON) }
}
function "fn_return_inline" {
schema = schema.dbo
lang = SQL
arg "@a" {
type = int
}
arg "@b" {
type = int
default = 1
}
return = sql("table")
as = "RETURN SELECT @a as [a], @b as [b], (@a+@b)*2 as [p], @a*@b as [s]"
}
function "fn_return_table" {
schema = schema.dbo
lang = SQL
arg "@a" {
type = int
}
arg "@b" {
type = int
default = 1
}
return_table "@t1" {
column "c1" {
null = false
type = int
}
column "c2" {
null = false
type = nvarchar(255)
}
column "c3" {
null = true
type = nvarchar(255)
default = sql("N'G'")
}
column "c4" {
null = false
type = int
}
primary_key {
columns = [column.c1]
}
index {
unique = true
nonclustered = true
on {
desc = true
column = column.c3
}
on {
column = column.c4
}
}
index {
unique = true
nonclustered = true
on {
column = column.c2
}
on {
desc = true
column = column.c3
}
}
index "idx" {
columns = [column.c2]
nonclustered = true
}
check {
expr = "([c4]>(0))"
}
}
as = <<-SQL
BEGIN
INSERT @t1
SELECT 1 AS [c1], 'A' AS [c2], NULL AS [c3], @a * @a + @b AS [c4];
RETURN
END
SQL
}
Atlas's testing framework allows you to write unit tests for your functions. The following example demonstrates how
to write tests for the positive function defined above. For more detail, read the schema testing docs
or see the full example.
- Simple Test
- Table-driven Test
test "schema" "simple_test" {
parallel = true
assert {
sql = "SELECT positive(1)"
}
log {
message = "First assertion passed"
}
assert {
sql = <<SQL
SELECT NOT positive(0);
SELECT NOT positive(-1);
SQL
}
}
test "schema" "simple_test" {
parallel = true
for_each = [
{input: 1, expected: "t"},
{input: 0, expected: "f"},
{input: -1, expected: "f"},
]
exec {
sql = "SELECT positive(${each.value.input})"
output = each.value.expected
}
}
Aggregate Functions
The aggregate block defines a function that computes a single result from a set of values. Supported by
PostgreSQL.
aggregate "sum_of_squares" {
schema = schema.public
arg {
type = double_precision
}
state_type = double_precision
state_func = function.sum_squares_sfunc
}
function "sum_squares_sfunc" {
schema = schema.public
lang = PLpgSQL
arg "state" {
type = double_precision
}
arg "value" {
type = double_precision
}
return = double_precision
as = <<-SQL
BEGIN
RETURN state + value * value;
END;
SQL
}
Procedure
Procedures are available only to Atlas Pro users. To use this feature, run:
atlas login
The procedure block allows defining SQL procedures in HCL format.
- PostgreSQL
- MySQL
- SQL Server
procedure "proc" {
schema = schema.public
lang = SQL
arg "a" {
type = integer
}
arg "b" {
type = text
}
arg "c" {
type = integer
default = 100
}
as = <<-SQL
INSERT INTO t1 VALUES(a, b);
INSERT INTO t2 VALUES(c, b);
SQL
}
procedure "p1" {
schema = schema.public
arg "x" {
type = varchar(10)
}
as = "INSERT INTO t1 VALUES(x)"
comment = "A procedure comment"
deterministic = true
}
procedure "p2" {
schema = schema.public
arg "x" {
type = char(10)
mode = INOUT
charset = "latin1"
}
arg "y" {
type = char(10)
mode = OUT
}
as = <<-SQL
BEGIN
DECLARE перем1 CHAR(10) CHARACTER SET utf8;
// ...
END
SQL
}
procedure "p1" {
schema = schema.dbo
as = <<-SQL
SET NOCOUNT ON;
SELECT [c1], [c2], [c3]
FROM [dbo].[t1];
SQL
}
procedure "p2" {
schema = schema.dbo
as = <<-SQL
BEGIN
SELECT TOP(10) [c1], [c2], [c3] FROM [dbo].[t1];
SELECT TOP(10) [c1], [c4] FROM [dbo].[t2]; END
SQL
}
procedure "p3" {
schema = schema.dbo
arg "@c2" {
type = nvarchar(50)
}
arg "@c3" {
type = nvarchar(50)
}
as = <<-SQL
SET NOCOUNT ON;
SELECT [c1], [c2], [c3]
FROM [dbo].[t1]
WHERE [c2] = @c2 AND [c3] = @c3;
SQL
}
procedure "p4" {
schema = schema.dbo
arg "@c2" {
type = nvarchar(50)
default = "D%"
}
arg "@c3" {
type = nvarchar(50)
default = "%"
}
as = <<-SQL
BEGIN
SET NOCOUNT ON;
SELECT [c1] as [c1], [c2], [c3]
FROM [dbo].[t1]
WHERE [c2] LIKE @c2 AND [c3] LIKE @c3;
END
SQL
}
procedure "p5" {
schema = schema.dbo
arg "@a" {
type = int
}
arg "@b" {
type = int
}
arg "@s" {
type = int
mode = OUT
}
arg "@p" {
type = int
mode = OUT
}
as = <<-SQL
SET NOCOUNT ON;
SET @s = @a * @b;
SET @p = (@a + @b) * 2;
SQL
}
procedure "p7" {
schema = schema.dbo
as = "TRUNCATE TABLE [dbo].[t1];"
}
procedure "p8" {
schema = schema.dbo
arg "@c" {
type = cursor
mode = OUT
}
as = <<-SQL
SET NOCOUNT ON;
SET @c = CURSOR
FORWARD_ONLY STATIC FOR
SELECT [c1], [c2]
FROM [dbo].[t1];
OPEN @c;
SQL
}
Atlas's testing framework allows you to write unit tests for your procedures. The following example demonstrates how
to write tests for a stored procedure, archive_old_sales, that moves old sales from the sales table to the archive_sales table according to a specified cutoff date.
For more detail, read the schema testing docs
or see the full example.
- Simple Test
- Table-driven Test
test "schema" "procedure" {
# Seed data
exec {
sql = <<-SQL
INSERT INTO sales (id, sale_amount, sale_date) VALUES
(1, 150.00, '2024-07-18'),
(2, 200.00, '2024-06-20'),
(3, 350.00, '2024-07-10');
SQL
}
# Execute the procedure with a specific cutoff date
exec {
sql = "CALL archive_old_sales('2024-07-18')" # Archive sales before this date
}
# Verify data in archive_sales table
exec {
sql = "SELECT COUNT(*) FROM archive_sales WHERE sale_date < '2024-07-18'"
output = "2" # Expect 2 archived sales
}
# Verify data in sales table
exec {
sql = "SELECT COUNT(*) FROM sales"
output = "1" # Expect 1 sale remaining in the sales table after cutoff date
}
}
test "schema" "procedure" {
# Seed data
exec {
sql = <<-SQL
INSERT INTO sales (id, sale_amount, sale_date) VALUES
(1, 150.00, '2024-07-18'),
(2, 200.00, '2024-06-20'),
(3, 350.00, '2024-07-10');
SQL
}
for_each = [
{cutoff_date: "2024-07-18", expected_archived: "2", expected_remaining: "1"},
{cutoff_date: "2024-08-01", expected_archived: "3", expected_remaining: "0"}
]
# Execute the procedure with the current test case cutoff date
exec {
sql = "CALL archive_old_sales('${each.value.cutoff_date}')"
}
# Verify data in archive_sales table
exec {
sql = "SELECT COUNT(*) FROM archive_sales WHERE sale_date < '${each.value.cutoff_date}'"
output = each.value.expected_archived
}
# Verify data in sales table
exec {
sql = "SELECT COUNT(*) FROM sales"
output = each.value.expected_remaining
}
log {
message = "Testing cutoff date: ${each.value.cutoff_date} - Expected Archived: ${each.value.expected_archived}, Expected Remaining: ${each.value.expected_remaining}"
}
}
Domain
Domains are available only to Atlas Pro users. To use this feature, run:
atlas login
The domain type is a user-defined data type that is based on an existing data type but with optional constraints
and default values. Supported by PostgreSQL.
domain "us_postal_code" {
schema = schema.public
type = text
null = true
check "us_postal_code_check" {
expr = "((VALUE ~ '^\\d{5}$'::text) OR (VALUE ~ '^\\d{5}-\\d{4}$'::text))"
}
}
domain "username" {
schema = schema.public
type = text
null = false
default = "anonymous"
check "username_length" {
expr = "(length(VALUE) > 3)"
}
}
table "users" {
schema = schema.public
column "name" {
type = domain.username
}
column "zip" {
type = domain.us_postal_code
}
}
schema "public" {
comment = "standard public schema"
}
Atlas's testing framework allows you to write unit tests for your domains. The following example demonstrates how
to write tests for the us_postal_code domain defined above. For more detail, read the schema testing docs
or see the full example.
- Simple Test
- Table-driven Test
test "schema" "postal" {
parallel = true
exec {
sql = "select '12345'::us_postal_code"
}
catch {
sql = "select 'hello'::us_postal_code"
}
}
test "schema" "us_postal_code_check_valid" {
parallel = true
for_each = [
{input = "12345", expected = "valid"},
{input = "12345-6789", expected = "valid"},
]
log {
message = "Testing postal code: ${each.value.input} -> Expected: ${each.value.expected}"
}
exec {
sql = "SELECT '${each.value.input}'::us_postal_code"
}
}
test "schema" "us_postal_code_check_invalid" {
parallel = true
for_each = [
{input = "hello", expected = "invalid"},
{input = "123", expected = "invalid"},
]
log {
message = "Testing postal code: ${each.value.input} -> Expected: ${each.value.expected}"
}
catch {
sql = "SELECT '${each.value.input}'::us_postal_code"
}
}
Composite Type
Composite types are available only to Atlas Pro users. To use this feature, run:
atlas login
The composite type is a user-defined data type that represents the structure of a row or record. Supported by PostgreSQL.
composite "address" {
schema = schema.public
field "street" {
type = text
}
field "city" {
type = text
}
}
table "users" {
schema = schema.public
column "address" {
type = composite.address
}
}
schema "public" {
comment = "standard public schema"
}
Range Type
Range types are available only to Atlas Pro users. To use this feature, run:
atlas login
Range types in Postgres are types that store a range of values for a specific subtype such as timestamps or numbers, allowing you to query for overlaps, containment, and boundaries.
- Standard Range Types
- Custom Range Types
- Customize Multirange Types
Using the built-in range types provided by PostgreSQL, such as int4range, numrange, tsrange, and tstzrange, does
not require custom definitions.
table "users" {
schema = schema.public
column "validity_period" {
type = tsrange
}
column "age_range" {
type = int4range
}
}
# Custom function to compute the difference between two numeric values.
function "numeric_diff" {
schema = schema.public
lang = SQL
arg "a" {
type = numeric
}
arg "b" {
type = numeric
}
return = double_precision
as = "SELECT b - a;"
volatility = IMMUTABLE
}
# Custom range type definition using the numeric subtype.
range "price_range" {
schema = schema.public
subtype = numeric
subtype_diff = function.numeric_diff
}
# Custom range type usage in a table.
table "products" {
schema = schema.public
column "price_interval" {
null = false
type = range.price_range
}
}
You can use the multirange_name attribute to customize the name of the multirange type that is automatically created by
PostgreSQL when defining a range type. If omitted, PostgreSQL generates a default name according to the following rules:
If the range type name contains the substring range, then the multirange type name is formed by replacing the range substring with multirange in the range type name. Otherwise, the multirange type name is formed by appending a _multirange suffix to the range type name. For example, a range type named price_range yields a multirange named price_multirange, while one named period yields period_multirange.
range "time_range" {
schema = schema.public
subtype = time
multirange_name = "multirange_name"
}
Note, Atlas hides the auto-created constructor functions for both the range and multirange types, along with the multirange type itself, since these are PostgreSQL internals that cannot be edited by the user and are automatically removed when the range type is dropped.
Policies
Policies are available only to Atlas Pro users. To use this feature, run:
atlas login
The policy block allows defining row-level security policies. Supported by PostgreSQL.
policy "sales_rep_access" {
on = table.orders
for = SELECT
to = [PUBLIC]
using = "(sales_rep_id = (CURRENT_USER)::integer)"
}
policy "restrict_sales_rep_updates" {
on = table.orders
as = RESTRICTIVE
for = UPDATE
to = ["custom_role"]
check = "(sales_rep_id = (CURRENT_USER)::integer)"
comment = "This is a restrictive policy"
}
To enable and force row-level security on a table, refer to the table row-level security example.
Computed Policies
To configure the same policy for multiple tables, users can utilize the for_each meta-argument. By setting it
up, a policy block will be computed for each item in the provided value. Note that for_each accepts either a map
or a set of references.
policy "tenant_access_policy" {
for_each = [table.users, table.orders, table.payments]
on = each.value
as = RESTRICTIVE
using = "tenant_isolation_policy()"
}
Sequence
Sequences are available only to Atlas Pro users. To use this feature, run:
atlas login
The sequence block allows defining a sequence number generator. Supported by PostgreSQL and SQL Server.
- PostgreSQL
- SQL Server
Note, a sequence block is printed by Atlas on inspection, or may be manually defined in the schema, only if it
represents a PostgreSQL sequence that is not implicitly created by the database for identity or serial columns.
# Simple sequence with default values.
sequence "s1" {
schema = schema.public
}
# Sequence with custom configuration.
sequence "s2" {
schema = schema.public
type = smallint
start = 100
increment = 2
min_value = 100
max_value = 1000
}
# Sequence that is owned by a column.
sequence "s3" {
schema = schema.public
owner = table.t2.column.id
comment = "Sequence with column owner"
}
# The sequences created by this table are not printed on inspection.
table "users" {
schema = schema.public
column "id" {
type = int
identity {
generated = ALWAYS
start = 10000
}
}
column "serial" {
type = serial
}
primary_key {
columns = [column.id]
}
}
table "t2" {
schema = schema.public
column "id" {
type = int
}
}
schema "public" {
comment = "standard public schema"
}
Atlas supports defining sequences in SQL Server using the sequence block. See more about SQL Server sequences.
# Simple sequence with default values.
sequence "s1" {
schema = schema.dbo
}
# Sequence with custom configuration.
sequence "s2" {
schema = schema.dbo
type = decimal(18, 0)
start = 100000000000000000
increment = 1
min_value = 100000000000000000
max_value = 999999999999999999
cycle = true
}
# A sequence with an alias type.
sequence "s3" {
schema = schema.dbo
type = type_alias.ssn
start = 111111111
increment = 1
min_value = 111111111
}
type_alias "ssn" {
schema = schema.dbo
type = dec(9, 0)
null = false
}
Enum
The enum type allows storing a set of enumerated values. Supported by PostgreSQL.
enum "status" {
schema = schema.test
values = ["on", "off"]
}
table "t1" {
schema = schema.test
column "c1" {
type = enum.status
}
}
table "t2" {
schema = schema.test
column "c1" {
type = enum.status
}
}
Extension
Extensions are available only to Atlas Pro users. To use this feature, run:
atlas login
The extension block allows the definition of PostgreSQL extensions to be loaded into the database. The following
arguments are supported:
- schema (Optional) - The schema in which to install the extension's objects, given that the extension allows its contents to be relocated.
- version (Optional) - The version of the extension to install. Defaults to the version specified in the extension's control file.
- comment (Read-only) - The description of the extension. This field is populated in atlas inspect output.
extension "adminpack" {
version = "2.1"
comment = "administrative functions for PostgreSQL"
}
extension "postgis" {
schema = schema.public
version = "3.4.1"
comment = "PostGIS geometry and geography spatial types and functions"
}
extension "pgcrypto" {
schema = schema.public
version = "1.3"
comment = "cryptographic functions"
}
schema "public" {
comment = "standard public schema"
}
Although the schema argument is supported, it only indicates where the extension's objects will be installed. However,
the extension itself is installed at the database level and cannot be loaded multiple times into different schemas.
Therefore, to avoid conflicts with other schemas when working with extensions, the scope of the migration should be set to the database, where objects are qualified with the schema name. To learn more about the difference between database and schema scopes, visit this doc.
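For illustration, the scope is selected by the connection URL in the project file (the env names and credentials below are placeholders); in PostgreSQL URLs, setting the search_path parameter narrows Atlas to a single schema, while omitting it selects the database scope:
env "db_scope" {
  # Database scope: all schemas are managed, and objects are qualified with schema names.
  url = "postgres://user:pass@localhost:5432/mydb"
}
env "schema_scope" {
  # Schema scope: only the "public" schema is managed.
  url = "postgres://user:pass@localhost:5432/mydb?search_path=public"
}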
Foreign Servers
Foreign-servers are available only to Atlas Pro users. To use this feature, run:
atlas login
The server block defines a foreign server in the database, containing the connection details needed to access an
external data source via a foreign-data wrapper. This feature is supported by PostgreSQL.
extension "postgres_fdw" {
schema = schema.public
}
server "test_server" {
fdw = extension.postgres_fdw
comment = "test server"
options = {
dbname = "postgres"
host = "localhost"
port = "5432"
}
}
Foreign servers are defined at the database level, and their names must be unique within the database.
Therefore, when working with foreign servers, the scope of the migration should be set to the database, where objects are qualified with the schema name. To learn more about the difference between database and schema scopes, visit this doc.
Comment
The comment attribute is an attribute of schema, table, column, and index.
schema "public" {
comment = "A schema comment"
}
table "users" {
schema = schema.public
column "name" {
type = text
comment = "A column comment"
}
index "name_idx" {
columns = [column.name]
}
comment = "A table comment"
}
Charset and Collation
The charset and collate attributes are supported on schema, table, and column resources by MySQL, MariaDB, and PostgreSQL.
Read more about them on the MySQL,
MariaDB, and
PostgreSQL websites.
- MySQL
- PostgreSQL
- SQL Server
schema "public" {
charset = "utf8mb4"
collate = "utf8mb4_0900_ai_ci"
}
table "products" {
column "name" {
type = text
collate = "binary"
}
collate = "utf8_general_ci"
}
schema "public" {}
table "products" {
column "name" {
type = text
collate = "es_ES"
}
}
SQL Server only supports the collate attribute on columns.
schema "dbo" {}
table "users" {
schema = schema.dbo
column "name" {
type = varchar(255)
collate = "Vietnamese_CI_AS"
}
}
Auto Increment
AUTO_INCREMENT and IDENTITY columns are attributes of the column and table resource, and can be used to
generate a unique identity for new rows.
- MySQL
- PostgreSQL
- SQLite
- SQL Server
In MySQL/MariaDB the auto_increment attribute can be set on columns and tables.
table "users" {
schema = schema.public
column "id" {
null = false
type = bigint
auto_increment = true
}
primary_key {
columns = [column.id]
}
}
The auto_increment attribute can be set on the table to configure a start value other than 1.
table "users" {
schema = schema.public
column "id" {
null = false
type = bigint
auto_increment = true
}
primary_key {
columns = [column.id]
}
auto_increment = 100
}
PostgreSQL supports serial columns and the generated as identity syntax for versions >= 10.
table "users" {
schema = schema.public
column "id" {
null = false
type = int
identity {
generated = ALWAYS
start = 10
increment = 10
}
}
primary_key {
columns = [column.id]
}
}
SQLite allows configuring AUTOINCREMENT columns using the auto_increment
attribute.
table "users" {
schema = schema.main
column "id" {
null = false
type = integer
auto_increment = true
}
primary_key {
columns = [column.id]
}
}
table "users" {
schema = schema.dbo
column "id" {
null = false
type = bigint
identity {
seed = 701
increment = 1000
}
}
primary_key {
columns = [column.id]
}
}