HCL Schema
Atlas schemas can be defined in SQL, imported from external ORMs and programs, or written in the Atlas HCL language. The HCL-based language lets developers describe database schemas declaratively, and it supports all SQL features supported by Atlas. Its main advantages are that it enables developers to manage their database schemas like regular code, facilitates sharing and reusing files between projects, allows variable injection, and supports attaching annotations to objects, such as marking them as PII or sensitive data.
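For example, input variables can be injected into a schema at evaluation time. The following is a minimal sketch of the pattern (the variable name, its value, and the tenant-per-schema usage are illustrative, not part of the reference below):
variable "tenant" {
  type = string
}

# The schema name is injected at runtime, e.g. with --var tenant=acme.
schema "tenant" {
  name = var.tenant
}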
Schema
The `schema` object describes a database schema: a `DATABASE` in MySQL and SQLite, or a `SCHEMA` in PostgreSQL. An HCL file can contain one or more schema objects.
- MySQL
- PostgreSQL
- SQLite
- SQL Server
- ClickHouse
- Redshift
In MySQL and MariaDB, the `schema` resource can contain the `charset` and `collate` attributes. Read more about them on the MySQL or MariaDB websites.
# Schema with attributes.
schema "market" {
charset = "utf8mb4"
collate = "utf8mb4_0900_ai_ci"
comment = "A schema comment"
}
# Schema without attributes.
schema "orders" {}
schema "public" {
comment = "A schema comment"
}
schema "private" {}
Atlas does not support attached databases, and supports only the default database (i.e., `main`).
schema "main" {}
schema "dbo" {
comment = "A schema comment"
}
schema "private" {}
In ClickHouse, the `schema` resource can contain the `engine` attribute. If not specified, the default engine depends on the ClickHouse settings. Use `sql()` to specify the engine in advanced cases. Read more about database engines in the ClickHouse documentation.
schema "default" {
engine = sql("Replicated('/clickhouse/databases/default', '{shard}', '{replica}')")
}
schema "atomic" {
engine = Atomic
}
schema "public" {
comment = "A schema comment"
}
schema "private" {}
Table
A `table` describes a table in a SQL database. A table holds its columns, indexes, constraints, and additional attributes that are supported by the different drivers.
table "users" {
schema = schema.public
column "id" {
type = int
}
column "name" {
type = varchar(255)
}
column "manager_id" {
type = int
}
primary_key {
columns = [
column.id
]
}
index "idx_name" {
columns = [
column.name
]
unique = true
}
foreign_key "manager_fk" {
columns = [column.manager_id]
ref_columns = [column.id]
on_delete = CASCADE
on_update = NO_ACTION
}
}
Check
A `check` is a child resource of a `table` that describes a `CHECK` constraint.
table "products" {
column "price" {
type = float
}
check "positive price" {
expr = "price > 0"
}
}
Partitions
The `partition` option is a PostgreSQL-specific option that allows defining table partitioning. Table partitioning refers to splitting large logical tables into smaller physical ones.
table "logs" {
schema = schema.public
column "date" {
type = date
}
column "text" {
type = integer
}
partition {
type = RANGE
columns = [column.date]
}
}
table "metrics" {
schema = schema.public
column "x" {
type = integer
}
column "y" {
type = integer
}
partition {
type = RANGE
by {
column = column.x
}
by {
expr = "floor(y)"
}
}
}
Row Level Security
The `row_security` option is a PostgreSQL-specific option that allows enabling row-level security policies for a table.
table "users" {
schema = schema.public
column "id" {
type = int
}
row_security {
enabled = true // ENABLE ROW LEVEL SECURITY
enforced = true // FORCE ROW LEVEL SECURITY
}
}
To define row-level security policies for a table, refer to the policy example.
Table Qualification
In some cases, an Atlas DDL document may contain multiple tables with the same name. This usually happens when the same table name appears in two different schemas. In such cases, the table names must be disambiguated using resource qualifiers. The following document describes a database that contains two schemas named `a` and `b`, both of which contain a table named `users`.
schema "a" {}
schema "b" {}
table "a" "users" {
schema = schema.a
// .. columns
}
table "b" "users" {
schema = schema.b
// .. columns
}
Storage Engine
The `engine` attribute allows overriding the default storage engine of the table. Supported by MySQL and MariaDB.
table "users" {
schema = schema.public
engine = MyISAM
}
table "posts" {
schema = schema.public
engine = InnoDB
}
table "orders" {
schema = schema.public
engine = "MyRocks"
}
System-Versioned Tables
The `system_versioned` attribute allows marking a table as a system-versioned table. Supported by MariaDB.
System-Versioned tables are currently available only to Atlas Pro users. To use this feature, run:
atlas login
table "f" {
schema = schema.public
column "id" {
type = int
}
// ...
system_versioned = true
}
Distribution
The `distribution` block is a Redshift-specific option that specifies the distribution method of the table.
table "users" {
schema = schema.public
column "id" {
type = int
}
distribution {
style = KEY // EVEN | ALL | AUTO
key = column.id // only for KEY style
}
}
Sorting
The `sort` block is a Redshift-specific option that specifies the sorting method of the table.
table "users" {
schema = schema.public
column "id" {
type = int
}
sort {
style = COMPOUND // INTERLEAVED | COMPOUND | AUTO
columns = [column.id]
}
}
AUTO
Redshift restricts user access to certain external tables that are used to inspect the sort style. Therefore, Atlas will ignore differences when changing the style to `AUTO`. You will need to manually adjust the sort style on your target Redshift database after modifying it in the Atlas schema. To change the sort style to `AUTO`, run the following SQL command:
ALTER TABLE "my_table" ALTER SORTKEY AUTO;
View
A `view` is a virtual table in the database, defined by a statement that queries rows from one or more existing tables or views.
Views are currently available only to Atlas Pro users. To use this feature, run:
atlas login
view "clean_users" {
schema = schema.public
column "id" {
type = int
}
column "name" {
type = text
}
as = <<-SQL
SELECT u.id, u.name
FROM ${table.users.name} AS u
JOIN ${view.active_users.name} AS au USING (id)
SQL
depends_on = [table.users, view.active_users]
comment = "A view to active users without sensitive data"
}
view "comedies" {
schema = schema.public
column "id" {
type = int
}
column "name" {
type = text
}
as = "SELECT id, name FROM films WHERE kind = 'Comedy'"
depends_on = [table.films]
check_option = CASCADED
security = INVOKER // DEFINER | INVOKER (MySQL/MariaDB only).
}
Atlas's testing framework allows you to write unit tests for your views. The following example demonstrates how to write tests for the `clean_users` view defined above. For more detail, read the schema testing docs or see the full example.
- Simple Test
- Table-driven Test
test "schema" "view" {
# Seeding to test view.
exec {
sql = "INSERT INTO users (id, name) VALUES (1, 'Alice'), (2, 'Bob'), (3, 'Charlie');"
}
log {
message = "Seeded the database"
}
# Expected exec to pass.
exec {
sql = <<SQL
SELECT id, name
FROM clean_users;
SQL
}
log {
message = "Tested the view"
}
# Validates data.
exec {
sql = "SELECT id, name FROM clean_users;"
format = table
output = <<TAB
id | name
----+---------
1 | Alice
2 | Bob
3 | Charlie
TAB
}
log {
message = "Table is as expected"
}
}
test "schema" "view" {
for_each = [
{id = 1, name = "Alice"},
{id = 2, name = "Bob"},
{id = 3, name = "Charlie"}
]
# Seed the `users` table.
exec {
sql = "INSERT INTO users (id, name) VALUES (1, 'Alice'), (2, 'Bob'), (3, 'Charlie');"
}
# Query the `clean_users` view.
exec {
sql = "SELECT id, name FROM clean_users WHERE id IN (1, 2, 3);"
}
# Check each ID returns the right user.
log {
message = "Testing ${each.value.id} -> ${each.value.name}"
}
}
Materialized View
A materialized view is a table-like structure that holds the results of a query. Unlike a regular view, the results of a materialized view are stored in the database and can be refreshed periodically to reflect changes in the underlying data.
Materialized views are currently available only to Atlas Pro users. To use this feature, run:
atlas login
- PostgreSQL
- ClickHouse
- Redshift
materialized "mat_view" {
schema = schema.public
column "total" {
null = true
type = numeric
}
index "idx_expr" {
unique = true
on {
expr = "((total > (0)::numeric))"
}
}
index "idx_pred" {
unique = true
columns = [column.total]
where = "(total < (0)::numeric)"
}
as = <<-SQL
SELECT sum(total) AS total
FROM m1;
SQL
depends_on = [materialized.m1]
}
When creating materialized views with `TO [db.]table`, the view will be created with the same structure as the table or view specified in the `TO` clause.
materialized "mat_view" {
schema = schema.public
to = table.dest
as = "SELECT * FROM table.src"
depends_on = [table.src]
}
The `engine` and `primary_key` attributes are required if the `TO` clause is not specified. In this syntax, `populate` can be used to populate the materialized view with existing data upon creation.
materialized "mat_view" {
schema = schema.public
engine = MergeTree
column "id" {
type = UInt32
}
column "name" {
type = String
}
primary_key {
columns = [column.id]
}
as = "SELECT * FROM table.src"
populate = true
depends_on = [table.src]
}
Note that modifying the materialized view structure after the initial creation is currently not supported by Atlas.
materialized "mat_view" {
schema = schema.public
column "c1" {
null = true
type = smallint
}
as = "SELECT * FROM t1;"
comment = "example materialized view"
distribution {
style = EVEN
}
sort {
style = AUTO
}
depends_on = [table.t1]
}
Column
A `column` is a child resource of a `table`.
column "name" {
type = text
null = false
}
column "age" {
type = integer
default = 42
}
column "active" {
type = tinyint(1)
default = true
}
Properties
Name | Kind | Type | Description |
---|---|---|---|
null | attribute | bool | Defines whether the column is nullable. |
type | attribute | *schemahcl.Type | Defines the type of data that can be stored in the column. |
default | attribute | *schemahcl.LiteralValue | Defines the default value of the column. |
Generated Columns
Generated columns are columns whose values are computed from other columns or by deterministic expressions.
- MySQL
- PostgreSQL
- SQLite
- SQL Server
table "users" {
schema = schema.test
column "a" {
type = int
}
column "b" {
type = int
# In MySQL, generated columns are VIRTUAL by default.
as = "a * 2"
}
column "c" {
type = int
as {
expr = "a * b"
type = STORED
}
}
}
table "users" {
schema = schema.test
column "a" {
type = int
}
column "b" {
type = int
# In PostgreSQL, generated columns are STORED by default.
as = "a * 2"
}
column "c" {
type = int
as {
expr = "a * b"
type = STORED
}
}
}
table "users" {
schema = schema.test
column "a" {
type = int
}
column "b" {
type = int
# In SQLite, generated columns are VIRTUAL by default.
as = "a * 2"
}
column "c" {
type = int
as {
expr = "a * b"
type = STORED
}
}
}
table "users" {
schema = schema.test
column "a" {
type = int
}
column "b" {
type = int
as = "a * 2"
}
column "c" {
type = int
as {
expr = "a * b"
# In SQLServer, computed columns are non-PERSISTED by default.
type = PERSISTED
}
}
}
Note, it is recommended to use the `--dev-url` option when generated columns are used.
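As an illustration of the recommendation above, a dev database can be passed with the --dev-url flag or configured once in an atlas.hcl project file. A minimal sketch, assuming such a project file exists (the env name, URLs, and file path are placeholders):
# atlas.hcl
env "local" {
  src = "file://schema.hcl"
  url = "mysql://root:pass@localhost:3306/example"
  # Ephemeral dev database Atlas uses to normalize and verify the schema.
  dev = "docker://mysql/8/example"
}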
Encodings
Encodings are used to define the compression algorithm for the column data. Supported by ClickHouse and Redshift.
- Redshift
table "users" {
schema = schema.public
column "name" {
type = text
encode = LZ4 // AZ64 | RAW | LZ4 | ZSTD
}
}
Column Types
The SQL dialects supported by Atlas (Postgres, MySQL, MariaDB, and SQLite) vary in the types they support. At this point, the Atlas DDL does not attempt to abstract away the differences between various databases. This means that the schema documents are tied to a specific database engine and version. This may change in a future version of Atlas as we plan to add "Virtual Types" support. This section lists the various types that are supported in each database.
For a full list of supported column types, click here.
Primary Key
A `primary_key` is a child resource of a `table`, and it defines the table's primary key.
Example
- Columns
- Expressions
primary_key {
columns = [column.id]
}
Properties
Name | Kind | Type | Description |
---|---|---|---|
columns | attribute | reference (list) | A list of references to columns that comprise the primary key. |
Note, primary key expressions are supported by ClickHouse.
primary_key {
on {
column = column.id
}
on {
expr = "c1 + c2"
}
}
Properties
Name | Kind | Type | Description |
---|---|---|---|
on | resource | schema.IndexPart (list) | The index parts that comprise the index. |
Foreign Key
Foreign keys are child resources of a `table`, and they define columns in the table as references to columns in other tables.
Example
table "users" {
schema = schema.public
column "id" {
type = integer
}
primary_key {
columns = [column.id]
}
}
table "orders" {
schema = schema.market
// ...
column "owner_id" {
type = integer
}
foreign_key "owner_id" {
columns = [column.owner_id]
ref_columns = [table.users.column.id]
on_update = NO_ACTION
on_delete = NO_ACTION
}
}
Referencing Qualified Tables
If a foreign key references a column in a qualified table, the table is referenced using `table.<qualifier>.<table_name>.column.<column_name>`:
table "public" "users" {
schema = schema.public
column "id" {
type = integer
}
primary_key {
columns = [column.id]
}
}
table "admin" "users" {
schema = schema.admin
// ...
column "external_id" {
type = integer
}
foreign_key "external_id" {
columns = [column.external_id]
ref_columns = [table.public.users.column.id]
on_update = NO_ACTION
on_delete = NO_ACTION
}
}
Properties
Name | Kind | Type | Description |
---|---|---|---|
columns | attribute | reference (list) | The columns that reference other columns. |
ref_columns | attribute | reference (list) | The referenced columns. |
on_update | attribute | schema.ReferenceOption | Defines what to do on update. |
on_delete | attribute | schema.ReferenceOption | Defines what to do on delete. |
Index
Indexes are child resources of a `table`, and they define indexes on the table.
Example
# Columns only.
index "idx_name" {
unique = true
columns = [column.name]
}
# Columns and order.
index "idx_name" {
unique = true
on {
column = column.rank
}
on {
column = column.score
desc = true
}
}
# Custom index type.
index "idx_name" {
type = GIN
columns = [column.data]
}
# Control storage options.
index "idx_range" {
type = BRIN
columns = [column.range]
page_per_range = 128
}
# Include non-key columns.
index "idx_include" {
columns = [column.range]
include = [column.version]
}
# Define operator class.
index "idx_operator_class" {
type = GIN
on {
column = column.j
ops = jsonb_path_ops
}
}
# Full-text index with ngram parser.
index "index_parser" {
type = FULLTEXT
columns = [column.text]
parser = ngram
}
# Postgres-specific NULLS [NOT] DISTINCT option.
index "index_nulls_not_distinct" {
unique = true
columns = [column.text]
nulls_distinct = false
}
Properties
Name | Kind | Type | Description |
---|---|---|---|
unique | attribute | boolean | Defines whether a uniqueness constraint is set on the index. |
type | attribute | IndexType (enum) | Defines the index type, e.g. HASH, GIN, FULLTEXT. |
columns | attribute | reference (list) | The columns that comprise the index. |
on | resource | schema.IndexPart (list) | The index parts that comprise the index. |
options | attribute | schema.Attr | Additional driver-specific attributes, e.g. page_per_range. |
Index Expressions
Index expressions allow setting indexes over functions or computed expressions. Supported by PostgreSQL, SQLite, and MySQL 8.
table "t" {
schema = schema.test
column "c1" {
type = int
}
column "c2" {
type = int
}
index "i" {
on {
expr = "c1 - c2"
}
on {
expr = "c2 - c1"
}
}
}
Note, it is recommended to use the `--dev-url` option when index expressions are used.
Partial Indexes
Partial indexes allow setting indexes over a subset of the table. Supported by PostgreSQL and SQLite.
table "t" {
schema = schema.public
column "b" {
type = bool
}
column "c" {
type = int
}
index "i" {
columns = [column.c]
where = "b AND c > 0"
}
}
Note, it is recommended to use the `--dev-url` option when partial indexes are used.
Index Prefixes
Index prefixes allow setting an index on the first `N` characters of string columns. Supported by MySQL and MariaDB.
table "users" {
schema = schema.test
column "name" {
type = varchar(255)
}
index "user_name" {
on {
column = column.name
prefix = 128
}
}
}
Unique Constraints
The `unique` block allows defining a unique constraint. Supported by PostgreSQL:
# Columns only.
unique "name" {
columns = [column.name]
}
# Include non-key columns.
unique "name_include_version" {
columns = [column.name]
include = [column.version]
}
In order to add a unique constraint in non-blocking mode, the index supporting the constraint needs to be created concurrently first and then converted to a unique constraint. To achieve this, follow the steps below (a sketch follows the list):
1. Define a unique `index` block on the desired table.
2. Ensure a Diff Policy is used to instruct Atlas to create the index concurrently.
3. Apply the migration and ensure the index was created.
4. Replace the `index` block with a `unique` block to create a new unique constraint using the existing index.
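A sketch of steps 1 and 4, assuming a hypothetical users table with an email column:
# Step 1: a unique index, created concurrently by the diff policy.
index "users_email_key" {
  unique  = true
  columns = [column.email]
}

# Step 4: the index block is replaced by a unique constraint that
# reuses the existing index.
unique "users_email_key" {
  columns = [column.email]
}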
Exclude Constraints
The `exclude` block allows defining an exclusion constraint. Supported by PostgreSQL:
exclude "excl_speaker_during" {
type = GIST
on {
column = column.speaker
op = "="
}
on {
column = column.during
op = "&&"
}
}
# Include non-key columns.
exclude "excl_speaker_during" {
type = GIST
on {
column = column.speaker
op = "="
}
on {
column = column.during
op = "&&"
}
include = [column.another]
}
Trigger
Triggers are currently available only to Atlas Pro users. To use this feature, run:
atlas login
The `trigger` block allows defining SQL triggers in HCL format.
- PostgreSQL
- MySQL
- SQLite
- SQL Server
function "audit_orders" {
schema = schema.public
lang = PLpgSQL
return = trigger
as = <<-SQL
BEGIN
INSERT INTO orders_audit(order_id, operation) VALUES (NEW.order_id, TG_OP);
RETURN NEW;
END;
SQL
}
trigger "trigger_orders_audit" {
on = table.orders
after {
insert = true
update_of = [table.orders.column.amount]
}
execute {
function = function.audit_orders
}
}
trigger "after_orders_insert" {
on = table.orders
after {
insert = true
}
as = <<-SQL
BEGIN
INSERT INTO orders_audit(order_id, changed_at, operation)
VALUES (NEW.order_id, NOW(), 'INSERT');
END
SQL
}
trigger "after_orders_update" {
on = table.orders
after {
update = true
}
as = <<-SQL
BEGIN
INSERT INTO orders_audit(order_id, changed_at, operation)
VALUES (NEW.order_id, NOW(), 'UPDATE');
END
SQL
}
trigger "after_orders_insert" {
on = table.orders
after {
insert = true
}
as = <<-SQL
BEGIN
INSERT INTO orders_audit(order_id, operation) VALUES (NEW.order_id, 'INSERT');
END
SQL
}
trigger "after_orders_update" {
on = table.orders
after {
update_of = [table.orders.column.amount]
}
as = <<-SQL
BEGIN
INSERT INTO orders_audit(order_id, operation) VALUES (NEW.order_id, 'UPDATE');
END
SQL
}
trigger "t1_trg" {
on = table.orders
after {
insert = true
update = true
delete = true
}
as = <<-SQL
BEGIN
SET NOCOUNT ON;
DECLARE @c INT;
SELECT @c = COUNT(*) FROM [dbo].[orders];
IF @c > 1000
RAISERROR('Too many rows in orders', 16, 1);
END
SQL
}
trigger "t2_trg" {
on = table.customers
instead_of {
insert = true
}
as = <<-SQL
BEGIN
SET NOCOUNT ON;
INSERT INTO [dbo].[customers] ([name])
SELECT [ins].[name]
FROM [inserted] [ins]
WHERE [ins].[name] NOT IN (
SELECT [name] FROM [dbo].[blacklist_customers]
);
END
SQL
}
Computed Triggers
To configure the same trigger for multiple tables/views, users can utilize the `for_each` meta-argument. By setting it up, a `trigger` block will be computed for each item in the provided value. Note that `for_each` accepts either a map or a set of references.
trigger "audit_log_trigger" {
for_each = [table.users, table.orders, table.payments]
on = each.value
after {
insert = true
update = true
delete = true
}
execute {
function = function.audit_log_table
}
}
Event Trigger
Event Triggers are currently available only to Atlas Pro users. To use this feature, run:
atlas login
The `event_trigger` block allows defining PostgreSQL event trigger functions that automatically execute in response to specific events within the database system, like table creation or schema modifications.
# Block table rewrites.
event_trigger "block_table_rewrite" {
on = table_rewrite
execute = function.no_rewrite_allowed
}
# Filter specific events.
event_trigger "record_table_creation" {
on = ddl_command_start
tags = ["CREATE TABLE"]
execute = function.record_table_creation
}
Function
Functions are currently available only to Atlas Pro users. To use this feature, run:
atlas login
The `function` block allows defining functions in HCL format. The `lang` attribute specifies the language of the function, for example, `PLpgSQL`, `SQL`, `CLR`, etc.
- PostgreSQL
- MySQL
- SQL Server
function "positive" {
schema = schema.public
lang = SQL
arg "v" {
type = integer
}
return = boolean
as = "SELECT v > 0"
}
function "sql_body1" {
schema = schema.public
lang = SQL
arg "v" {
type = integer
}
return = integer
as = <<-SQL
BEGIN ATOMIC
SELECT v;
END
SQL
}
function "sql_body2" {
schema = schema.public
lang = SQL
arg {
type = integer
}
return = integer
as = "RETURN $1"
volatility = IMMUTABLE // STABLE | VOLATILE
leakproof = true // NOT LEAKPROOF | LEAKPROOF
strict = true // (CALLED | RETURNS NULL) ON NULL INPUT
security = INVOKER // DEFINER | INVOKER
}
function "add2" {
schema = schema.public
arg "a" {
type = int
}
arg "b" {
type = int
}
return = int
as = "return a + b"
deterministic = true // NOT DETERMINISTIC | DETERMINISTIC
data_access = NO_SQL // CONTAINS_SQL | NO_SQL | READS_SQL_DATA | MODIFIES_SQL_DATA
security = INVOKER // DEFINER | INVOKER
}
function "f1" {
schema = schema.public
arg "x" {
type = int
}
return = int
as = <<-SQL
BEGIN
INSERT INTO t1 VALUES (RAND(x));
RETURN x+2;
END
SQL
}
function "fn_return_scalar" {
schema = schema.dbo
lang = SQL
arg "@a" {
type = int
}
arg "@b" {
type = int
default = 1
}
return = int
as = <<-SQL
BEGIN
RETURN @a * @a + @b * @b
END
SQL
schema_bound = true // SCHEMABINDING
null_call = RETURNS_NULL // (RETURNS NULL | CALLED) ON NULL INPUT
inline = true // INLINE = { (OFF | ON) }
}
function "fn_return_inline" {
schema = schema.dbo
lang = SQL
arg "@a" {
type = int
}
arg "@b" {
type = int
default = 1
}
return = sql("table")
as = "RETURN SELECT @a as [a], @b as [b], (@a+@b)*2 as [p], @a*@b as [s]"
}
function "fn_return_table" {
schema = schema.dbo
lang = SQL
arg "@a" {
type = int
}
arg "@b" {
type = int
default = 1
}
return_table "@t1" {
column "c1" {
null = false
type = int
}
column "c2" {
null = false
type = nvarchar(255)
}
column "c3" {
null = true
type = nvarchar(255)
default = sql("N'G'")
}
column "c4" {
null = false
type = int
}
primary_key {
columns = [column.c1]
}
index {
unique = true
nonclustered = true
on {
desc = true
column = column.c3
}
on {
column = column.c4
}
}
index {
unique = true
nonclustered = true
on {
column = column.c2
}
on {
desc = true
column = column.c3
}
}
index "idx" {
columns = [column.c2]
nonclustered = true
}
check {
expr = "([c4]>(0))"
}
}
as = <<-SQL
BEGIN
INSERT @t1
SELECT 1 AS [c1], 'A' AS [c2], NULL AS [c3], @a * @a + @b AS [c4];
RETURN
END
SQL
}
Atlas's testing framework allows you to write unit tests for your functions. The following example demonstrates how to write tests for the `positive` function defined above. For more detail, read the schema testing docs or see the full example.
- Simple Test
- Table-driven Test
test "schema" "simple_test" {
parallel = true
assert {
sql = "SELECT positive(1)"
}
log {
message = "First assertion passed"
}
assert {
sql = <<SQL
SELECT NOT positive(0);
SELECT NOT positive(-1);
SQL
}
}
test "schema" "simple_test" {
parallel = true
for_each = [
{input: 1, expected: "t"},
{input: 0, expected: "f"},
{input: -1, expected: "f"},
]
exec {
sql = "SELECT positive(${each.value.input})"
output = each.value.expected
}
}
Aggregate Functions
The `aggregate` block defines a function that computes a single result from a set of values. Supported by PostgreSQL.
aggregate "sum_of_squares" {
schema = schema.public
arg {
type = double_precision
}
state_type = double_precision
state_func = function.sum_squares_sfunc
}
function "sum_squares_sfunc" {
schema = schema.public
lang = PLpgSQL
arg "state" {
type = double_precision
}
arg "value" {
type = double_precision
}
return = double_precision
as = <<-SQL
BEGIN
RETURN state + value * value;
END;
SQL
}
Procedure
Procedures are currently available only to Atlas Pro users. To use this feature, run:
atlas login
The `procedure` block allows defining SQL procedures in HCL format.
- PostgreSQL
- MySQL
- SQL Server
procedure "proc" {
schema = schema.public
lang = SQL
arg "a" {
type = integer
}
arg "b" {
type = text
}
arg "c" {
type = integer
default = 100
}
as = <<-SQL
INSERT INTO t1 VALUES(a, b);
INSERT INTO t2 VALUES(c, b);
SQL
}
procedure "p1" {
schema = schema.public
arg "x" {
type = varchar(10)
}
as = "INSERT INTO t1 VALUES(x)"
comment = "A procedure comment"
deterministic = true
}
procedure "p2" {
schema = schema.public
arg "x" {
type = char(10)
mode = INOUT
charset = "latin1"
}
arg "y" {
type = char(10)
mode = OUT
}
as = <<-SQL
BEGIN
DECLARE перем1 CHAR(10) CHARACTER SET utf8;
// ...
END
SQL
}
procedure "p1" {
schema = schema.dbo
as = <<-SQL
SET NOCOUNT ON;
SELECT [c1], [c2], [c3]
FROM [dbo].[t1];
SQL
}
procedure "p2" {
schema = schema.dbo
as = <<-SQL
BEGIN
SELECT TOP(10) [c1], [c2], [c3] FROM [dbo].[t1];
SELECT TOP(10) [c1], [c4] FROM [dbo].[t2]; END
SQL
}
procedure "p3" {
schema = schema.dbo
arg "@c2" {
type = nvarchar(50)
}
arg "@c3" {
type = nvarchar(50)
}
as = <<-SQL
SET NOCOUNT ON;
SELECT [c1], [c2], [c3]
FROM [dbo].[t1]
WHERE [c2] = @c2 AND [c3] = @c3;
SQL
}
procedure "p4" {
schema = schema.dbo
arg "@c2" {
type = nvarchar(50)
default = "D%"
}
arg "@c3" {
type = nvarchar(50)
default = "%"
}
as = <<-SQL
BEGIN
SET NOCOUNT ON;
SELECT [c1] as [c1], [c2], [c3]
FROM [dbo].[t1]
WHERE [c2] LIKE @c2 AND [c3] LIKE @c3;
END
SQL
}
procedure "p5" {
schema = schema.dbo
arg "@a" {
type = int
}
arg "@b" {
type = int
}
arg "@s" {
type = int
mode = OUT
}
arg "@p" {
type = int
mode = OUT
}
as = <<-SQL
SET NOCOUNT ON;
SET @s = @a * @b;
SET @p = (@a + @b) * 2;
SQL
}
procedure "p7" {
schema = schema.dbo
as = "TRUNCATE TABLE [dbo].[t1];"
}
procedure "p8" {
schema = schema.dbo
arg "@c" {
type = cursor
mode = OUT
}
as = <<-SQL
SET NOCOUNT ON;
SET @c = CURSOR
FORWARD_ONLY STATIC FOR
SELECT [c1], [c2]
FROM [dbo].[t1];
OPEN @c;
SQL
}
Atlas's testing framework allows you to write unit tests for your procedures. The following example demonstrates how to write tests for a stored procedure, `archive_old_sales`, that moves old sales from the `sales` table to the `archive_sales` table according to a specified cutoff date. For more detail, read the schema testing docs or see the full example.
- Simple Test
- Table-driven Test
test "schema" "procedure" {
# Seed data
exec {
sql = <<-SQL
INSERT INTO sales (id, sale_amount, sale_date) VALUES
(1, 150.00, '2024-07-18'),
(2, 200.00, '2024-06-20'),
(1, 350.00, '2024-07-10');
SQL
}
# Execute the procedure with a specific cutoff date
exec {
sql = "CALL archive_old_sales('2024-07-18')" # Archive sales before this date
}
# Verify data in archive_sales table
exec {
sql = "SELECT COUNT(*) FROM archive_sales WHERE sale_date < '2024-07-18'"
output = "2" # Expect 2 archived sales
}
# Verify data in sales table
exec {
sql = "SELECT COUNT(*) FROM sales"
output = "1" # Expect 1 sale remaining in the sales table after cutoff date
}
}
test "schema" "procedure" {
# Seed data
exec {
sql = <<-SQL
INSERT INTO sales (id, sale_amount, sale_date) VALUES
(1, 150.00, '2024-07-18'),
(2, 200.00, '2024-06-20'),
(1, 350.00, '2024-07-10');
SQL
}
for_each = [
{cutoff_date: "2024-07-18", expected_archived: "2", expected_remaining: "1"},
{cutoff_date: "2024-08-01", expected_archived: "3", expected_remaining: "0"}
]
# Execute the procedure with the current test case cutoff date
exec {
sql = "CALL archive_old_sales('${each.value.cutoff_date}')"
}
# Verify data in archive_sales table
exec {
sql = "SELECT COUNT(*) FROM archive_sales WHERE sale_date < '${each.value.cutoff_date}'"
output = each.value.expected_archived
}
# Verify data in sales table
exec {
sql = "SELECT COUNT(*) FROM sales"
output = each.value.expected_remaining
}
log {
message = "Testing cutoff date: ${each.value.cutoff_date} - Expected Archived: ${each.value.expected_archived}, Expected Remaining: ${each.value.expected_remaining}"
}
}
Domain
Domains are currently available only to Atlas Pro users. To use this feature, run:
atlas login
The `domain` type is a user-defined data type that is based on an existing data type but with optional constraints and default values. Supported by PostgreSQL.
domain "us_postal_code" {
schema = schema.public
type = text
null = true
check "us_postal_code_check" {
expr = "((VALUE ~ '^\\d{5}$'::text) OR (VALUE ~ '^\\d{5}-\\d{4}$'::text))"
}
}
domain "username" {
schema = schema.public
type = text
null = false
default = "anonymous"
check "username_length" {
expr = "(length(VALUE) > 3)"
}
}
table "users" {
schema = schema.public
column "name" {
type = domain.username
}
column "zip" {
type = domain.us_postal_code
}
}
schema "public" {
comment = "standard public schema"
}
Atlas's testing framework allows you to write unit tests for your domains. The following example demonstrates how to write tests for the `us_postal_code` domain defined above. For more detail, read the schema testing docs or see the full example.
- Simple Test
- Table-driven Test
test "schema" "postal" {
parallel = true
exec {
sql = "select '12345'::us_postal_code"
}
catch {
sql = "select 'hello'::us_postal_code"
}
}
test "schema" "us_postal_code_check_valid" {
parallel = true
for_each = [
{input = "12345", expected = "valid"},
{input = "12345-6789", expected = "valid"},
]
log {
message = "Testing postal code: ${each.value.input} -> Expected: ${each.value.expected}"
}
exec {
sql = "SELECT '${each.value.input}'::us_postal_code"
}
}
test "schema" "us_postal_code_check_invalid" {
parallel = true
for_each = [
{input = "hello", expected = "invalid"},
{input = "123", expected = "invalid"},
]
log {
message = "Testing postal code: ${each.value.input} -> Expected: ${each.value.expected}"
}
catch {
sql = "SELECT '${each.value.input}'::us_postal_code"
}
}
Composite Type
Composite types are currently available only to Atlas Pro users. To use this feature, run:
atlas login
The `composite` type is a user-defined data type that represents the structure of a row or record. Supported by PostgreSQL.
composite "address" {
schema = schema.public
field "street" {
type = text
}
field "city" {
type = text
}
}
table "users" {
schema = schema.public
column "address" {
type = composite.address
}
}
schema "public" {
comment = "standard public schema"
}
Policies
Policies are currently available only to Atlas Pro users. To use this feature, run:
atlas login
The `policy` block allows defining row-level security policies. Supported by PostgreSQL.
policy "sales_rep_access" {
on = table.orders
for = SELECT
to = [PUBLIC]
using = "(sales_rep_id = (CURRENT_USER)::integer)"
}
policy "restrict_sales_rep_updates" {
on = table.orders
as = RESTRICTIVE
for = UPDATE
to = ["custom_role"]
check = "(sales_rep_id = (CURRENT_USER)::integer)"
comment = "This is a restrictive policy"
}
To enable and force row-level security on a table, refer to the table row-level security example.
Computed Policies
To configure the same policy for multiple tables, users can utilize the `for_each` meta-argument. By setting it up, a `policy` block will be computed for each item in the provided value. Note that `for_each` accepts either a map or a set of references.
policy "tenant_access_policy" {
for_each = [table.users, table.orders, table.payments]
on = each.value
as = RESTRICTIVE
using = "tenant_isolation_policy()"
}
Sequence
Sequences are currently available only to Atlas Pro users. To use this feature, run:
atlas login
The `sequence` block allows defining sequence number generators. Supported by PostgreSQL and SQL Server.
- PostgreSQL
- SQL Server
Note, a `sequence` block is printed by Atlas on inspection, or may be manually defined in the schema, only if it represents a PostgreSQL sequence that is not implicitly created by the database for identity or serial columns.
# Simple sequence with default values.
sequence "s1" {
schema = schema.public
}
# Sequence with custom configuration.
sequence "s2" {
schema = schema.public
type = smallint
start = 100
increment = 2
min_value = 100
max_value = 1000
}
# Sequence that is owned by a column.
sequence "s3" {
schema = schema.public
owner = table.t2.column.id
comment = "Sequence with column owner"
}
# The sequences created by this table are not printed on inspection.
table "users" {
schema = schema.public
column "id" {
type = int
identity {
generated = ALWAYS
start = 10000
}
}
column "serial" {
type = serial
}
primary_key {
columns = [column.id]
}
}
table "t2" {
schema = schema.public
column "id" {
type = int
}
}
schema "public" {
comment = "standard public schema"
}
Atlas supports defining sequences in SQL Server using the `sequence` block. See more about SQL Server sequences.
# Simple sequence with default values.
sequence "s1" {
schema = schema.dbo
}
# Sequence with custom configuration.
sequence "s2" {
schema = schema.dbo
type = decimal(18, 0)
start = 100000000000000000
increment = 1
min_value = 100000000000000000
max_value = 999999999999999999
cycle = true
}
# The sequences with alias-type.
sequence "s3" {
schema = schema.dbo
type = type_alias.ssn
start = 111111111
increment = 1
min_value = 111111111
}
type_alias "ssn" {
schema = schema.dbo
type = dec(9, 0)
null = false
}
Enum
The `enum` type allows storing a set of enumerated values. Supported by PostgreSQL.
enum "status" {
schema = schema.test
values = ["on", "off"]
}
table "t1" {
schema = schema.test
column "c1" {
type = enum.status
}
}
table "t2" {
schema = schema.test
column "c1" {
type = enum.status
}
}
Extension
Extensions are currently available only to Atlas Pro users. To use this feature, run:
atlas login
The `extension` block allows the definition of PostgreSQL extensions to be loaded into the database. The following arguments are supported:
- `schema` (Optional) - The schema in which to install the extension's objects, given that the extension allows its contents to be relocated.
- `version` (Optional) - The version of the extension to install. Defaults to the version specified in the extension's control file.
- `comment` (Read-only) - The description of the extension. This field is populated in `atlas inspect` output.
extension "adminpack" {
version = "2.1"
comment = "administrative functions for PostgreSQL"
}
extension "postgis" {
schema = schema.public
version = "3.4.1"
comment = "PostGIS geometry and geography spatial types and functions"
}
extension "pgcrypto" {
schema = schema.public
version = "1.3"
comment = "cryptographic functions"
}
schema "public" {
comment = "standard public schema"
}
Although the `schema` argument is supported, it only indicates where the extension's objects will be installed. The extension itself is installed at the database level and cannot be loaded multiple times into different schemas. Therefore, to avoid conflicts with other schemas, when working with extensions, the scope of the migration should be set to the database, where objects are qualified with the schema name. To learn more about the difference between database and schema scopes, visit this doc.
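For illustration, with PostgreSQL the scope is derived from the connection URL: omitting search_path selects the database scope, while setting it pins Atlas to a single schema. A sketch, assuming an atlas.hcl project file with placeholder credentials:
env "db_scope" {
  # Database scope: objects are qualified with their schema name.
  url = "postgres://user:pass@localhost:5432/app?sslmode=disable"
}

env "schema_scope" {
  # Schema scope: inspection and migration are limited to the public schema.
  url = "postgres://user:pass@localhost:5432/app?search_path=public&sslmode=disable"
}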
Comment
The `comment` attribute is an attribute of `schema`, `table`, `column`, and `index`.
schema "public" {
comment = "A schema comment"
}
table "users" {
schema = schema.public
column "name" {
type = text
comment = "A column comment"
}
index "name_idx" {
columns = [column.name]
}
comment = "A table comment"
}
Charset and Collation
The `charset` and `collate` attributes are attributes of `schema`, `table`, and `column`, and are supported by MySQL, MariaDB, and PostgreSQL. Read more about them on the MySQL, MariaDB, and PostgreSQL websites.
- MySQL
- PostgreSQL
- SQL Server
schema "public" {
charset = "utf8mb4"
collate = "utf8mb4_0900_ai_ci"
}
table "products" {
column "name" {
type = text
collate = "binary"
}
collate = "utf8_general_ci"
}
schema "public" {}
table "products" {
column "name" {
type = text
collate = "es_ES"
}
}
SQL Server only supports the `collate` attribute on columns.
schema "dbo" {}
table "users" {
schema = schema.dbo
column "name" {
type = varchar(255)
collate = "Vietnamese_CI_AS"
}
}
Auto Increment
`AUTO_INCREMENT` and `IDENTITY` columns are attributes of the `column` and `table` resources, and can be used to generate a unique identity for new rows.
- MySQL
- PostgreSQL
- SQLite
- SQL Server
In MySQL/MariaDB, the `auto_increment` attribute can be set on columns and tables.
table "users" {
schema = schema.public
column "id" {
null = false
type = bigint
auto_increment = true
}
primary_key {
columns = [column.id]
}
}
The `auto_increment` attribute can also be set on the table to configure a start value other than 1.
table "users" {
schema = schema.public
column "id" {
null = false
type = bigint
auto_increment = true
}
primary_key {
columns = [column.id]
}
auto_increment = 100
}
PostgreSQL supports `serial` columns and the `generated as identity` syntax for versions >= 10.
table "users" {
schema = schema.public
column "id" {
null = false
type = int
identity {
generated = ALWAYS
start = 10
increment = 10
}
}
primary_key {
columns = [column.id]
}
}
SQLite allows configuring `AUTOINCREMENT` columns using the `auto_increment` attribute.
table "users" {
schema = schema.main
column "id" {
null = false
type = integer
auto_increment = true
}
primary_key {
columns = [column.id]
}
}
table "users" {
schema = schema.dbo
column "id" {
null = false
type = bigint
identity {
seed = 701
increment = 1000
}
}
primary_key {
columns = [column.id]
}
}