Atlas v0.38: Linting Analyzers, PII Detection, Migration Hooks, and More
Hi everyone!
We're excited to share with you the release of Atlas v0.38, filled with many new features and enhancements for you to try.
- Oracle Triggers and Views - We've expanded the support for Oracle schemas to include triggers and views.
- Snowflake Additions - Our library of supported resources for Snowflake has also expanded with the additions of stages, external tables, hybrid tables, and dynamic tables.
- Google Spanner Additions - Spanner users can now manage geo-partitioning placements, locality groups, sequences, and change streams with Atlas.
- Expanded Analyzer Detection - Our linting analyzers now detect SQL injections in SQL schema and migration files, and incorrect usage of transactions in migration files.
- HTTP Data Source - Users can now use HTTP endpoints as data sources in the Atlas configuration file.
- PII Detection - Objects containing potentially sensitive or PII data can now be automatically or manually tagged in the Atlas Registry.
- Pre/Post-migration Hooks - Pre- and post-migration hooks enable teams to run custom logic before and after applying migrations.
- Atlas Monitoring - The Atlas Agent can now automatically discover and monitor RDS instances across multiple AWS accounts using IAM role assumption.
- Azure DevOps Repos CI/CD Integration - Atlas now provides native integration with Azure DevOps Pipelines and Azure Repos, including a dedicated Azure DevOps extension for seamless database schema CI/CD workflows.
Our beta drivers also continue to grow, adding support for more features to help you manage schemas across database types. To get started with the custom release:
macOS + Linux

To download and install the custom release of the Atlas CLI, simply run the following in your terminal:

curl -sSf https://atlasgo.sh | ATLAS_FLAVOR="oracle" sh

Docker

To pull the Atlas image and run it as a Docker container:

docker pull arigaio/atlas:latest-extended
docker run --rm arigaio/atlas:latest-extended --help

If the container needs access to the host network or a local directory, use the --net=host flag and mount the desired directory:

docker run --rm --net=host \
  -v $(pwd)/migrations:/migrations \
  arigaio/atlas:latest-extended migrate apply \
  --url "oracle://PDBADMIN:Pssw0rd0995@localhost:1521/FREEPDB1"

Windows

Download the custom release and move the atlas binary to a file location on your system PATH.
Oracle Support Additions
With Atlas v0.38, we have expanded our Oracle driver (currently in beta) to include support for triggers and views in your schema management. This means that any triggers and views in your Oracle schemas will be included in automatic schema inspections, schema diffs, etc.
-- [...schema truncated for space]
-- Trigger that records any salary change
CREATE TRIGGER TRG_UPDATE_SALARY_AUDIT
AFTER UPDATE OF SALARY ON EMPLOYEES
FOR EACH ROW
WHEN (OLD.SALARY != NEW.SALARY)
BEGIN
  INSERT INTO SALARY_AUDIT (EMP_ID, OLD_SALARY, NEW_SALARY, CHANGE_DATE)
  VALUES (:OLD.EMP_ID, :OLD.SALARY, :NEW.SALARY, SYSDATE);
END;

-- View that shows employees with department names and computed annual salary
CREATE VIEW EMPLOYEE_INFO_VIEW AS
SELECT
  e.EMP_ID,
  e.FIRST_NAME || ' ' || e.LAST_NAME AS FULL_NAME,
  d.DEPT_NAME,
  e.SALARY,
  (e.SALARY * 12) AS ANNUAL_SALARY,
  e.HIRE_DATE
FROM EMPLOYEES e
JOIN DEPARTMENTS d ON e.DEPT_ID = d.DEPT_ID;
Snowflake Support Additions
Atlas v0.38 introduces broader support for our Snowflake driver (currently in beta), providing more flexibility in using Atlas to manage your Snowflake schema.
Newly supported resources include:
- Stages: Define external storage locations for loading and unloading data.

schema.hcl
stage "mystage" {
  schema              = schema.public
  url                 = "s3://mybucket/data/"
  storage_integration = "my_integration"
  directory_enabled   = true
  comment             = "This is my stage"
}

- External Tables: Refer to data stored outside Snowflake, such as in S3, and query it as if it were a regular table.

schema.hcl
external_table "events" {
  schema = schema.public
  column "event_id" {
    type = VARCHAR(255)
    as {
      expr = "value:c1::varchar"
    }
  }
  location {
    stage = stage.mystage
    path  = "data/events/"
  }
}

- Hybrid Tables: Combine transactional and analytical workloads in a single table, supporting flexible data operations.

schema.hcl
hybrid_table "users" {
  schema = schema.public
  column "id" {
    comment = "User ID"
    type    = VARCHAR(255)
  }
  primary_key {
    columns = ["id"]
  }
  ...
}

- Dynamic Tables: Define tables that automatically refresh based on upstream changes, supporting incremental or full refresh modes.

schema.hcl
dynamic_table "users" {
  schema = schema.public
  column "id" {
    comment = "User ID"
  }
  target_lag   = DOWNSTREAM // or "<num> { seconds | minutes | hours | days }"
  refresh_mode = INCREMENTAL // or FULL
  as           = "SELECT * FROM users"
  ...
}
Read more about implementing these resources in the Atlas configuration documentation.
Google Spanner Support Additions
Atlas v0.38 also comes with more support for the Google Spanner driver (currently in beta) to improve managing your Spanner schema with Atlas.
The newly added resource support includes:
- Geo-partitioning Placements: Partition logical tables at the row level into distinct placements mapped to different instance partitions, serving data locally by region while maintaining one unified database.

schema.hcl
table "Orders" {
  schema = schema.default
  column "Location" {
    null          = false
    type          = STRING(MAX)
    placement_key = true
  }
  primary_key {
    columns = [column.OrderId]
  }
}
placement "europeplacement" {
  instance_partition = "euro-partition"
  default_leader     = "europe-west1"
}

- Locality Groups: Control whether data uses high-performance SSD storage, standard HDD storage, or flexible age-based policies that transition data from SSD to HDD over time.

schema.hcl
table "t1" {
  schema         = schema.default
  locality_group = locality.hotspot_group
}
locality "hotspot_group" {
  storage                   = "ssd"
  ssd_to_hdd_spill_timespan = "24h"
}

- Sequences: Generate a series of unique integer values to use as a column's default value (e.g., ID numbers).

schema.hcl
table "Orders" {
  schema = schema.default
  column "OrderId" {
    null    = false
    type    = INT64
    default = sql("GET_NEXT_SEQUENCE_VALUE(SEQUENCE orders_sequence)")
  }
  primary_key {
    columns = [column.OrderId]
  }
}
sequence "orders_sequence" {
  kind = bit_reversed_positive
}

- Change Streams: Capture and stream data changes (e.g., inserts, updates, and deletes) from a database in near real-time.

schema.hcl
stream "OrdersStream" {
  tables = [table.Orders]
  options = {
    retention_period   = "7d"
    value_capture_type = "NEW_ROW"
  }
}
Read more about using Atlas's Google Spanner driver in our guides and docs.
Expanded Analyzer Detection
Atlas contains a number of analyzers and checkers that automatically detect destructive changes, backward-incompatible changes, and much more. In v0.38, these have been expanded to flag nested transactions in migration files and to detect SQL constructs that can leave schemas or migrations vulnerable to untrusted input.
Transactions in Migration Files
Atlas wraps all migration files in individual transactions by default, which can cause errors when manually-written migration files contain their own transaction statements (for example, START TRANSACTION).

The nestedtx analyzer flags these statements, allowing you to handle the overlap by either removing the manual transaction control or adding -- atlas:txmode none to the file header to indicate that this migration file should not be wrapped in a transaction by Atlas.
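For instance, a hand-written migration file that manages its own transaction can opt out of Atlas's automatic wrapping with that directive. A minimal sketch (the table and statements here are hypothetical):

```sql
-- atlas:txmode none

-- This file controls its own transaction, so Atlas should not wrap it.
START TRANSACTION;
ALTER TABLE users ADD COLUMN nickname VARCHAR(255);
UPDATE users SET nickname = name WHERE nickname IS NULL;
COMMIT;
```

Without the header, the nestedtx analyzer would flag the START TRANSACTION/COMMIT statements as conflicting with the transaction Atlas opens around the file.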
SQL Injection Analyzer
The SQL injection analyzer scans migration files and declarative schemas for potentially vulnerable SQL constructs. It detects unsafe dynamic SQL patterns like string concatenation and variable interpolation that could allow untrusted input to be executed.
With the increasing use of automated tools and AI assistance in schema development, this analyzer helps catch unintentional security risks early in the development cycle with two checks:
- Use of EXEC/EXECUTE statements with string concatenation or variable interpolation
- Dynamic SQL construction using improper string operations
This analyzer is currently only supported on the PostgreSQL driver, with plans to expand to MySQL and SQL Server in the next release.
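As an illustration, consider a hypothetical PostgreSQL migration that defines a function building a dynamic statement by concatenating its parameters, the kind of pattern this analyzer is designed to flag, alongside a safer parameterized alternative:

```sql
-- Unsafe: parameters are concatenated directly into the dynamic statement,
-- so untrusted input could alter the SQL that gets executed.
CREATE FUNCTION purge_rows(tbl text, uid int) RETURNS void AS $$
BEGIN
  EXECUTE 'DELETE FROM ' || tbl || ' WHERE user_id = ' || uid;
END;
$$ LANGUAGE plpgsql;

-- Safer: quote the identifier with format()/%I and pass values via USING.
CREATE FUNCTION purge_rows_safe(tbl text, uid int) RETURNS void AS $$
BEGIN
  EXECUTE format('DELETE FROM %I WHERE user_id = $1', tbl) USING uid;
END;
$$ LANGUAGE plpgsql;
```

The function names and table parameter are illustrative; the point is the contrast between raw string concatenation and PostgreSQL's format()/USING mechanisms for building dynamic SQL.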
Example output
Analyzing changes from version 1 to 2 (1 migration in total):
-- analyzing version 2
-- possible SQL injection vulnerabilities detected:
-- L11: expression concatenates parameter(s) into dynamic statement
https://atlasgo.io/lint/analyzers#SA101
-- L25: function call propagates parameter into dynamic statement without sanitization
https://atlasgo.io/lint/analyzers#SA101
-- ok (234.583µs)
-------------------------
-- 500.5µs
-- 1 version with warnings
-- 11 schema changes
-- 6 diagnostics
