Atlas v0.38
Hi everyone!
We're excited to share the release of [Atlas v0.38], packed with new features and enhancements for you to try:
- [Oracle Triggers and Views] - We've expanded our support for Oracle schemas to include triggers and views.
- [Google Spanner Additions] - Spanner users can now manage geo-partitioning placements and locality groups with Atlas.
- [Snowflake Additions] - Our library of supported Snowflake resources has grown with the addition of stages, external tables, hybrid tables, and dynamic tables.
- [Expanded Analyzer Detection] - Our linting analyzers now detect SQL injection patterns in SQL schema and migration files, as well as incorrect usage of transactions in migration files.
- [HTTP Data Source] - HTTP endpoints can now be used as data sources in the Atlas configuration file.
- [PII Detection] - Objects containing potentially sensitive data can now be automatically or manually tagged in the Atlas Registry.
- [Pre-/Post-migration Hooks] - Run custom hooks before and after your migrations.
- [Atlas Monitoring] - Atlas can now discover and monitor RDS instances across multiple AWS accounts by assuming an IAM role.
- [Azure DevOps Repos CI/CD Integration] - Atlas now provides native integration with Azure DevOps Pipelines and Azure Repos, including a dedicated Azure DevOps extension for seamless database schema CI/CD workflows.
Oracle Support Additions
We've expanded our Oracle driver to include support for triggers and views in your schema management.
Example
-- [...schema truncated for space]

-- Trigger that records any salary change
CREATE TRIGGER TRG_UPDATE_SALARY_AUDIT
AFTER UPDATE OF SALARY ON EMPLOYEES
FOR EACH ROW
WHEN (OLD.SALARY != NEW.SALARY)
BEGIN
  INSERT INTO SALARY_AUDIT (EMP_ID, OLD_SALARY, NEW_SALARY, CHANGE_DATE)
  VALUES (:OLD.EMP_ID, :OLD.SALARY, :NEW.SALARY, SYSDATE);
END;

-- View that shows employees with department names and computed annual salary
CREATE VIEW EMPLOYEE_INFO_VIEW AS
SELECT
  e.EMP_ID,
  e.FIRST_NAME || ' ' || e.LAST_NAME AS FULL_NAME,
  d.DEPT_NAME,
  e.SALARY,
  (e.SALARY * 12) AS ANNUAL_SALARY,
  e.HIRE_DATE
FROM EMPLOYEES e
JOIN DEPARTMENTS d ON e.DEPT_ID = d.DEPT_ID;
Google Spanner Support Additions
Atlas now supports Google Cloud Spanner's geo-partitioning placements, which allow you to partition logical tables at the row level into distinct placements, each mapped to a different instance partition. This lets you serve data locally by region while maintaining one unified database, reducing latency for users worldwide and keeping your data architecture simple.
This release also adds support for Spanner locality groups: named storage-tier configurations assigned to tables, letting you control whether their data uses high-performance SSD storage, standard HDD storage, or an age-based policy that automatically transitions data from SSD to HDD over time.
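For context, here is a minimal sketch of the kind of Spanner DDL behind such an age-based policy. The statement and option names below are assumptions based on our reading of Spanner's tiered-storage documentation, not Atlas output, so verify them against the Spanner docs:

-- Assumed Spanner GoogleSQL DDL: a locality group that keeps data on SSD for
-- 30 days and then automatically spills it to HDD.
CREATE LOCALITY GROUP archive_after_30d
  OPTIONS (storage = 'ssd', ssd_to_hdd_spill_timespan = '30d');

Tables are then assigned to the group, and Atlas lets you manage those assignments as part of your schema.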
Snowflake Support Additions
Atlas v0.38 introduces expanded support for Snowflake schema resources, making it easier to manage modern data architectures directly in your Atlas configuration.
Newly supported resources include:
- Stages: Define external storage locations for loading and unloading data.

  stage "mystage" {
    schema              = schema.public
    url                 = "s3://mybucket/data/"
    storage_integration = "my_integration"
    directory_enabled   = true
    comment             = "This is my stage"
  }

- External Tables: Reference data stored outside Snowflake, such as in S3, and query it as if it were a regular table.

  external_table "events" {
    schema = schema.public
    column "event_id" {
      type = VARCHAR(255)
      as {
        expr = "value:c1::varchar"
      }
    }
    location {
      stage = stage.mystage
      path  = "data/events/"
    }
  }

- Hybrid Tables: Combine transactional and analytical workloads in a single table, supporting flexible data operations.

  hybrid_table "users" {
    schema = schema.public
    column "id" {
      comment = "User ID"
      type    = VARCHAR(255)
    }
    primary_key {
      columns = ["id"]
    }
    ...
  }

- Dynamic Tables: Define tables that automatically refresh based on upstream changes, supporting incremental or full refresh modes.

  dynamic_table "users" {
    schema = schema.public
    column "id" {
      comment = "User ID"
    }
    target_lag   = DOWNSTREAM  // or "<num> { seconds | minutes | hours | days }"
    refresh_mode = INCREMENTAL // or FULL
    as           = "SELECT * FROM users"
    ...
  }
Read more about implementing these resources in the Atlas configuration documentation.
Expanded Analyzer Detection
Atlas contains a number of analyzers and checkers that automatically detect destructive changes, policy non-compliance, and much more. In this release, they have been expanded to detect SQL injection patterns that can leave schemas or migrations vulnerable to untrusted input, and to flag nested transactions in migration files.
SQL Injection Analyzer
The SQL Injection analyzer scans migration files and declarative schemas for potentially vulnerable SQL constructs. It detects unsafe dynamic SQL patterns like string concatenation and variable interpolation that could allow untrusted input to be executed.
With the increasing use of automated tools and AI assistance in schema development, this analyzer helps catch unintentionally introduced security risks early in the development cycle with two checks:
- Use of EXEC/EXECUTE statements with string concatenation or variable interpolation
- Dynamic SQL construction using improper string operations
This analyzer is currently only supported on the PostgreSQL driver, with plans to expand to MySQL and SQL Server in the next release.
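As an illustration of the first check, here is a hedged PostgreSQL sketch (the users table and find_user functions are hypothetical, used only to show the pattern): the first function builds its query by concatenating an input value into the SQL text, which is the kind of construct the analyzer flags, while the second passes the value as a bind parameter instead.

-- Hypothetical example of the unsafe pattern: the input value is concatenated
-- directly into the query text, so untrusted input can become executable SQL.
CREATE FUNCTION find_user(p_name text) RETURNS SETOF users AS $$
BEGIN
  RETURN QUERY EXECUTE
    'SELECT * FROM users WHERE name = ''' || p_name || '''';
END;
$$ LANGUAGE plpgsql;

-- Safer alternative: pass the value as a bind parameter with USING, so it is
-- always treated as data rather than SQL text.
CREATE FUNCTION find_user_safe(p_name text) RETURNS SETOF users AS $$
BEGIN
  RETURN QUERY EXECUTE 'SELECT * FROM users WHERE name = $1' USING p_name;
END;
$$ LANGUAGE plpgsql;

Bind parameters keep untrusted input in the data plane, so it can never be parsed as part of the SQL statement itself.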
Transactions in Migration Files
Atlas wraps each migration file in its own transaction by default, which can cause errors when manually written migration files contain their own transaction statements. The nestedtx analyzer flags these statements, allowing you to handle the overlap either by removing the manual transaction control or by adding -- atlas:txmode none to the file header to indicate that Atlas should not wrap the file in a transaction.
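For illustration, here is a sketch of the second option (the table and columns are hypothetical): the directive in the file header tells Atlas not to add its own transaction, so the file's explicit BEGIN/COMMIT no longer nests.

-- atlas:txmode none

-- This migration manages its own transaction, so Atlas is told not to wrap it.
BEGIN;
ALTER TABLE users ADD COLUMN nickname text;
UPDATE users SET nickname = name WHERE nickname IS NULL;
COMMIT;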
HTTP Data Source
Atlas now supports using an HTTP endpoint as a data source, allowing you to fetch data from
internal and external services and use it dynamically in your project configuration. It can perform GET, HEAD, or POST requests,
optionally with headers, a request body, timeouts, TLS certificates, and retry logic.
After the request runs, Atlas exposes attributes like the response body, headers, status code, and URL, which can be referenced in your configuration (e.g., pulling a secret or setting an environment variable based on an API response).
Example
data "http" "example" {
url = "https://service.example.com/search?q=atlas"
method = "GET"
request_headers = {
"Accept" = "application/json"
}
request_timeout_ms = 5000
retry {
attempts = 2
min_delay_ms = 100
max_delay_ms = 1000
}
}
env "dev" {
src = "file://schema.my.hcl"
url = "mysql://root:${urlescape(jsondecode(data.http.example.response_body).password)}@host:3306/database"
}
PII Visibility and Detection
Objects containing potentially sensitive data can now be automatically or manually tagged in the Atlas Registry, making it easier to see where PII lives across your schemas.
Pre- and Post-migration Hooks
Atlas Schema Monitoring
In our last release, we added the ability to automatically discover database instances for monitoring using the Atlas Agent, rather than manually creating them in the Atlas Cloud.
Expanding on this feature, we have now added the ability to discover and monitor RDS instances across multiple AWS (sub-)accounts by assuming an IAM role if a trusted relationship is established. Read more about this in our docs.
Azure DevOps Repos CI/CD Integration
Atlas now provides first-class support for Azure DevOps, bringing professional database CI/CD workflows to teams using Azure Repos and Azure Pipelines. This integration includes a dedicated Azure DevOps extension that simplifies the setup and execution of schema migration workflows.
Getting Started with Azure DevOps
Setting up Atlas with Azure DevOps takes just a few steps. Here's a quick example of what the pipeline configuration looks like:
trigger:
  branches:
    include:
      - main
  paths:
    include:
      - 'migrations/*'
      - 'azure-pipelines.yml'

pr:
  branches:
    include:
      - main
  paths:
    include:
      - 'migrations/*'

pool:
  vmImage: ubuntu-latest

variables:
  - group: atlas-vars

steps:
  - checkout: self
    persistCredentials: true
    fetchDepth: 0
    fetchTags: true

  - script: |
      echo "Configuring git user for commits..."
      git config user.email "azure-pipelines[bot]@users.noreply.github.com"
      git config user.name "azure-pipelines[bot]"
    displayName: 'Configure Git User for Commits'

  - script: curl -sSf https://atlasgo.sh | sh
    displayName: Install Atlas

  - script: atlas version
    displayName: Atlas Version

  - script: atlas login --token $(ATLAS_TOKEN)
    displayName: Atlas Login

  # Lint migrations on pull requests
  - task: AtlasAction@1
    condition: eq(variables['Build.Reason'], 'PullRequest')
    env:
      SYSTEM_ACCESSTOKEN: $(System.AccessToken)
    inputs:
      action: 'migrate lint'
      dir: 'file://migrations'
      dir_name: 'app-schema-2'
      config: 'file://atlas.hcl'
      env: 'ci'
    displayName: Lint Migrations

  # Push migrations to Atlas Cloud on main branch
  - task: AtlasAction@1
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
    inputs:
      action: 'migrate push'
      dir: 'file://migrations'
      dir_name: 'app-schema-2'
      latest: true
      env: 'ci'
    displayName: Push Migrations

  # Apply migrations to database on main branch
  - task: AtlasAction@1
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
    inputs:
      action: 'migrate apply'
      dir: 'file://migrations'
      url: $(DB_URL)
    displayName: Apply Migrations
This pipeline automatically:
- Lints new migrations when you open a pull request, surfacing potential issues directly in your PR,
- Pushes migration directories to Atlas Cloud's Schema Registry when changes merge to main, and
- Applies pending migrations to your target database with the connection string from your pipeline variables.
For more information on how to set up Atlas with Azure DevOps, see the Azure DevOps Repos CI/CD Integration guide.
Wrapping Up
Atlas v0.38 adds support for additional database-specific features and introduces several security and workflow improvements.
This release expands database platform coverage with Oracle triggers and views, Google Spanner geo-partitioning placements and locality groups, and stages, external tables, hybrid tables, and dynamic tables for Snowflake.
New security analyzers detect SQL injection patterns and transaction issues in migration files. HTTP data sources allow dynamic configuration from external APIs, while PII detection helps with data governance.
Additional features include pre- and post-migration hooks, schema monitoring capabilities, and Azure DevOps integration with a marketplace extension.
We hope you enjoy these new features and improvements. As always, we would love to hear your feedback and suggestions on our Discord server.
