Announcing v0.13.0: New binary, support for SQL Views, MS-SQL Driver beta program and more

· 12 min read
Rotem Tamir
Building Atlas

It's been just over two months since our last version announcement and today I'm super excited to share with you our latest release, v0.13, which includes some long-anticipated additions to Atlas:

  • Atlas and Atlas Community Editions - we are making a change in the way Atlas is distributed. Starting today, we are making many of the features previously available only in the commercial version of Atlas available to everyone for free. More on this below.

  • Support for SQL Views - SQL Views are virtual tables in a database created by a query and stored in the database. Starting today, you can manage views in your database using the same Atlas workflows you use for tables, indexes, foreign keys, etc.

  • Microsoft SQL Server Driver Beta - Over the last few months, we have been working on a driver for MS-SQL with some of our enterprise design partners and are happy to make it available to you as a beta.

  • Built-in schema visualization - Over the past year we released some tools to help developers visualize their database schemas using Entity Relationship Diagrams (ERDs). Starting today, you can use the atlas schema inspect --visualize command to create database ERDs directly from the command line.

  • GitHub Action for Deployments - We recently received a request from the community to add a GitHub Action to streamline applying migrations directly from GitHub workflows. We thought this was a great idea and are happy to introduce ariga/atlas-deploy-action which you can use to deploy your migrations today.

  • Type checking for schema files - As part of our continuous effort to make the experience of working with Atlas as smooth as possible, we are excited to ship a new engine for running type checks on your schema files.

This release is extra-packed with new features and improvements, so let's dive in!

Atlas and Atlas Community Editions

In recent months, based on strategic discussions with our advisors, we have come to realize that in order to keep fueling Atlas's rapid growth and adoption, we should give free access to some features previously available only in the commercial version. For this reason, we are making a change in the way Atlas is distributed. Here is a summary of how this is going to work:

The Atlas codebase will continue to be actively developed in our open-source repository (ariga/atlas) under the Apache 2 License. This version of Atlas will continue to be actively maintained and supported by us in community channels (GitHub, Discord, etc). We will continue to build and ship versions of this purely open-source version under the name "Atlas Community". For more information about obtaining Atlas Community binaries, see the docs.

In addition, we are starting to distribute for free (under the Atlas EULA) the binaries for our commercial product which will include features previously unavailable to the public. We plan to release many of these features to the open-source repositories over time to keep supporting more open-source innovation in the database space. To keep things short and sweet, this distribution of the project will simply be called "Atlas".

Starting today, our existing distribution channels (the atlasgo.sh installation script, Docker image, brew tap, etc.) will point to this edition, and we are adding parallel installation options for users that prefer to use the community edition.

Trying out the new features

Before we dive into the new features, let's make sure you have the latest version of Atlas installed. To do so, run the following command:

curl -sSf https://atlasgo.sh | sh

For more installation options, see the docs.

In addition, two of the following features (the MS-SQL driver and schema inspect --visualize) require you to log in to Atlas Cloud from your CLI. To do so, run the following command:

atlas login

The CLI will ask your permission to open a browser window to complete the login process.

After completing the login process, you should see a message similar to the following:

You are now connected to "acme-corp" on Atlas Cloud.

Support for Views

"Views" are virtual tables created by a query and stored in the database. Many teams use them to simplify complex queries, encapsulate logic, and present a consistent interface to users, abstracting the underlying data structures. Starting today, you can start managing views in your database using the same Atlas workflows you use for tables, indexes, foreign keys, etc. Views are currently supported for MySQL, MariaDB, PostgreSQL, MS-SQL, and SQLite.

To demonstrate how this works, let's start by creating a new database. For this example, let's run a local MySQL database using Docker:

docker run --rm -d --name atlas-demo -p 3306:3306 -e MYSQL_ROOT_PASSWORD=pass -e MYSQL_DATABASE=default mysql:8

Next, create a new file named schema.hcl with the following content:

schema "default" {}

table "users" {
schema = schema.default
column "id" {
type = int
}
column "nickname" {
type = varchar(32)
}
column "phone_number" {
type = varchar(32)
}
}

view "users_clean" {
schema = schema.default
column "id" {
type = int
}
column "nickname" {
type = varchar(32)
}
as = <<-SQL
SELECT id, nickname
FROM ${table.users.name}
SQL
depends_on = [table.users]
}

In this example, we first defined a table named users that contains some PII (the user's phone number). Next, we defined a view named users_clean that exposes only the user's id and nickname. This view can be used by other applications to access the user's data without exposing any personally identifiable information.

Next, run the following command to apply the schema to the database:

atlas schema apply -u mysql://root:pass@localhost:3306/default -f schema.hcl

Atlas will connect to the database, inspect its current schema (which is empty) and produce the following output:

-- Planned Changes:
-- Create "users" table
CREATE TABLE `default`.`users` (`id` int NOT NULL, `nickname` varchar(32) NOT NULL, `phone_number` varchar(32) NOT NULL);
-- Create "users_clean" view
CREATE VIEW `default`.`users_clean` (`id`, `nickname`) AS SELECT id, nickname
FROM users;
Use the arrow keys to navigate: ↓ ↑ → ←
? Are you sure?:
▸ Apply
Abort

After reviewing the changes, press enter to apply them to the database. Atlas will connect to the database and apply the changes.

Next, run the following command to inspect the database schema:

atlas schema inspect -u mysql://root:pass@localhost:3306/default --format "{{ sql . \" \" }}"

Notice that we used the --format flag to instruct Atlas to output the schema as SQL statements.

Atlas will connect to the database and produce the following output:

-- Create "users" table
CREATE TABLE `users` (
`id` int NOT NULL,
`nickname` varchar(32) NOT NULL,
`phone_number` varchar(32) NOT NULL
) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
-- Create "users_clean" view
CREATE VIEW `users_clean` (
`id`,
`nickname`
) AS select `users`.`id` AS `id`,`users`.`nickname` AS `nickname` from `users`;

As you can see, Atlas created the users table and the users_clean view in the database!
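
To double-check that the view hides the PII column, you can query its structure directly from the MySQL container started above (a quick sanity check; atlas-demo is the container name from the docker run command):

docker exec -it atlas-demo mysql -uroot -ppass default -e "DESCRIBE users_clean"

The output should list only the id and nickname columns, confirming that phone_number is not exposed through the view.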

Microsoft SQL Server Driver

Microsoft SQL Server, one of the longest-standing database engines in our business, was first released by Microsoft in 1989. MS-SQL is the go-to database for Windows environments in many industries. Over the last few months, we have been working on a driver for MS-SQL with some of our enterprise design partners and are happy to make it available to you as a beta feature.

To get started with MS-SQL, you must first opt-in to its beta testing program. To do so:

  1. Log in to your Atlas Cloud account using the instructions above.
  2. Head over to your Atlas Cloud account.
  3. Click on your profile picture in the bottom left corner to open the user menu and select the "Personal Settings" option.
  4. Select the "Microsoft SQL Server" checkbox from the driver list and click on the "Save" button to save your changes.

Next, run the following command to start a local MS-SQL server using Docker:

docker run --rm -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=P@ssw0rd0995' -p 1433:1433 -d mcr.microsoft.com/mssql/server:latest

Notice that by providing the ACCEPT_EULA environment variable you confirm your acceptance of Microsoft's End-User License Agreement.

Create a new file named schema.hcl with the following content:

schema "dbo" {
}
table "users" {
schema = schema.dbo
column "id" {
type = int
}
column "nickname" {
type = varchar(32)
}
}

With this file ready, run the following command to apply the schema to the database:

atlas schema apply \
--url "sqlserver://sa:P@ssw0rd0995@localhost:1433?database=master" \
--to "file://schema.hcl"

Atlas will connect to the database, inspect its current schema (which is empty) and produce the following output:

-- Planned Changes:
-- Create "users" table
CREATE TABLE [dbo].[users] ([id] int NOT NULL, [nickname] varchar(32) NOT NULL);
Use the arrow keys to navigate: ↓ ↑ → ←
? Are you sure?:
▸ Apply
Abort

After reviewing the changes, press enter to apply them to the database. Atlas will connect to the database and apply the changes.

Next, run the following command to inspect the database:

atlas schema inspect \
--url "sqlserver://sa:P@ssw0rd0995@localhost:1433?database=master"

Atlas will connect to the database and produce the following output:

table "users" {
schema = schema.dbo
column "id" {
null = false
type = int
}
column "nickname" {
null = false
type = varchar(32)
}
}
schema "dbo" {
}

Great! Atlas created the users table in the database!
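
As with the MySQL example earlier, you can ask Atlas to print the inspected schema as SQL statements instead of HCL by adding the --format flag. This is an illustrative variation of the command above, reusing the same format expression shown in the Views section:

atlas schema inspect \
  --url "sqlserver://sa:P@ssw0rd0995@localhost:1433?database=master" \
  --format "{{ sql . \" \" }}"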

Do you use MS-SQL? We would love to hear feedback about your experience using this driver. Feel free to give it a try and reach out on our Discord channel to let us know what you think.

Built-in Schema Visualization

Over the past year we have released some tools to help developers visualize their database schemas using Entity Relationship Diagrams (ERDs). The success of these experiments (especially for Ent and Django) convinced us that we should provide this functionality natively in Atlas. So starting today, you can use the atlas schema inspect --visualize and atlas schema diff --visualize commands to create database ERDs directly from the command line.

To get started, create a new schema file named schema.hcl:

schema "default" {}

table "t1" {
schema = schema.default
column "id" {
type = int
}
primary_key {
columns = [column.id]
}
}

table "t2" {
schema = schema.default
column "id" {
type = int
}
column "t1_id" {
type = int
}
foreign_key "t1_id" {
columns = [column.t1_id]
ref_columns = [table.t1.column.id]
}
}

Next, run the following command to generate the ERD:

atlas schema inspect --dev-url docker://mysql/8/d --url file://schema.hcl --visualize

Atlas will prompt you to choose where you would like to share the ERD:

Use the arrow keys to navigate: ↓ ↑ → ←
? Where would you like to share your schema visualization?:
▸ Publicly (gh.atlasgo.cloud)
Privately (acme-corp.atlasgo.cloud)

After selecting the desired option, Atlas will generate the ERD and print out the URL where you can access it:

https://gh.atlasgo.cloud/explore/cdd7bee5
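
The same flow also works for visualizing the difference between two schema states. As a sketch, assuming the --visualize flag composes with the usual --from and --to flags of atlas schema diff (the URLs below are placeholders):

atlas schema diff \
  --from "mysql://root:pass@localhost:3306/default" \
  --to "file://schema.hcl" \
  --dev-url "docker://mysql/8/d" \
  --visualize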

ariga/atlas-deploy GitHub Action

GitHub Actions is a powerful tool that allows developers to automate workflows around GitHub repositories. We have previously published GitHub Actions for running CI for schema changes and for syncing your migration directory with Atlas Cloud.

Today, we are happy to announce the release of the ariga/atlas-deploy-action to the GitHub Actions Marketplace. This action allows you to run migrations directly from your GitHub Actions workflow. To get started, add the following step to your workflow:

name: Deploy Database Migrations
on:
  push:
    branches:
      - master
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Deploy Atlas Migrations
        uses: ariga/atlas-deploy-action@v0
        with:
          url: ${{ secrets.DATABASE_URL }}
          dir: path/to/migrations

Type checking for schema files

I firmly believe that great developer experiences aren't created by any single design decision but are rather the result of a relentless commitment to improving things little by little over long periods of time. As we work with Atlas as users every day, we constantly look for points of friction and frustration and try to find ways to make them better.

Sometimes, these changes are small and simple, like improving the help text of a command or updating a documentation page. Other times, they are more complex, and require building more fundamental infrastructure.

I give all of this as an intro to a feature that I am very excited about: type checking for the HCL schema. Starting with this release, we are shipping a type checker engine that will give you more precise and actionable feedback when you make mistakes in your schema files.

We have come as an industry to appreciate that a fast feedback loop is one of the most prominent factors in developer productivity, and it all starts in the local development environment.

Type checking validates the contents of your schema files and provides you with actionable feedback in case you make a mistake. For example, suppose you define a table like this (try spotting the errors yourself before reading on):

schema.hcl
table "users" {
column "id" {
type = int
}
column "name" {
type = varchar(255)
}
column "manager_id" {
type = int
}
primray_key {
columns = [
column.id
]
}
index "idx_name" {
coulmns = [
column.name
]
unique = true
}
foreign_key {
columns = [column.manager_id]
ref_columns = [column.id]
}
}

Trying to apply this schema will produce the following error:

Error: failed parsing mysql schema files: schema.hcl
schema.hcl:1,1-6: missing required attribute schema in block table
schema.hcl:11,3-14: unknown block type primray_key in block of type table (Did you mean primary_key?)
schema.hcl:17,5-19,6: unknown attribute coulmns in block of type index (Did you mean columns?)
schema.hcl:22,3-14: foreign_key block must have exactly 1 label

This type checking engine is available on the Atlas CLI and will be integrated into our editor plugins for VSCode and (the upcoming) JetBrains IDEs.

Wrapping up

This is it! I know this was a long post, but we are very excited about all of these new features and we hope you will find them useful. As always, we would love to hear your feedback and suggestions on our Discord server.

Bridging the gap between IaC and Schema Management

· 11 min read
Rotem Tamir
Building Atlas

Introduction

When we started building Atlas a couple of years ago, we noticed that there was a substantial gap between what was then considered state-of-the-art in managing database schemas and the recent strides from Infrastructure-as-Code (IaC) to managing cloud infrastructure.

In this post, we review that gap and show how Atlas – along with its Terraform provider – can bridge the two domains.

As an aside, I usually try to keep blog posts practical and to the point, but occasionally I think it's worth zooming out and explaining the grander ideas behind what we do.

If you’re looking for a quick and practical explanation of working with Atlas and Terraform, I recommend this YouTube video.

Why Infrastructure-as-Code

Infrastructure as Code (IaC) refers to the practice of managing and provisioning infrastructure through machine-readable configuration files, instead of utilizing traditional interactive configuration tools. This approach makes for automated, consistent, and repeatable deployment of environments that are faster and less error-prone than previous, more manual approaches.

Terraform, a popular open-source tool created by HashiCorp, is the most prominent implementation of the IaC concept. With Terraform, organizations can describe the desired state of their infrastructure in a simple configuration language (HCL) and let Terraform plan and apply these changes in an automated way.
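
In practice, the entire loop is driven by a handful of commands. The standard Terraform workflow looks roughly like this:

terraform init    # download providers and set up the working directory
terraform plan    # inspect the current state and compute the diff against the desired state
terraform apply   # execute the plan to reach the desired state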

Terraform (and IaC in general) has taken the software engineering world by storm in recent years. As someone who had the dubious pleasure of managing complex cloud infrastructure manually, using what is today jokingly called "ClickOps", I can mention a few properties of IaC that I believe contributed to this success:

  • Declarative – Terraform is built on a declarative workflow, which means that users only define the final (desired) state of their system. Terraform is responsible for inspecting the target environment, calculating the difference between the current and desired states, and building a plan for reconciling between those two states.

    Cloud infrastructures are becoming increasingly complex, comprising thousands of different, interconnected components. Declarative workflows greatly reduce the mental overhead of planning changes to such environments.

  • Automated – Many engineers can attest that manually provisioning a new environment used to take days, even weeks! Once Terraform generates a plan for changing environments, the process runs automatically and finishes in a matter of minutes.

  • Holistic – With Terraform, it is possible to capture all of the resources and configurations required to provision an application as one interconnected and formally defined dependency graph. Deployments become truly reproducible and automated, with no dangling or manually provisioned dependencies.

  • Self-healing – Finally, these three properties converge to support a self-healing tool that can detect and fix drift on its own. Whenever drift occurs, it is only a matter of re-running Terraform to shift from the current state back to the desired one.

Comparing IaC with Schema Management Tools

Next, let’s discuss the current state of database schema management tools (often called schema migration tools) by contrasting them with the properties of IaC.

  • Imperative – If Terraform embodies the declarative approach, then schema management tools often exemplify the opposite, imperative (or revision-based) approach. In this case, we don't provide the tools with the what (the desired state of the database), but the how (what SQL commands need to run to migrate the database from the previous version to the next); a typical migration directory of this kind is sketched after this list.

  • Semi-automated – Migration tools were revolutionary when they came out a decade ago. One idea stood as one of the harbingers of the GitOps philosophy: that database changes should not be applied manually but first checked into source control and then applied automatically by a tool.

    Today’s migration tools automate two aspects of schema management: 1) execution and 2) tracking which migrations were already executed on a target database.

    Compared to modern IaC tools, however, they are fairly manual. In other words, they leave the responsibility of planning and verifying the safety of changes to the user.

  • Fragmented – As we described above, one of the most pleasant aspects of adopting the IaC mindset is having a unified, holistic description of your infrastructure, to the point where you can entirely provision it from a single terraform apply command.

    For database schema management, common practices are anything but holistic. In some cases, provisioning the schema might happen 1) when application servers boot, before starting the application, or 2) while it runs as an init container on Kubernetes.

    In fact, some places (yes, even established companies) still have developers manually connect (with root credentials) to the production database to execute schema changes!

  • A pain to fix – When a migration deployment fails, many schema management tools will actually get in your way. Instead of worrying about fixing the issue at hand, you now need to worry about both your database and the way your migration tool sees it (which have now diverged).
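
To make the contrast concrete, here is a sketch of the revision-based layout mentioned in the "Imperative" bullet above: a directory of hand-written SQL scripts applied in order (the file names are purely illustrative):

migrations/
├── 20230901120000_create_users.sql
├── 20230908093000_add_email_to_users.sql
└── 20230915110000_create_orders.sql

Each file contains the SQL needed to move the schema from one revision to the next, and the tool only tracks which of these files have already been applied.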

Bridging the Gap

After describing the gap between IaC and database schema management in more detail, let’s delve into what it would take to bridge it. Our goal is to have schema management become an integral part of your day-to-day IaC pipeline so that you can enjoy all the positive properties we described above.

To integrate schema change management and IaC, we would need to solve two things:

  1. A diffing engine capable of supporting declarative migration workflows, which means it must be capable of:
    • Loading the desired schema of the database in some form
    • Inspecting the current schema of the database
    • Calculating a safe migration plan automatically
  2. A Terraform Provider that wraps the engine as a Terraform resource, which can then seamlessly integrate into your overall application infrastructure configuration.

How Atlas drives Declarative Migrations

Atlas is a language-agnostic tool for managing and migrating database schemas using modern DevOps principles. It is different from Terraform in many ways, but similar enough to have received the informal nickname "Terraform for Databases".

At its core lie three capabilities that make it ideal to apply a declarative workflow to schema management:

  1. Schema loaders
  2. Schema inspection
  3. Diffing and planning

Let’s discuss each of these capabilities in more detail.

Schema loaders

Every declarative workflow begins with the desired state - what we want the system to look like. Using a mechanism called "schema loaders" Atlas users can provide the desired schema in many ways. For example:

Plain SQL

Atlas users can describe the desired schema of the database using plain SQL DDL statements such as:

CREATE TABLE users (
  id int primary key,
  name varchar(255)
)

Atlas HCL

Alternatively, users can use Atlas HCL, a configuration language that shares Terraform’s configuration language foundations:

table "users" {
schema = schema.public
column "id" {
type = int
}
column "name" {
type = varchar(255)
}
column "manager_id" {
type = int
}
primary_key {
columns = [
column.id
]
}
}

A live database

In addition, users can provide Atlas with a connection to an existing database which in turn Atlas can inspect and use as the desired state of the database.
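
For example, a minimal sketch of this loader, assuming the desired state is read from another database URL passed to --to (both URLs are placeholders):

atlas schema apply \
  --url "mysql://root:pass@target-host:3306/app" \
  --to "mysql://root:pass@reference-host:3306/app"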

External Schemas (ORM)

Finally, Atlas has an easily extensible design which makes writing plugins to load schemas from external sources a breeze. For example, Atlas can read the desired schema of the database directly from your ORM, using a simple integration.

Schema inspection

Once Atlas understands the desired state of the database, it needs to inspect the existing database to understand its current schema. This is done by connecting to the target database and querying the database’s information schema to construct a schema graph (an in-memory representation of all the components in the database and their connections).
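
This inspection step is also exposed directly in the CLI. For example, with an illustrative connection URL:

atlas schema inspect --url "mysql://root:pass@localhost:3306/example"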

Diffing and planning

The next phase involves calculating the difference ("diffing") between the desired and current states and calculating an execution plan to reconcile this difference. Because resources are often interconnected, Atlas must create a sensible order of execution using algorithms such as Topological Sort to ensure, for example, that dependencies on a resource are removed before it is dropped.
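
The diffing step can also be run on its own. A minimal sketch, assuming a target database and a desired schema file similar to the ones used later in this post:

atlas schema diff \
  --from "mysql://root:pass@localhost:3306/example" \
  --to "file://schema.sql" \
  --dev-url "docker://mysql/8/example"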

In addition, each database engine has its own peculiarities and limitations to take into account when creating an execution plan. For example, adding a default value to a column in a SQLite database must be performed in a multi-step plan that looks similar to this:

-- Planned Changes:
-- Create "new_users" table
CREATE TABLE `new_users` (`id` int NOT NULL, `greeting` text NOT NULL DEFAULT 'shalom')
-- Copy rows from old table "users" to new temporary table "new_users"
INSERT INTO `new_users` (`id`, `greeting`) SELECT `id`, IFNULL(`greeting`, 'shalom') AS `greeting` FROM `users`
-- Drop "users" table after copying rows
DROP TABLE `users`
-- Rename temporary table "new_users" to "users"
ALTER TABLE `new_users` RENAME TO `users`

Atlas in action

What does this workflow look like in practice? As you can see in Atlas's "Getting Started" guide, suppose we made a change to our desired schema that adds a new table named blog_posts (this change may be described in a plain SQL file, an HCL file or even in your ORM's data model).

To apply the desired schema on a target database you would use the schema apply command:

atlas schema apply \
-u "mysql://root:pass@localhost:3306/example" \
--to file://schema.sql \
--dev-url "docker://mysql/8/example"

After which Atlas will generate a plan:

-- Planned Changes:
-- Create "blog_posts" table
CREATE TABLE `example`.`blog_posts` (`id` int NOT NULL, `title` varchar(100) NULL, `body` text NULL, `author_id` int NULL, PRIMARY KEY (`id`), INDEX `author_id` (`author_id`), CONSTRAINT `author_fk` FOREIGN KEY (`author_id`) REFERENCES `example`.`users` (`id`))
Use the arrow keys to navigate: ↓ ↑ → ←
? Are you sure?:
▸ Apply
Abort

Observing this example, you may begin to understand how Atlas earned its nickname the "Terraform for Databases."

Integrating with Terraform

The second piece of bridging the gap is to create a Terraform Provider that wraps Atlas and allows users to define resources that represent the schema definition as part of your infrastructure.

Ariga (the company behind Atlas) is an official HashiCorp Tech Partner that publishes the Atlas Terraform Provider, which was created to solve this problem precisely.

Using the Atlas Terraform Provider, users can finally provision their database instance and its schema in one holistic definition. For example, suppose we provision a MySQL database using AWS RDS:

// Our RDS-based MySQL 8 instance.
resource "aws_db_instance" "atlas-demo" {
  identifier     = "atlas-demo"
  instance_class = "db.t3.micro"
  engine         = "mysql"
  engine_version = "8.0.28"
  // Some fields skipped for brevity
}

Next, we load the desired schema from an HCL file, using the Atlas Provider:

data "atlas_schema" "app" {
src = "file://${path.module}/schema.hcl"
}

Finally, we use the atlas_schema resource to apply our schema to the database:

// Apply the normalized schema to the RDS-managed database.
resource "atlas_schema" "hello" {
  hcl = data.atlas_schema.app.hcl
  url = "mysql://${aws_db_instance.atlas-demo.username}:${urlencode(random_password.password.result)}@${aws_db_instance.atlas-demo.endpoint}/"
}

You can find a full example here.

When we run terraform apply, this is what will happen:

  • Terraform will provision the RDS database using the AWS Provider
  • Terraform will use Atlas to inspect the existing schema of the database and load the desired state from a local HCL file.
  • Atlas will calculate a SQL plan for Terraform to reconcile the two.

And this is how it may look in the Terraform plan:

Terraform will perform the following actions:

  # atlas_schema.hello will be created
  + resource "atlas_schema" "hello" {
      + hcl = <<-EOT
            table "posts" {
              schema = schema.app
              column "id" {
                null = false
                type = int
              }
              column "user_id" {
                null = false
                type = int
              }
              column "title" {
                null = false
                type = varchar(255)
              }
              column "body" {
                null = false
                type = text
              }
              primary_key {
                columns = [column.id]
              }
              foreign_key "posts_ibfk_1" {
                columns     = [column.user_id]
                ref_columns = [table.users.column.id]
                on_update   = NO_ACTION
                on_delete   = CASCADE
              }
              index "user_id" {
                columns = [column.user_id]
              }
            }
            table "users" {
              schema = schema.app
              column "id" {
                null = false
                type = int
              }
              column "user_name" {
                null = false
                type = varchar(255)
              }
              column "email" {
                null = false
                type = varchar(255)
              }
              primary_key {
                columns = [column.id]
              }
            }
            schema "app" {
              charset = "utf8mb4"
              collate = "utf8mb4_0900_ai_ci"
            }
        EOT
      + id  = (known after apply)
      + url = (sensitive value)
    }

  # aws_db_instance.atlas-demo will be created
  + resource "aws_db_instance" "atlas-demo" {
      // .. redacted for brevity
    }

And that's how you bridge the gap between IaC and schema management!

Conclusion

In this blog post, we reviewed some exceptional properties of Infrastructure-as-Code tools, such as Terraform, that have led to their widespread adoption and success in the industry. We then reviewed the current state of a similar problem, database schema management, in contrast to these properties. Finally, we showcased Atlas’s ability to adapt some IaC principles into the domain of schema management and how we can unify the two domains using the Atlas Terraform Provider.

How can we make Atlas better?

We would love to hear from you on our Discord server ❤️.

Versioned migrations on Kubernetes with the Atlas Operator

· 7 min read
Rotem Tamir
Building Atlas
TL;DR

You can now use the Atlas Kubernetes Operator to run versioned migrations natively from your Kubernetes cluster.

See an example

Introduction

The Atlas Kubernetes Operator is a Kubernetes operator that enables you to manage your database schemas natively from your Kubernetes cluster. By exposing custom resource definitions (CRD) the operator extends the Kubernetes API to support database schema management.

In a previous blog post we demonstrated how to use the Atlas Operator for the declarative (state-based) workflow in which you define the desired state of your database schema in a Kubernetes manifest and the operator takes care of the rest.

State-based vs. versioned migrations is a common and unresolved debate in the database schema management world, and we built Atlas to support both from the get-go.

Today, we are happy to announce v0.2.0 of the Atlas Kubernetes Operator, which adds support for the versioned migration workflow.

In this blog post we will demonstrate how to use the Atlas Operator with this new workflow.

How it works

The Atlas Kubernetes Operator supports versioned migrations. In versioned migrations, the database schema is defined by a series of SQL scripts ("migrations") that are applied in order. The user can specify the version and migration directory to run, which can be located in Atlas Cloud or stored as a ConfigMap in your Kubernetes cluster.

In this workflow, after installing the Atlas Kubernetes Operator, the user defines the desired state of the database as an AtlasMigration resource, which connects a target database to a migration directory. The migration directory may be configured as a remote directory in Atlas Cloud or as a ConfigMap in your Kubernetes cluster.

The operator then reconciles the desired state with the actual state of the database by applying any pending migrations on the target database.

Demo time

In this demo we will use the Atlas Kubernetes Operator to run versioned migrations on a MySQL database.

Prerequisites

  1. A Kubernetes cluster - you can use Minikube to quickly spin up a local cluster.
  2. kubectl configured to connect to your cluster.
  3. Helm, the Kubernetes package manager, locally installed.
  4. The Atlas CLI tool, locally installed.

1. Install the Atlas Kubernetes Operator

The Atlas Kubernetes Operator is available as a Helm Chart. To install the chart with the release name atlas-operator:

helm install atlas-operator oci://ghcr.io/ariga/charts/atlas-operator

2. Install a database

Create a MySQL database and a secret with an Atlas URL to the database:

kubectl apply -f https://raw.githubusercontent.com/ariga/atlas-operator/65dce84761354d1766041c7f286b35cc24ffdddb/config/integration/databases/mysql.yaml

Result:

deployment.apps/mysql created
service/mysql created
secret/mysql-credentials created

In this example, we are using a plain MySQL pod as a database. In a real-world scenario, you would probably use a managed database service such as Amazon RDS or Google Cloud SQL.

3. Set up a migration directory

With the operator and the database running, let's set up the migration directory which we will use to manage our database schema.

You can use the directory from an existing project, but for the sake of this demo we will use the Atlas template repo which contains a simple migration directory.

git clone git@github.com:ariga/atlas-template.git versioned-demo

Observe that this directory contains a migrations directory with a couple of migration scripts:

cd versioned-demo
tree migrations
migrations
├── 20230316085611.sql
├── 20230316090502.sql
└── atlas.sum

4. Create a ConfigMap with the migration directory

The operator supports two ways to manage your migration directory:

  • Atlas Cloud - a cloud-based directory that is managed by Atlas.
  • ConfigMap - a Kubernetes resource that contains the migration directory files as key-value pairs.

In this demo we will use a ConfigMap to manage our migration directory. To create a ConfigMap with the migration directory files:

kubectl create configmap migrations --from-file=migrations -o yaml --dry-run=client --save-config > migrations.yaml

The above command creates a YAML file named migrations.yaml with the migration directory files. It should look something like this:

apiVersion: v1
data:
  20230316085611.sql: |
    -- Create "users" table
    CREATE TABLE `users` (
      `id` int NOT NULL,
      `user_name` varchar(255) NOT NULL,
      `email` varchar(255) NOT NULL,
      PRIMARY KEY (`id`)
    ) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
  20230316090502.sql: |
    -- Create "posts" table
    CREATE TABLE `posts` (
      `id` int NOT NULL,
      `user_id` int NOT NULL,
      `title` varchar(255) NOT NULL,
      `body` text NOT NULL,
      PRIMARY KEY (`id`),
      INDEX `user_id` (`user_id`),
      CONSTRAINT `posts_ibfk_1` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON UPDATE NO ACTION ON DELETE CASCADE
    ) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
  atlas.sum: |
    h1:XBXbh+rzLis8gknjlIqnxXLBkOZ+sN2v2p7KjyVFYYM=
    20230316085611.sql h1:br6W6LPEnnsejlz/7hRm9zthwStCzjN2vZkqVPxlmvo=
    20230316090502.sql h1:GfeRjkSeoCt3JVRtLQNa/r50lRfpAPXS7AqTU2ZNFgY=
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"migrations","creationTimestamp":null},"data":{"20230316085611.sql":"-- Create \"users\" table\nCREATE TABLE `users` (\n `id` int NOT NULL,\n `user_name` varchar(255) NOT NULL,\n `email` varchar(255) NOT NULL,\n PRIMARY KEY (`id`)\n) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;\n","20230316090502.sql":"-- Create \"posts\" table\nCREATE TABLE `posts` (\n `id` int NOT NULL,\n `user_id` int NOT NULL,\n `title` varchar(255) NOT NULL,\n `body` text NOT NULL,\n PRIMARY KEY (`id`),\n INDEX `user_id` (`user_id`),\n CONSTRAINT `posts_ibfk_1` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON UPDATE NO ACTION ON DELETE CASCADE\n) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;\n","atlas.sum":"h1:XBXbh+rzLis8gknjlIqnxXLBkOZ+sN2v2p7KjyVFYYM=\n20230316085611.sql h1:br6W6LPEnnsejlz/7hRm9zthwStCzjN2vZkqVPxlmvo=\n20230316090502.sql h1:GfeRjkSeoCt3JVRtLQNa/r50lRfpAPXS7AqTU2ZNFgY=\n"}}
  name: migrations

Apply the ConfigMap to your cluster:

kubectl apply -f migrations.yaml

Kubernetes will create a ConfigMap named migrations with the migration directory files:

configmap/migrations created

5. Create an AtlasMigration resource

Now that we have a database and a migration directory, we can create an AtlasMigration resource to manage our database schema. The AtlasMigration resource is a custom resource that you use to define the desired state of your database schema. The operator will then reconcile the actual state of your database schema with the desired state.

To create an AtlasMigration resource, create a YAML file named atlas-migration.yaml with the following content:

apiVersion: db.atlasgo.io/v1alpha1
kind: AtlasMigration
metadata:
  name: atlas-migration
spec:
  urlFrom:
    secretKeyRef:
      key: url
      name: mysql-credentials
  dir:
    configMapRef:
      name: "migrations"

After you create the atlas-migration.yaml file, apply it to your cluster:

kubectl apply -f atlas-migration.yaml

Next, let's wait for the resource to enter a "Ready" state:

kubectl wait --for=condition=Ready atlasmigration/atlas-migration

When the operator finishes reconciling the AtlasMigration resource, the command will report that the condition was met:

atlasmigration.db.atlasgo.io/atlas-migration condition met

6. Verify the migrations were applied

Finally, to verify the migrations were applied, connect to the database and check to see if the users table was created:

kubectl exec -it $(kubectl get pods -l app=mysql -o jsonpath='{.items[0].metadata.name}') -- mysql -uroot -ppass -e "describe myapp.users"

You should see the following output:

+-----------+--------------+------+-----+---------+-------+
| Field     | Type         | Null | Key | Default | Extra |
+-----------+--------------+------+-----+---------+-------+
| id        | int          | NO   | PRI | NULL    |       |
| user_name | varchar(255) | NO   |     | NULL    |       |
| email     | varchar(255) | NO   |     | NULL    |       |
+-----------+--------------+------+-----+---------+-------+

Check that the posts table was created as well:

kubectl exec -it $(kubectl get pods -l app=mysql -o jsonpath='{.items[0].metadata.name}') -- mysql -uroot -ppass -e "describe myapp.posts"

You should see the following output:

+---------+--------------+------+-----+---------+-------+
| Field   | Type         | Null | Key | Default | Extra |
+---------+--------------+------+-----+---------+-------+
| id      | int          | NO   | PRI | NULL    |       |
| user_id | int          | NO   | MUL | NULL    |       |
| title   | varchar(255) | NO   |     | NULL    |       |
| body    | text         | NO   |     | NULL    |       |
+---------+--------------+------+-----+---------+-------+

And that's it! You've successfully deployed the Atlas Operator and applied migrations to your database.

Conclusion

In this blog post, we showed you how to use the Atlas Operator to manage your database schema in Kubernetes using a versioned migrations workflow. To learn more about the Atlas Operator, check out the Atlas Operator GitHub repository as well as the documentation on the Atlas website.

How can we make Atlas better?

We would love to hear from you on our Discord server ❤️.

Announcing External Schemas and magical GORM support

· 7 min read
Rotem Tamir
Building Atlas
TL;DR

You can now import the desired database schema from any ORM or other tool into Atlas, and use it to automatically plan migrations for you.

See an example

Introduction

Today, I'm happy to share with you one of the most exciting features we've added to Atlas since its inception: "External Schemas".

Atlas is a modern tool for managing your database schema. It allows you to inspect, plan, lint and execute schema changes to your database. It is designed to be used by developers, DBAs and DevOps engineers alike.

Schema-as-Code

Atlas is built around the concept of database "Schema-as-Code", which means that you define the desired schema of your database in a declarative way, and Atlas takes care of planning and executing the necessary migrations to get your database to the desired state. The goal of this approach is to let organizations build a single source of truth for complex data topologies, and to make it easy to collaborate on schema changes.

Schema Loaders

To achieve this goal, Atlas provides support for "Schema Loaders" which are different mechanisms for loading the desired state of your database schema into Atlas. Until today, Atlas supported a few ways to load your schema:

  • Using Atlas DDL - an HCL based configuration language for defining database schemas.
  • Using Plain SQL - a simple way to define your schema using plain SQL files (CREATE TABLE statements, etc.)
  • From an existing database - Atlas can connect to your database and load the schema from it.
  • The Ent ORM - Atlas can load the schema of your Ent project.

Today, we are adding support for "External Schemas", which means that you can now import the desired database schema from any ORM or other tool into Atlas, and use it to automatically plan migrations and execute them for you.

How do External Schemas work?

External Schemas are implemented using a new type of Datasource called external_schema. The external_schema data source enables the import of an SQL schema from an external program into Atlas' desired state. With this data source, users have the flexibility to represent the desired state of the database schema in any language.

To use an external_schema, create a file named atlas.hcl with the following content:

data "external_schema" "example" {
program = [
"echo",
"create table users (name text)",
]
}

env "local" {
src = data.external_schema.example.url
dev = "sqlite://file?mode=memory&_fk=1"
}

In this dummy example, we use the echo command to generate a simple SQL schema. In a real-world scenario, you would use a program that understands your ORM or tool of choice to generate the desired schema. Some ORMs support this out-of-the-box, such as Laravel Eloquent's schema:dump command, while others require some simple integration work to extract the schema.
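
With this atlas.hcl in place, any Atlas command can consume the external schema through the local environment. For example, a minimal sketch that applies the echoed schema to a local SQLite file (demo.db is an arbitrary target chosen for illustration):

atlas schema apply --env local --url "sqlite://demo.db"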

In the next section we will present the GORM Atlas Provider and how it can be used to seamlessly integrate a GORM based project with Atlas.

Demo Time

GORM is a popular ORM widely used in the Go community. GORM allows users to manage their database schemas using its AutoMigrate feature, which is usually sufficient during development and in many simple cases.

However, at some point, teams need more control and decide to employ the versioned migrations methodology. Once this happens, the responsibility for planning migration scripts and making sure they are in line with what GORM expects at runtime is moved to developers.

Atlas can automatically plan database schema migrations for developers using GORM. Atlas plans migrations by calculating the diff between the current state of the database, and its desired state.

In the context of versioned migrations, the current state can be thought of as the database schema that would have been created by applying all previous migration scripts.

Installation

If you haven't already, install Atlas on macOS or Linux by running:

curl -sSf https://atlasgo.sh | sh

See atlasgo.io for more installation options.

Install the provider by running:

go get -u ariga.io/atlas-provider-gorm

Standalone vs Go Program mode

The Atlas GORM Provider can be used in two modes:

  • Standalone - If all of your GORM models exist in a single package, and either embed gorm.Model or contain gorm struct tags, you can use the provider directly to load your GORM schema into Atlas.
  • Go Program - If your GORM models are spread across multiple packages, or do not embed gorm.Model or contain gorm struct tags, you can use the provider as a library in your Go program to load your GORM schema into Atlas.

Standalone mode

If all of your GORM models exist in a single package, and either embed gorm.Model or contain gorm struct tags, you can use the provider directly to load your GORM schema into Atlas.

In your project directory, create a new file named atlas.hcl with the following contents:

data "external_schema" "gorm" {
program = [
"go",
"run",
"-mod=mod",
"ariga.io/atlas-provider-gorm",
"load",
"--path", "./path/to/models",
"--dialect", "mysql", // | postgres | sqlite
]
}

env "gorm" {
src = data.external_schema.gorm.url
dev = "docker://mysql/8/dev"
migration {
dir = "file://migrations"
}
format {
migrate {
diff = "{{ sql . \" \" }}"
}
}
}

In this example, we use the go run command to run the atlas-provider-gorm program and load the schema from the ./path/to/models directory. The atlas-provider-gorm program will scan the directory for GORM models and generate the desired schema for them. The --dialect flag is used to specify the database dialect that the schema should be generated for. The atlas-provider-gorm program supports the following dialects: mysql, postgres, and sqlite.
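
Because the data source simply shells out to this program, you can also run it directly to preview the SQL it produces before wiring it into Atlas (using the same placeholder path as above):

go run -mod=mod ariga.io/atlas-provider-gorm load --path ./path/to/models --dialect mysql

The output is a series of plain CREATE TABLE statements representing the desired state of your GORM models.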

For the sake of brevity, we will not review the Go program mode in this post, but you can find more information about it in the GORM Guide.

External schemas in action

Atlas supports a versioned migrations workflow, where each change to the database is versioned and recorded in a migration file. You can use the atlas migrate diff command to automatically generate a migration file that will migrate the database from its latest revision to the current GORM schema.

Suppose we have the following GORM models in our models package:

package models

import "gorm.io/gorm"

type User struct {
	gorm.Model
	Name string
	Pets []Pet
}

type Pet struct {
	gorm.Model
	Name   string
	User   User
	UserID uint
}

We can now generate a migration file by running this command:

atlas migrate diff --env gorm 

Observe that files similar to this were created in the migrations directory:

migrations
|-- 20230627123246.sql
`-- atlas.sum

0 directories, 2 files

Examining the contents of 20230627123246.sql:

-- Create "users" table
CREATE TABLE `users` (
`id` bigint unsigned NOT NULL AUTO_INCREMENT,
`created_at` datetime(3) NULL,
`updated_at` datetime(3) NULL,
`deleted_at` datetime(3) NULL,
`name` longtext NULL,
PRIMARY KEY (`id`),
INDEX `idx_users_deleted_at` (`deleted_at`)
) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
-- Create "pets" table
CREATE TABLE `pets` (
`id` bigint unsigned NOT NULL AUTO_INCREMENT,
`created_at` datetime(3) NULL,
`updated_at` datetime(3) NULL,
`deleted_at` datetime(3) NULL,
`name` longtext NULL,
`user_id` bigint unsigned NULL,
PRIMARY KEY (`id`),
INDEX `fk_users_pets` (`user_id`),
INDEX `idx_pets_deleted_at` (`deleted_at`),
CONSTRAINT `fk_users_pets` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON UPDATE NO ACTION ON DELETE NO ACTION
) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;

Amazing! Atlas automatically generated a migration file that will create the pets and users tables in our database!

Next, alter the models.Pet struct to add a Nickname field:

type Pet struct {
	gorm.Model
	Name     string
+	Nickname string
	User     User
	UserID   uint
}

Re-run this command:

atlas migrate diff --env gorm 

Observe a new migration file is generated:

-- Modify "pets" table
ALTER TABLE `pets` ADD COLUMN `nickname` longtext NULL;

Conclusion

In this post, we have presented External Schemas and how they can be used to automatically generate the database schema directly from your ORM models. We have also demonstrated how to use the GORM Atlas Provider to automatically plan migrations for your GORM models.

We believe that this is a huge step forward in making Atlas more accessible to developers who are already using ORMs in their projects. We hope that you will find this feature useful and we look forward to hearing your feedback.

How can we make Atlas better?

We would love to hear from you on our Discord server ❤️.

Announcing SOC2 Compliance for Atlas Cloud

· 3 min read
Rotem Tamir
Building Atlas

Today we are happy to announce that Atlas Cloud, our cloud offering, has achieved SOC2 compliance. This is a big milestone for us, and it reflects our commitment to providing solid infrastructure for our users and customers.

SOC2 is a security and compliance standard that helps organizations demonstrate their ability to protect customer data and ensure the availability of their services. It’s like an independent third-party audit that evaluates how well a company follows industry-standard security practices, covering areas such as availability, processing integrity, confidentiality, and privacy.

Achieving SOC2 compliance requires a significant investment in time, effort, and resources, so you may be wondering why we decided to pursue this goal so early in the life of our product. The knee-jerk response of any seasoned engineer to large and long infrastructure projects should be:

YAGNI. You ain’t gonna need it.

When building software systems, we often spend a huge amount of time developing abstractions and tooling, only to find out that product requirements changed, rendering our work useless.

The first commandment of the lean movement: Waste Not. Do the minimum you can to learn what will work. Do less to move fast. But there’s another side to this coin: teams with solid infrastructure move way faster. Try driving a sportscar on a shabby dirt road.

So why invest resources in compliance early on?

First of all, are we gonna need it? What are the odds that we won’t need the SOC2 certification, and that it won’t bring us business value?

Ariga is an open-core company building tools for software engineering teams. We are building Atlas Cloud to be the safest, fastest, and richest way for organizations to manage database schema changes.

We believe that in order to earn the trust of other organizations, in order for them to grant us the privilege of being infrastructure to their business, we must hold ourselves accountable to rigid standards.

To be perfectly honest, the vast majority of the things that we were required to demonstrate in the compliance process, such as mandatory code reviews, disaster recovery, and data privacy controls, are things we simply consider to be solid engineering practices that we hold ourselves accountable to regardless of any external auditing process.

We are proud of this accomplishment and look forward to continuing to provide our users with the best possible experience using Atlas Cloud. We will continue to invest in our security and compliance programs to ensure we stay ahead of the curve and remain a trusted partner to our customers.

Ariga raises 18M$ to build the future of working with databases

· 6 min read
Ariel Mashraki
Building Atlas

Today, we are delighted to share that Ariga (the company behind Atlas) has raised 18M$ in total from Tiger Global and TLV Partners to build our vision for the future of how organizations work with their databases.

This is an important milestone for us as it provides us with the resources we need to keep growing our open-source communities and build the products we envision for the industry. I want to take this opportunity to reflect on the work we've done in the past two+ years, and where we are going as a company.

Background

Databases play a crucial role in driving a company's success. From the outset, companies invest time in selecting a suitable database for building their product, modeling its schema, and building their application on top of it. As companies grow, they face challenges in performance, security, privacy, and more. Databases remain a vital component, accompanying businesses from the earliest stages and throughout their entire journey.

Ariga was founded after a decade of experience in building database infrastructure, and dev tooling frameworks. Our focus in the last two years has been on three fundamental pillars:

  • Pioneering the “Schema-as-Code” concept within the industry
  • Building an innovative database change management platform
  • Open sourcing a versatile framework for developing data-driven server applications

I want to elaborate on these pillars and describe how they shaped our mission statement as an engineering team and a company.

Schema-as-Code

One of the core principles we advocate for is “Schema-as-Code”, which refers to the practice of defining and managing the database schema through code. Similar to Infrastructure as Code (IaC), which is used to manage infrastructure resources, SaC is about defining database schema elements in code format.

As with IaC, being able to capture the desired state of application data topologies (which are comprised of the database schemas, the connections between them, and the policies that govern them) unlocks the ability to automate many operations around them such as:

  • Change management (provisioning, migrations, data migrations)
  • Monitoring (drift detection, performance)
  • Governance (enforcing access, privacy, and change policies)
  • Collaboration (notifications, workflows, catalog/visibility)
  • Runtime (API layers, caching, multi-tenancy, auditing, deletion)

This approach captures not only the schema structure but also its behavior, access permissions, methods of access, and more. Adopting it simplifies management, ensures the schema is version-controlled, and allows the automation of most database operations.

From this core concept which we believe can have an outsized impact on the industry, similar to IaC, stem our core activities:

Atlas, our OSS schema management engine, was developed to bring the SaC concept to life, understanding that without a practical implementation, the theoretical idea couldn't truly showcase its benefits. By making Atlas open-source, we encourage broader adoption of the SaC concept.

Ent, our OSS application framework, embodies the concept of schema-as-code: define your application data topology in code (entities, relations, policies, behaviors), and everything is created automatically from there.

Database Change Management

Databases are the most critical component of almost any software system; database downtime is equivalent to business downtime. Being stateful components, they cannot be easily rebuilt, scaled out, or fixed by a restart. Outages that involve damage to data or simply unavailability of the database are notoriously hard to manage and recover from, often taking long hours of careful work by a team's most senior engineers.

On the other hand, businesses are required to evolve faster than ever. Any change to the business carries a change to its underlying data model, which means modifying the database schema. Teams that can evolve their data model faster can respond quickly to market demands and customer feedback and beat the competition.

This puts organizations in a dilemma. To protect business stability, databases should seldom be changed to reduce the risk of an expensive outage. Simultaneously, database schemas must be constantly modified to accommodate the evolution of the business.

Ariga provides software engineering teams with the fastest and safest way to manage database schema changes. With our platform:

  • Software developers do not need to worry about planning safe changes to the database schema. By using a simple data definition language, developers declare the desired schema of their database and our engine plans and applies the changes for them.
  • Teams can define and enforce change policies during continuous integration to prevent dangerous changes way before they happen.
  • Operators can deploy schema changes to production with peace of mind as dangerous changes are verified and simulated against real production data.

Database Runtime Access

Ariga was founded by the creators of Ent, a Linux Foundation-backed project beloved by software engineers in companies of various sizes and industries, from top firms to small startups.

Ent is an entity framework that provides application developers with robust, privacy-aware APIs for managing data, reducing 90% of repetitive code by generating efficient and fast GraphQL, REST, or gRPC servers. Ent users value it for its exceptional development experience and time-saving capabilities, often returning to it as a tool of choice for new projects.

Even though we actively engaged in our open-source communities, providing assistance on a daily basis to many users on our GitHub, Discord, and Slack channels, we've learned that some companies need additional support beyond the standard open-source options to thrive.

To address the needs of these organizations, we are launching our Ent-erprise Support program. Besides the dedicated support, our aim for this pillar is to solve performance, privacy, and runtime issues, reducing the time and effort required from engineers and allowing them to move much faster.

Some of the problems we address involve multi-tenancy, performance enhancements, secure database access, and improved GraphQL/REST layers that automatically connect to Ent and Atlas. These components are seamlessly integrated not only with Ent but also with other frameworks, making them suitable for a broad range of companies.

Our Mission Statement

In conclusion, Ariga's mission focuses on empowering businesses through integrated database management solutions that span the entire product lifecycle. Our technology ranges from the initial stages of the data definition to ongoing database change management, and during data access at application runtime.

Using our expertise in these areas and our commitment to open source, we aim to revolutionize the way companies manage their database schemas, allowing them to respond quickly to customer demands and market changes while maintaining high-quality and stable infrastructure.

Best,

a8m

Announcing Atlas v0.12: Cloud State Management

· 5 min read
Ariel Mashraki
Building Atlas

It has been two months since we announced the Community Preview Plan for Atlas Cloud, and today I am thrilled to announce the next batch of features that we are releasing to open-source and to Atlas Cloud:

In summary, version v0.12 includes a few major features that are explained in detail below:

  1. We have added support for importing and running migration linting on GitHub PRs for external migration formats, such as Flyway and golang-migrate.
  2. Atlas now supports reading your migration directory directly from your Atlas Cloud account. This eliminates the need for users to build their Docker images with the directory content and makes running schema migrations in production much easier.
  3. By connecting Atlas CLI to Atlas Cloud, migration runs will be recorded in the cloud account, making it easier to monitor and troubleshoot executed migrations.
  4. A new Slack integration is now available for Community Plan accounts. Organizations that connect their migration directories to the cloud can receive notifications to Slack channels when the schemas are updated or deployed, among other events.
  5. A new look has been given to the CI report page. It will be enhanced with additional features in the next version.

Remote Directory State

One of the most common complaints we received from our users is that setting up migration deployments for real-world environments is time-consuming. For each service, users are required to build a Docker image that includes the content of the migration directory, which ensures it is available when Atlas is executed. After this, the built image must be pushed to a registry, and finally, the deployment process needs to be configured to use this newly created image.

Not only does this process add complexity to the setup, it must also be repeated for every migration directory and requires configuring a CI/CD pipeline for each service, adding yet another layer of complexity.

Atlas supports the concept of Data Sources, which enables users to retrieve information stored in an external service or database. In this release, we are introducing a new data source called remote_dir. This feature allows users to configure Atlas to read the content of the migration directory directly from their cloud account, thereby eliminating the need to build Docker images with the directory content.

Here is an example of how to configure the remote_dir data source:

atlas.hcl
variable "cloud_token" {
  type = string
}

atlas {
  cloud {
    token = var.cloud_token
  }
}

data "remote_dir" "migrations" {
  // The name of the migration directory in Atlas Cloud.
  // In this example, the directory is named "graph".
  name = "graph"
}

env {
  // Set environment name dynamically based on --env value.
  name = atlas.env
  migration {
    dir = data.remote_dir.migrations.url
  }
}

With this configuration in place, pass the cloud token as an input variable when applying migrations:

atlas migrate apply \
  --url "<DATABASE_URL>" \
  --config file://path/to/atlas.hcl \
  --env prod \
  --var cloud_token="<ATLAS_TOKEN>"

Visualizing Migration Runs

Schema migrations are an integral part of application deployments, yet the setup varies between applications and teams. Some teams prefer using init-containers, while others run migrations from their CD pipelines. There are also those who opt for Helm upgrade hooks or use our Kubernetes operator. The differences also apply to databases: some applications work with a single database, while others manage multiple databases, as is often the case in multi-tenant applications.

However, across all these scenarios, there's a shared need for a single place to view and track the progress of executed schema migrations. This includes triggering alerts and providing the means to troubleshoot and manage recovery if problems arise.

Starting from version v0.12, if the cloud configuration was set with a valid token, Atlas will log migration runs in your cloud account. Here's a demonstration of how it looks in action:

We have several new features lined up for the Community Plan in the next release. If you're interested in them earlier, don't hesitate to reach out to me in our Discord community.

Slack Webhooks

In this release, we're making our Slack Webhooks integration available to all users, promoting better team collaboration and providing instant alerts when issues occur. This new feature allows different groups within the organization, such as the data engineering team, to receive notifications when the schema changes, or to ping the on-call engineer when a deployment fails.

If you're interested in enabling this feature for your project, please check out the documentation.


What's next?

There's a lot more coming in the following months. Our next releases will be focused on making other database objects such as views, triggers, and policies accessible to all Atlas users. We'll also continue to make more features from our commercial product available to both open-source and community preview users.

As always, we value community feedback and strive to be responsive to it. Please feel free to reach out and share your feedback with us on our Discord if you think something is missing or could be improved. Cheers!

Quickly visualize your Django schemas with DjangoViz

· 3 min read
DjangoViz is Deprecated

DjangoViz has been deprecated. Please refer to the Atlas Django Provider documentation for up to date instructions.

Having a visual representation of your data model can be helpful as it allows for easier comprehension of complex data structures, and enables developers to better understand and collaborate on the data model of the application they are building.

Entity relationship diagrams (ERDs) are a common way to visualize data models, by showing how data is stored in the database. ERDs are graphical representations of the entities, their attributes, and the way these entities are related to each other.

Today we are happy to announce the release of DjangoViz, a new tool for automatically creating ERDs from Django data models.

Django is an open source Python framework for building web applications quickly and efficiently. In this blog post, I will introduce DjangoViz and demonstrate how to use it for generating Django schema visualizations using the Atlas playground.


Django ORM

Django ORM is a built-in module in the Django web framework. It offers a high-level abstraction layer that enables developers to define complex application data models with ease. Rather than working directly with tables and foreign keys, developers define Django models as Python classes and the relationships between them:

from django.db import models

class User(models.Model):
    username = models.CharField(max_length=255)
    email = models.EmailField(unique=True)
    password = models.CharField(max_length=255)

class Post(models.Model):
    title = models.CharField(max_length=255)
    content = models.TextField()
    author = models.ForeignKey(User, on_delete=models.CASCADE)

When the application runs, Django translates these Python models into database schemas, mapping each model to a corresponding database table and each field to a corresponding column in the table.
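For illustration, the two models above translate into DDL roughly along these lines. This is a simplified sketch: the exact statements depend on the Django version, the app label, and the database backend, and here we assume a PostgreSQL backend and a hypothetical app named demo:

-- Simplified sketch of the DDL Django would generate for the models above.
-- Assumes a PostgreSQL backend and a hypothetical app label "demo".
CREATE TABLE "demo_user" (
    "id"       bigserial PRIMARY KEY,
    "username" varchar(255) NOT NULL,
    "email"    varchar(254) NOT NULL UNIQUE,
    "password" varchar(255) NOT NULL
);

CREATE TABLE "demo_post" (
    "id"        bigserial PRIMARY KEY,
    "title"     varchar(255) NOT NULL,
    "content"   text NOT NULL,
    "author_id" bigint NOT NULL REFERENCES "demo_user" ("id")
);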
When working with schemas and making changes to them, being able to understand the full picture just through code can get complicated very quickly. To help developers better understand their schema, we have created DjangoViz.

Introducing DjangoViz

For the purpose of this demo, we will follow the Django getting started tutorial, and showcase how you can use DjangoViz to visualize the default models included by Django's startproject command.

First, install Django and create a new project:

pip install Django
django-admin startproject atlas_demo
cd atlas_demo

Install the DjangoViz package:

pip install djangoviz

Add DjangoViz to your Django project's INSTALLED_APPS in atlas_demo/settings.py:

INSTALLED_APPS = [
    ...,
    'djangoviz',
    ...
]

DjangoViz supports either PostgreSQL or MySQL; in this example, we will use PostgreSQL:

Install the PostgreSQL driver:

pip install psycopg2-binary

Configure the database to work with PostgreSQL in the settings.py file:

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": "postgres",
        "USER": "postgres",
        "PASSWORD": "pass",
        "HOST": "127.0.0.1",
        "PORT": "5432",
    }
}

Start a PostgreSQL container:

docker run --rm -p 5432:5432  -e POSTGRES_PASSWORD=pass -d postgres:15

Now, you can visualize your schema by running the djangoviz management command from your new project directory:

python manage.py djangoviz

You will get a public link to your visualization, which will present an ERD and the schema itself in SQL or HCL:

Here is a public link to your schema visualization:
https://gh.atlasgo.cloud/explore/ac523fef

Clicking the link opens the ERD of your new project.


Wrapping up

In this post, we introduced DjangoViz, a new tool that helps you quickly visualize Django schemas. With it, you can easily get an overview of your data model and a visual representation of your schema. We would love to hear your thoughts and feedback if you decide to give it a go!

Have questions? Feedback? Find our team on our Discord server ❤️.

Announcing the Atlas HCL VSCode Extension

· 3 min read

Introduction

Atlas supports many ways to describe database schemas. Using Schema Loaders, Atlas gives users the flexibility to define schemas using plain SQL, a connection to another database, or Atlas HCL.

What is Atlas HCL?

Atlas includes a data-definition language based on a Terraform-like syntax. This language, which was designed specifically for describing database schemas, is based on HashiCorp's HCL toolkit for creating structured configuration languages.

This data definition language is easy to read, edit, and visualize, and it allows us to organize our schema data in an efficient way that has many benefits over plain SQL.
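To give a sense of what this looks like, here is a brief, illustrative sketch of a table definition in Atlas HCL (the schema, table, and column names are made up for this example):

schema "app" {}

table "users" {
  schema = schema.app
  // Illustrative columns; a real schema would define its own.
  column "id" {
    type = int
  }
  column "email" {
    type = varchar(255)
    null = false
  }
  primary_key {
    columns = [column.id]
  }
  index "idx_email" {
    unique  = true
    columns = [column.email]
  }
}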

What is the Atlas HCL VSCode Extension?

To make it easier for users to define schemas using Atlas HCL, we're happy to announce the release of the official Atlas HCL VSCode Extension. This extension provides a rich set of features that will help users define schemas using Atlas HCL.

In this post, we'll take a closer look at the features of this extension and how it can benefit your work.

Features

Auto Completion

The extension provides auto completion support for all SQL resources available within Atlas. For a comprehensive list of the supported resources, please visit this document: SQL Resources.

Completion

Batch Completion

Referencing Qualified Tables

The extension supports referencing tables and their columns in a structured manner, either relative to the current location (e.g. column.id) or fully qualified through the table (e.g. table.user.column.id).

Referencing Qualified Tables

Linting

To ensure that errors are caught from the beginning, the Atlas HCL VSCode Extension offers a range of validations that enhance type safety for users. The specific validations provided will vary depending on the SQL dialect being used. This feature is designed to help users identify and fix issues early on, resulting in a more efficient and effective schema-building process.

Linting

Usage

To use this extension, you can either install it directly from the VSCode desktop or download it from the Marketplace. The extension will appear in the extensions list as Atlas HCL.

Extension

After installing the Atlas HCL extension, it automatically registers for all the languages that Atlas supports. These languages include:

Name                 Language ID                Description
Atlas SQLite         atlas-schema-sqlite        SQLite dialect
Atlas MySQL          atlas-schema-mysql         MySQL dialect
Atlas PostgreSQL     atlas-schema-postgresql    PostgreSQL dialect
Atlas MariaDB        atlas-schema-mariadb       MariaDB dialect
Atlas Config File    atlas-config               Project Configuration

By default, the extension will auto-detect atlas.hcl files in your project. However, if you have any schema files in your project, you must choose the language for them. You can do this in three ways in VSCode:

  1. Click on "Change Language" for your schema files in the section at the bottom of the window (see image below).

Installation

  2. Use the Command Palette and choose "Change language mode" (see image below).

Installation by command palette

  3. Use Language Identifiers.

Wrapping Up

In this post, we've introduced the Atlas HCL VSCode Extension. There's much more inside, and we invite you to install it today and give it a try.

What next?

Announcing v0.11.0: Manage database schemas with Kubernetes and Atlas

· 8 min read
Rotem Tamir
Building Atlas
TL;DR

You can now use the Atlas Kubernetes Operator to safely manage your database schemas with Atlas from within your Kubernetes cluster.

See an example

Introduction

Today, we are excited to announce the release of Atlas v0.11.0, which introduces the Atlas Kubernetes Operator. This release is a major milestone in our mission to make Atlas the most robust and modern way to manage your database schemas. With the Atlas Kubernetes Operator, you can now manage your database schemas with Atlas from within your Kubernetes cluster.

In this release, we also introduce a new concept to Atlas - "Diff Policies" - which allow you to customize the way Atlas plans database migrations for you. This concept is directly related to the Kubernetes Operator, and we will explain how below.

What are Kubernetes Operators?

Kubernetes has taken the cloud infrastructure world by storm mostly thanks to its declarative API. When working with Kubernetes, developers provide their cluster's desired configuration to the Kubernetes API, and Kubernetes is responsible for reconciling the actual state of the cluster with the desired state. This allows developers to focus on the desired state of their cluster, and let Kubernetes handle the complexities of how to get there.

This works out incredibly well for stateless components, such as containers, network configuration and access policies. The benefit of stateless components is that they can be replaced at any time, and Kubernetes can simply create a new instance of the component with the desired configuration. For stateful resources, such as databases, this is not the case. Throwing away a running database and creating a new one with the desired configuration is not an option.

For this reason, reconciling the desired state of a database with its actual state can be a complex task that requires a lot of domain knowledge. Kubernetes Operators were introduced to the Kubernetes ecosystem to help users manage complex stateful resources by codifying this type of domain knowledge into a Kubernetes controller.

What is the Atlas Kubernetes Operator?

The Atlas Kubernetes Operator is a Kubernetes controller that uses Atlas to manage your database schema. It allows you to define the desired schema and apply it to your database using the Kubernetes API.

Declarative schema migrations

The Atlas Kubernetes Operator supports declarative migrations. In declarative migrations, the desired state of the database is defined by the user and the operator is responsible for reconciling the desired state with the actual state of the database (planning and executing CREATE, ALTER and DROP statements).

Diffing policies

One of the common objections to applying declarative workflows to databases is that there are often multiple ways to achieve the same desired state. For example, if you are running a Postgres database, you may want to add an index to a table. Depending on your circumstances, you may want to add this index with or without the CONCURRENTLY option. When using a declarative workflow, you supply where you want to go, but not how to get there.

To address this concern, we have introduced the concept of "diff policies" to Atlas. Diff policies allow you to customize the way Atlas plans database schema changes for you. For example, you can define a diff policy that will always add the CONCURRENTLY option to CREATE INDEX statements. You can also define a diff policy that will skip certain kinds of changes (for example DROP COLUMN) altogether.

Diff policies can be defined in the atlas.hcl file you use to configure Atlas. For example:

env "local" {
diff {
// By default, indexes are not created or dropped concurrently.
concurrent_index {
create = true
drop = true
}
}
}
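Similarly, a diff policy can skip certain kinds of changes altogether. As a minimal sketch along the same lines, the following policy tells Atlas to never plan DROP COLUMN statements:

env "local" {
  diff {
    // Skip planning DROP COLUMN statements altogether (illustrative policy).
    skip {
      drop_column = true
    }
  }
}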

Diff policies are especially valuable when using the Atlas Kubernetes Operator, as they allow you to customize and constrain the way the operator manages your database to account for your specific needs. We will see an example of this below.

Demo time!

Let's see the Atlas Kubernetes Operator in action. In this demo, we will use the Atlas Kubernetes Operator to manage a MySQL database running in a Kubernetes cluster.

The Atlas Kubernetes Operator is available as a Helm Chart. To install the chart with the release name atlas-operator:

helm install atlas-operator oci://ghcr.io/ariga/charts/atlas-operator

After installing the operator, follow these steps to get started:

  1. Create a MySQL database and a secret with an Atlas URL to the database:
kubectl apply -f https://raw.githubusercontent.com/ariga/atlas-operator/65dce84761354d1766041c7f286b35cc24ffdddb/config/integration/databases/mysql.yaml

Result:

deployment.apps/mysql created
service/mysql created
secret/mysql-credentials created

  2. Create a file named schema.yaml containing an AtlasSchema resource to define the desired schema:
apiVersion: db.atlasgo.io/v1alpha1
kind: AtlasSchema
metadata:
  name: atlasschema-mysql
spec:
  urlFrom:
    secretKeyRef:
      key: url
      name: mysql-credentials
  schema:
    sql: |
      create table users (
        id int not null auto_increment,
        name varchar(255) not null,
        email varchar(255) unique not null,
        short_bio varchar(255) not null,
        primary key (id)
      );

  3. Apply the schema:
kubectl apply -f schema.yaml

Result:

atlasschema.db.atlasgo.io/atlasschema-mysql created

  4. Check that our table was created:
kubectl exec -it $(kubectl get pods -l app=mysql -o jsonpath='{.items[0].metadata.name}') -- mysql -uroot -ppass -e "describe myapp.users"

Result:

+-----------+--------------+------+-----+---------+----------------+
| Field     | Type         | Null | Key | Default | Extra          |
+-----------+--------------+------+-----+---------+----------------+
| id        | int          | NO   | PRI | NULL    | auto_increment |
| name      | varchar(255) | NO   |     | NULL    |                |
| email     | varchar(255) | NO   | UNI | NULL    |                |
| short_bio | varchar(255) | NO   |     | NULL    |                |
+-----------+--------------+------+-----+---------+----------------+

Hooray! We applied our desired schema to our target database.

Diff policies in action

Now let's see how we can use diffing policies to customize the way the operator manages our database. In this example, we will demonstrate how we can prevent the operator from dropping columns in our database. Modify the schema.yaml file:

apiVersion: db.atlasgo.io/v1alpha1
kind: AtlasSchema
metadata:
  name: atlasschema-mysql
spec:
  urlFrom:
    secretKeyRef:
      key: url
      name: mysql-credentials
+ policy:
+   diff:
+     skip:
+       drop_column: true
  schema:
    sql: |
      create table users (
        id int not null auto_increment,
        name varchar(255) not null,
        email varchar(255) unique not null,
-       short_bio varchar(255) not null,
        primary key (id)
      );

In the example above we added a policy section to our AtlasSchema resource. In this section, we defined a diff policy that will skip DROP COLUMN statements. In addition, we dropped the short_bio column from our schema. Let's apply the updated schema:

kubectl apply -f schema.yaml

Next, wait for the operator to reconcile the desired state with the actual state of the database:

kubectl wait --for=condition=Ready atlasschema/atlasschema-mysql

Finally, let's check that the short_bio column was not dropped. Run:

kubectl exec -it $(kubectl get pods -l app=mysql -o jsonpath='{.items[0].metadata.name}') -- mysql -uroot -ppass -e "describe myapp.users"

Result:

+-----------+--------------+------+-----+---------+----------------+
| Field     | Type         | Null | Key | Default | Extra          |
+-----------+--------------+------+-----+---------+----------------+
| id        | int          | NO   | PRI | NULL    | auto_increment |
| name      | varchar(255) | NO   |     | NULL    |                |
| email     | varchar(255) | NO   | UNI | NULL    |                |
| short_bio | varchar(255) | NO   |     | NULL    |                |
+-----------+--------------+------+-----+---------+----------------+

As you can see, the short_bio column was not dropped. This is because we defined a diffing policy that skips DROP COLUMN statements.

Linting policies

An alternative way to prevent the operator from dropping columns is to use a linting policy. Linting policies allow you to define rules that will be used to validate the changes to the schema before they are applied to the database. Let's see how we can define a policy that prevents the operator from applying destructive changes to the schema. Edit the schema.yaml file:

apiVersion: db.atlasgo.io/v1alpha1
kind: AtlasSchema
metadata:
  name: atlasschema-mysql
spec:
  urlFrom:
    secretKeyRef:
      key: url
      name: mysql-credentials
  policy:
+   lint:
+     destructive:
+       error: true
-   diff:
-     skip:
-       drop_column: true
  schema:
    sql: |
      create table users (
        id int not null auto_increment,
        name varchar(255) not null,
        email varchar(255) unique not null,
        primary key (id)
      );

In the example above, we replaced the diff policy with a lint policy. In this policy, we defined a destructive rule that will cause the operator to fail if it detects a destructive change to the schema. Notice that the short_bio column is no longer present in the desired schema (we removed it in our previous change).

Let's apply the updated schema:

kubectl apply -f schema.yaml

Next, let's wait for the operator to reconcile the desired state with the actual state of the database:

kubectl wait --for=condition=Ready atlasschema/atlasschema-mysql --timeout 10s

Notice that this time, the operator failed to reconcile the desired state with the actual state of the database:

error: timed out waiting for the condition on atlasschemas/atlasschema-mysql

Let's check the reason for this failure:

kubectl get atlasschema atlasschema-mysql -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'

Result:

destructive changes detected:
- Dropping non-virtual column "short_bio"

Hooray! We have successfully prevented the operator from applying destructive changes to our database.

Conclusion

In this post, we have presented the Atlas Kubernetes Operator and demonstrated how you can use it to manage your database schema. We also covered diffing and linting policies and showed how you can use them to customize the way the operator manages your database.

How can we make Atlas better?

We would love to hear from you on our Discord server ❤️.