
2 posts tagged with "hibernate"


Announcing v0.16: ClickHouse support, Hibernate Provider, Baseline Schemas and more

· 10 min read
Rotem Tamir
Building Atlas

Hi everyone,

It's been a while since our last version announcement and today I'm happy to share with you v0.16, which includes some very exciting improvements for Atlas:

  • ClickHouse Beta Support - ClickHouse is a high-performance, columnar database optimized for analytics and real-time query processing. Support for ClickHouse in Atlas has been one of the top requested features by our community in the past year. Today, we are happy to announce that ClickHouse is officially in Beta!
  • Hibernate Provider - Atlas now supports loading the desired state of your database directly from your Hibernate code. Hibernate developers can now join the GORM, Sequelize, TypeORM, and other communities that already use Atlas to manage their database schemas.
  • Baseline Schemas - In some cases, your migrations rely on certain database objects existing prior to your application schema, for example extensions or legacy tables. Atlas now supports defining a baseline schema that is loaded before automatically planning and applying your migrations.
  • Proactive conflict detection - Teams that have connected their project to Atlas Cloud will get a prompt in the CLI if their migration directory is out of sync with the latest version in Atlas Cloud. This ensures that new migration files are added in a sequential order, preventing unexpected behavior.
  • Mermaid Support - Atlas now supports generating a Mermaid diagram of your database schema. This is a great way to visualize your database schema and share it with your team.
  • Review Policies - Users working with declarative migrations can now define "review policies" that set thresholds for which kinds of changes require human review and which can be applied automatically.
  • Postgres Sequences - Another long awaited feature, Atlas now supports managing sequences in PostgreSQL.

I know that's quite a list, so let's dive right in!

ClickHouse Support

ClickHouse is a high-performance, columnar database optimized for analytics and real-time query processing. Support for ClickHouse in Atlas has been one of the top requested features by our community in the past year. Our team has been working hard to bring this feature to you and today we are happy to announce that ClickHouse is now available to use in Beta!

Here's what you need to do to get started:

  1. Log in to your Atlas Cloud account. If you don't have an account yet, you can sign up for free.
  2. Download the latest version of the Atlas CLI:

    To download and install the latest release of the Atlas CLI, simply run the following in your terminal:

    curl -sSf https://atlasgo.sh | sh
  3. Log in to your Atlas Cloud account from the CLI:
    atlas login
  4. Spin up a local ClickHouse instance:
    docker run -d --name clickhouse-sandbox -p 9000:9000 clickhouse/clickhouse-server:latest
  5. Verify that you are able to connect to this instance:
    atlas schema inspect -u 'clickhouse://localhost:9000'
    If everything is working correctly, you should see the following output:
     schema "default" {
    engine = Atomic
    }
  6. Create a new file named schema.hcl with the following content:
     schema "default" {
    engine = Atomic
    }

    table "users" {
    schema = schema.default
    engine = MergeTree
    column "id" {
    type = UInt32
    }
    column "name" {
    type = String
    }
    column "created" {
    type = DateTime
    }
    primary_key {
    columns = [column.id]
    }
    }
  7. Run the following command to apply the schema to your local ClickHouse instance:
     atlas schema apply -u 'clickhouse://localhost:9000' -f schema.hcl
    Atlas will prompt you to confirm the changes:
     -- Planned Changes:
     -- Create "users" table
     CREATE TABLE `default`.`users` (
       `id` UInt32,
       `name` String,
       `created` DateTime
     ) ENGINE = MergeTree
     PRIMARY KEY (`id`) SETTINGS index_granularity = 8192;
    Hit "Enter" to apply the changes.
  8. Amazing! Our schema has been applied to the database!
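
As a quick sanity check, you can re-run the apply command. Since the database now matches the desired schema, Atlas should report that there are no changes to be made:

# Re-running the apply against the same schema file should be a no-op
atlas schema apply -u 'clickhouse://localhost:9000' -f schema.hcl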

Hibernate Provider

Atlas now supports loading the desired state of your database directly from your Hibernate code. Packaged as both a Maven and Gradle plugin, the Hibernate provider allows you to seamlessly integrate Atlas into your existing Hibernate project.

Hibernate ships with an automatic schema management tool called hbm2ddl. Similarly to Atlas, this tool can inspect a target database and automatically migrate the schema to the desired one. However, the Hibernate team has been advising for years not to use this tool in production:

Although the automatic schema generation is very useful for testing and prototyping purposes, in a production environment, it’s much more flexible to manage the schema using incremental migration scripts.

This is where Atlas comes in. Atlas can read the Hibernate schema and plan database schema migrations.

To get started, refer to the blog post we published earlier this week.

Baseline Schemas

LOGIN REQUIRED

The docker block is available for logged-in users only. To use this feature, run:

atlas login

In some cases, there is a need to configure a baseline schema for the dev database so that every computation using the dev database starts from this baseline. For example, your schema or migrations may rely on objects, extensions, or other schema resources that are not managed by the project.

To configure such a baseline, use the docker block with the relevant image and pass to it the script for creating the base schema for the project:

docker "postgres" "dev" {
image = "postgres:15"
schema = "public"
baseline = <<SQL
CREATE SCHEMA "auth";
CREATE EXTENSION IF NOT EXISTS "uuid-ossp" SCHEMA "auth";
CREATE TABLE "auth"."users" ("id" uuid NOT NULL DEFAULT auth.uuid_generate_v4(), PRIMARY KEY ("id"));
SQL
}

env "local" {
src = "file://schema.pg.hcl"
dev = docker.postgres.dev.url
}

For more details refer to the documentation.
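
With this configuration in place, every computation that uses the dev database starts from the baseline. For example, planning a migration for the local env defined above (a minimal sketch; atlas migrate diff writes to file://migrations by default):

# The dev container is created from postgres:15 with the baseline script
# applied before the schema diff is computed
atlas migrate diff --env local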

Proactive conflict detection

Teams that have connected their project to Atlas Cloud (see setup) will get a prompt in the CLI if their migration directory is out of sync with the latest version in Atlas Cloud. This ensures that new migration files are added in a sequential order, preventing unexpected behavior. For example:

atlas migrate diff --env dev

? Your directory is outdated (2 migrations behind). Continue or Abort:
  ▸ Continue (Rebase later)
    Abort (Pull changes and re-run the command)

Additionally, the atlas migrate lint command helps enforce this requirement during the CI stage. Learn more on how to integrate Atlas into your GitHub Actions or GitLab CI pipelines.
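
For example, to check the most recent migration file locally before pushing (the same command is demonstrated in the Hibernate post below):

atlas migrate lint --env dev --latest 1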

Mermaid Support

Atlas now supports generating a Mermaid diagram of your database schema. Let's demonstrate this feature using an example schema for a local SQLite database. First, we'll create a new file named sqlite.hcl with the following content:

schema "default" {
}

table "users" {
schema = schema.default
column "id" {
type = int
}
column "name" {
type = text
}
column "email" {
type = text
}
primary_key {
columns = [column.id]
}
}

table "blog_posts" {
schema = schema.default
column "id" {
type = int
}
column "title" {
type = text
}
column "body" {
type = text
}
column "author_id" {
type = int
}
foreign_key "blog_author" {
columns = [column.author_id]
ref_columns = [table.users.column.id]
}
}

Run the following command to inspect the schema and generate the Mermaid code:

atlas schema inspect -u file://sqlite.hcl --dev-url 'sqlite://?mode=memory' --format "{{ mermaid . }}"

The output will look like this:

erDiagram
  users {
    int id PK
    text name
    text email
  }
  blog_posts {
    int id
    text title
    text body
    int author_id
  }
  blog_posts }o--o| users : blog_author

Next, copy this output and paste it into the Mermaid Live Editor.
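
Alternatively, you can redirect the output straight to a file and paste from there (schema.mmd is just an example file name):

atlas schema inspect -u file://sqlite.hcl --dev-url 'sqlite://?mode=memory' --format "{{ mermaid . }}" > schema.mmd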

The editor renders an entity-relationship diagram of the users and blog_posts tables, linked by the blog_author relationship.

Review Policies

Users working with declarative migrations can now define "review policies" that set thresholds for which kinds of changes require human review and which can be applied automatically.

By default, when running atlas schema apply on a target database, if any changes to the target database are required, Atlas will prompt the user to confirm the changes. This is a safety measure to prevent accidental changes to the target database.

However, Atlas ships with an analysis engine that can detect the impact of different changes to the target database. For example, Atlas can detect irreversible destructive changes that will result in data loss, or data-dependent changes that may fail due to data integrity constraints.

With review policies, you can tell Atlas to first analyze the proposed changes and only prompt the user if the changes are above a certain risk threshold. For example, you can configure Atlas to only ask for review if any warnings are found and to automatically apply all changes that do not trigger any diagnostics:

lint {
  review = WARNING
}
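
For instance, assuming the lint block above is part of your atlas.hcl and an env named local is defined (both names are placeholders here), a declarative apply will only stop for confirmation when the analysis raises a warning or worse:

# Applies without prompting when no diagnostics are triggered;
# asks for review if any WARNING-level changes are detected
atlas schema apply --env local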

You can see a live demonstration of this feature towards the end of our recent HashiCorp conference talk.

Postgres Sequences

BETA FEATURE

Sequences are currently in beta and available to logged-in users only. To use this feature, run:

atlas login

The sequence block allows defining a sequence number generator. It is currently supported on PostgreSQL.

Note that a sequence block is printed by Atlas on inspection, or may be manually defined in the schema, only if it represents a PostgreSQL sequence that is not implicitly created by the database for identity or serial columns.

# Simple sequence with default values.
sequence "s1" {
  schema = schema.public
}

# Sequence with custom configuration.
sequence "s2" {
  schema    = schema.public
  type      = smallint
  start     = 100
  increment = 2
  min_value = 100
  max_value = 1000
}

# Sequence that is owned by a column.
sequence "s3" {
  schema  = schema.public
  owner   = table.t2.column.id
  comment = "Sequence with column owner"
}

# The sequences created by this table are not printed on inspection.
table "users" {
  schema = schema.public
  column "id" {
    type = int
    identity {
      generated = ALWAYS
      start     = 10000
    }
  }
  column "serial" {
    type = serial
  }
  primary_key {
    columns = [column.id]
  }
}

table "t2" {
  schema = schema.public
  column "id" {
    type = int
  }
}

schema "public" {
  comment = "standard public schema"
}

Wait, there's more!

A few other notable features shipped in this release are:

  • Analyzers for detecting blocking enum changes on MySQL. Certain kinds of changes to enum columns on MySQL tables change the column type and require a table copy. During this process, the table is locked for write operations, which can cause application downtime.

    Atlas now ships with analyzers that can detect such changes and warn the user before applying them. For more information see the documentation for analyzers MY111, MY112 and MY113.

  • The external data source - Allows the execution of an external program and uses its output in the project.

    For example:

    atlas.hcl
    data "external" "dot_env" {
    program = [
    "npm",
    "run",
    "load-env.js"
    ]
    }

    locals {
    dot_env = jsondecode(data.external.dot_env)
    }

    env "local" {
    src = local.dot_env.URL
    dev = "docker://mysql/8/dev"
    }
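
    The external program runs each time the project file is evaluated, so any command that uses this env picks up fresh values, for example:

    atlas schema apply --env local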

Wrapping up

That's it! I hope you try out (and enjoy) all of these new features and find them useful. As always, we would love to hear your feedback and suggestions on our Discord server.

Announcing Automatic Migrations for Hibernate Users

· 9 min read
Dor Avraham
TL;DR

You can now import the desired database schema from your Hibernate project into Atlas, and use it to automatically plan migrations for you.

See an example

Introduction

Hibernate-ORM is one of the most popular ORMs for Java, so much so that parts of it have evolved into the JPA standard and the Jakarta APIs.

Today, we are excited to announce that Atlas now supports loading and managing Hibernate schemas.

Atlas is a modern tool for managing your database schema. It allows you to inspect, plan, lint and apply schema changes to your database. It is designed to be used by developers, DBAs and DevOps engineers alike.

By using Atlas, Hibernate users can now enjoy these benefits:

  • A declarative migration flow - Atlas can operate like a "Terraform for databases": by running atlas schema apply, the application schema is applied to a target database.
  • Automatic schema migration planning - Alternatively, Atlas can operate using a more traditional versioned migration flow. However, contrary to most tools, Atlas will automatically analyze the diff between the migration directory and the current application data model, and will produce correct and safe SQL migration files.
  • CI for schema changes - Atlas can be used during CI to make sure you never merge a pull request that will break your database schema.
  • Modern CD integrations - Atlas integrates seamlessly with modern deployment tools such as Kubernetes, Terraform, Helm, Flux, and ArgoCD. This allows you to deploy changes to your database schema as part of your existing deployment pipelines.
  • Visualization - Atlas users can create beautiful, shareable ERDs of their application data model with a single command.
  • .. and much more (read more about Atlas features).
note

Hibernate support is currently in Beta and we would love to hear your feedback ❤️. Please reach out to us on Discord or by opening an issue.

Integrating Atlas into your Hibernate project

Hibernate ships with an automatic schema management tool called hbm2ddl. Similarly to Atlas, this tool can inspect a target database and automatically migrate the schema to the desired one. However, the Hibernate team has been advising for years not to use this tool in production:

Although the automatic schema generation is very useful for testing and prototyping purposes, in a production environment, it’s much more flexible to manage the schema using incremental migration scripts.

This is where Atlas comes in. Atlas can read the Hibernate schema and plan database schema migrations.

How does it work?

Atlas compares two database schema states and plans a migration to get from one state to the other. The database schema can be read directly from Hibernate, a migration directory, a database connection, or another ORM.

To read the Hibernate schema, Atlas utilizes the concept of an external_schema datasource.

Demo Time

For this demo, we are going to use Gradle, PostgreSQL and this example project.

Installation

If you haven't already, install the latest version of Atlas:

To download and install the latest release of the Atlas CLI, simply run the following in your terminal:

curl -sSf https://atlasgo.sh | sh

Add the hibernate-provider to your project via Gradle or Maven; for example, with Gradle:

plugins {
  id("io.atlasgo.hibernate-provider-gradle-plugin") version "0.1"
}

To check the installation, run: ./gradlew help --task schema

Configuration

The plugin adds a configurable Gradle task (or a Maven goal) that prints the Hibernate schema without requiring a database connection. However, the task needs to be configured with the database dialect. We can do this by creating a schema-export.properties file in the resource directory. For example, for MySQL:

jakarta.persistence.database-product-name=MySQL
jakarta.persistence.database-major-version=8
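
Before wiring this into Atlas, you can sanity-check the export by invoking the task directly; this is the same invocation the external_schema datasource below uses:

./gradlew -q schema --properties schema-export.properties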

Lastly, we need to configure Atlas to use this configuration by creating an atlas.hcl file and adding the definition of the Hibernate schema:

atlas.hcl
data "external_schema" "hibernate" {
program = [
"./gradlew",
"-q",
"schema",
"--properties", "schema-export.properties"
]
}

And the Atlas configuration:

atlas.hcl
env "hibernate" {
src = data.external_schema.hibernate.url
dev = "docker://mysql/8/dev"
migration {
dir = "file://migrations"
}
format {
migrate {
diff = "{{ sql . \" \" }}"
}
}
}

Running Atlas

We should now be able to view our schema using Atlas:

atlas schema inspect -w --env hibernate --url env://src

The -w flag opens the inspected schema in Atlas Cloud, where it is visualized as an ERD.

Atlas has many more features we can explore. Let's create a migration directory from our schema:

atlas migrate diff --env hibernate

By running atlas migrate diff, Atlas compares the state of our Hibernate schema with the state of the schema in the migration directory. Atlas sees that the migration directory does not exist and initializes it with the current Hibernate schema. Observe the migration directory; it should contain a file similar to this:

-- Create "movies" table
-- Create "movies" table
CREATE TABLE "movies" (
"id" bigserial NOT NULL,
"numberinseries" integer NULL,
"title" character varying(255) NULL,
PRIMARY KEY ("id")
);
-- Create "actors" table
CREATE TABLE "actors" (
"name" character varying(255) NOT NULL,
PRIMARY KEY ("name")
);
-- Create "movieparticipation" table
CREATE TABLE "movieparticipation" (
"actorname" character varying(255) NOT NULL,
"movieid" bigint NOT NULL,
PRIMARY KEY ("actorname", "movieid"),
CONSTRAINT "fkaq2kkwvh9870847sm35vtjtiy" FOREIGN KEY ("movieid") REFERENCES "movies" ("id") ON UPDATE NO ACTION ON DELETE NO ACTION,
CONSTRAINT "fktm8fbwa577lnbvwdjegwxvget" FOREIGN KEY ("actorname") REFERENCES "actors" ("name") ON UPDATE NO ACTION ON DELETE NO ACTION
);

Atlas uses the atlas.sum file to protect against conflicting schema changes; you can read about it here.
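
If you ever edit a migration file by hand (as in the sequence example below), recompute the directory hash so Atlas accepts the change:

atlas migrate hash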

note

Currently, Atlas does not support using generated fields that require data initialization, such as GenerationType.SEQUENCE, GenerationType.TABLE, and GenerationType.AUTO.

If needed, you can still export the schema using the flag --enable-table-generators (or -Denable-table-generators using Maven). When applying the schema to your database, you will need to make sure to apply the ignored statements (using atlas migrate diff --env hibernate --edit). See more information on manual migrations here.

For example, if you are adding GenerationType.SEQUENCE to the Event entity, you will need to add insert statements to your generated migration file:

diff --git a/migrations/20231210140844.sql b/examples/with_local_plugin_repository/migrations/20231210140844.sql
index ad80a64..5955834 100644
--- a/migrations/20231210140844.sql
+++ b/migrations/20231210140844.sql
@@ -4,3 +4,6 @@ CREATE TABLE `Event` (`id` bigint NOT NULL AUTO_INCREMENT, `title` varchar(255)
-- Create "Event_SEQ" table
CREATE TABLE `Event_SEQ` (`next_val` bigint NULL) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
+ -- Initialize "Event_SEQ" table
+ insert into Event_SEQ values ( 1 );

Testing these changes can be done by running the application with a local database and creating the entity. To apply the migration directory to the local database, use atlas migrate apply.

Testing the migrations

Now that our migration directory is ready, let's see how to apply it to a target database. Let's start a local PostgreSQL instance:

docker run -it --rm --name mypostgres -p 5432:5432 -e 'POSTGRES_PASSWORD=password' postgres

Next, let's apply our migrations to the database:

atlas migrate apply --env hibernate --url 'postgres://postgres:password@localhost:5432/?search_path=public&sslmode=disable'

Atlas provides details on the applied migrations:

Migrating to version 20231211121102 (1 migrations in total):

  -- migrating version 20231211121102
    -> CREATE TABLE "movies" (
         "id" bigserial NOT NULL,
         "numberinseries" integer NULL,
         "title" character varying(255) NULL,
         PRIMARY KEY ("id")
       );
    -> CREATE TABLE "actors" (
         "name" character varying(255) NOT NULL,
         PRIMARY KEY ("name")
       );
    -> CREATE TABLE "movieparticipation" (
         "actorname" character varying(255) NOT NULL,
         "movieid" bigint NOT NULL,
         PRIMARY KEY ("actorname", "movieid"),
         CONSTRAINT "fkaq2kkwvh9870847sm35vtjtiy" FOREIGN KEY ("movieid") REFERENCES "movies" ("id") ON UPDATE NO ACTION ON DELETE NO ACTION,
         CONSTRAINT "fktm8fbwa577lnbvwdjegwxvget" FOREIGN KEY ("actorname") REFERENCES "actors" ("name") ON UPDATE NO ACTION ON DELETE NO ACTION
       );
  -- ok (9.282079ms)

  -------------------------
  -- 54.100203ms
  -- 1 migrations
  -- 3 sql statements

To confirm the migrations were applied, we can use Atlas to inspect the database. Run the following command:

atlas schema inspect -w --env hibernate --url 'postgres://postgres:password@localhost:5432/?search_path=public&sslmode=disable'

Making changes with confidence

Atlas ships with a static code analysis engine that can detect risky schema changes during development or Continuous Integration. This functionality is exposed to users via the migrate lint command. Let's demonstrate this capability with an example.

Suppose we make the following change:

--- a/src/main/java/org/example/Movie.java
+++ b/src/main/java/org/example/Movie.java
@@ -10,13 +10,10 @@ public class Movie {

Movie(String title, Integer numberInSeries) {
this.title = title;
- this.numberInSeries = numberInSeries;
}
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
public Long id;

public String title;
-
- public Integer numberInSeries;
}

By removing an attribute from the Movie model, we are removing a column from the database schema. Let's see how Atlas handles this change. Run atlas migrate diff --env hibernate, and observe the new file in the migration directory:

cat migrations/20231211124321.sql
-- Modify "movies" table
ALTER TABLE "movies" DROP COLUMN "numberinseries";

While this change may be desired, it is an irreversible operation that should be done with caution. Atlas can help us avoid dangerous schema changes by linting the migration directory and not allowing such a change to get merged.

Running the following command, we can see that Atlas will warn us about a destructive change to the database:

atlas migrate lint --env hibernate --latest 1

20231211124321.sql: destructive changes detected:
L2: Dropping non-virtual column "numberinseries"

Running migrate lint locally during development can be very useful, but linting becomes much more powerful when you integrate it into your Continuous Integration pipeline. Atlas offers a set of GitHub Actions designed to make setting this up a breeze.

Conclusion

In this post, we have presented how Hibernate projects can use Atlas to automatically plan, lint and apply schema migrations based only on their data model.

If you want to explore more configuration options or dive deeper into how this works, please take a look at this repository.

How can we make Atlas better?

We would love to hear from you on our Discord server ❤️.