Automatic Schema Migrations for GORM (Program Mode)
This document describes how to set up the GORM Atlas Provider to load your GORM schema into Atlas using Go Program Mode. Go Program Mode is intended for more advanced scenarios where you need finer control over which structs are treated as models.
In this mode, you load your GORM schema into Atlas by writing a Go program that imports your GORM models and uses the provider as a library to generate the schema.
If all of your GORM models live in a single package and either embed gorm.Model or contain gorm struct tags, consider using the Standalone Mode instead.
Installation
- Install Atlas on macOS or Linux by running:
curl -sSf https://atlasgo.sh | sh
See atlasgo.io for more installation options.
- Install the provider by running:
go get -u ariga.io/atlas-provider-gorm
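To confirm the Atlas CLI is installed and on your PATH, you can print its version:
atlas version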
Setup
In Go Program Mode, you can use the provider as a library in your Go program to load your GORM schema into Atlas.
- Create a new program named loader/main.go with the following contents:
package main

import (
    "fmt"
    "io"
    "os"

    "ariga.io/atlas-provider-gorm/gormschema"

    "github.com/<yourorg>/<yourrepo>/path/to/models"
)

func main() {
    stmts, err := gormschema.New("mysql").Load(&models.User{})
    if err != nil {
        fmt.Fprintf(os.Stderr, "failed to load gorm schema: %v\n", err)
        os.Exit(1)
    }
    io.WriteString(os.Stdout, stmts)
}
Be sure to replace github.com/<yourorg>/<yourrepo>/path/to/models with the import path to your GORM models. In addition, replace the model types (e.g., models.User) with the types of your own GORM models.
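Once the import path and model types are updated, you can sanity-check the loader before wiring it into Atlas; running it directly should print the DDL statements for your models to stdout:
go run -mod=mod ./loader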
- In your project directory, create a new file named atlas.hcl with the following contents:
data "external_schema" "gorm" {
program = [
"go",
"run",
"-mod=mod",
"./loader",
]
}
env "gorm" {
src = data.external_schema.gorm.url
dev = "docker://mysql/8/dev"
migration {
dir = "file://migrations"
}
format {
migrate {
diff = "{{ sql . \" \" }}"
}
}
}
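The configuration above targets MySQL. If your project uses PostgreSQL instead, a minimal variant (assuming the standard Atlas dev-database image) passes "postgres" to gormschema.New in the loader and points the dev attribute at a PostgreSQL dev database:
# atlas.hcl: use a PostgreSQL dev database instead of MySQL.
dev = "docker://postgres/15/dev?search_path=public"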
Verify Setup
Next, let's verify that Atlas is able to read our desired schema (the GORM models) by running the schema inspect command:
atlas schema inspect --env gorm --url "env://src"
Notice that this command uses env://src as the target URL for inspection, meaning "the schema represented by the src attribute of the gorm environment block."
Given a simple GORM User model:
type User struct {
    gorm.Model
    Name string
    Age  int
}
We should get the following output after running the inspect command above:
table "users" {
schema = schema.dev
column "id" {
null = false
type = bigint
unsigned = true
auto_increment = true
}
column "created_at" {
null = true
type = datetime(3)
}
column "updated_at" {
null = true
type = datetime(3)
}
column "deleted_at" {
null = true
type = datetime(3)
}
column "name" {
null = true
type = longtext
}
column "age" {
null = true
type = bigint
}
primary_key {
columns = [column.id]
}
index "idx_users_deleted_at" {
columns = [column.deleted_at]
}
}
schema "dev" {
charset = "utf8mb4"
collate = "utf8mb4_0900_ai_ci"
}
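If you prefer to review the desired state as SQL rather than Atlas HCL, the same inspection can be rendered with the --format flag:
atlas schema inspect --env gorm --url "env://src" --format '{{ sql . }}'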
Usage
Now that your project is set up, choose between the two workflows offered by Atlas for generating and planning migrations:
- Versioned Migrations: Set up a migration directory for your project, creating a version-controlled source of truth of your database schema.
- Declarative Migrations: Set up a Terraform-like workflow where each migration is calculated as the diff between your desired state and the current state of the database.
Getting Started with the Versioned Workflow
Using the atlas migrate diff command, you can automatically generate SQL migration files based on changes made to your GORM models, which you can then integrate with GORM's migration system.
Suppose you have the following GORM models across different packages in your project:
package models

import (
    "github.com/yourorg/yourrepo/blog"

    "gorm.io/gorm"
)

type User struct {
    gorm.Model
    Name  string      `gorm:"size:255;not null"`
    Email string      `gorm:"size:255;uniqueIndex;not null"`
    Posts []blog.Post `gorm:"foreignKey:UserID"`
}
package blog

import (
    "gorm.io/gorm"
)

type Post struct {
    gorm.Model
    Title   string `gorm:"not null"`
    Content string `gorm:"type:text"`
    UserID  uint   `gorm:"not null"`
}
Update your loader/main.go to include all of your models:
package main

import (
    "fmt"
    "io"
    "os"

    "ariga.io/atlas-provider-gorm/gormschema"

    "github.com/yourorg/yourrepo/blog"
    "github.com/yourorg/yourrepo/models"
)

func main() {
    stmts, err := gormschema.New("mysql").Load(
        &models.User{},
        &blog.Post{},
    )
    if err != nil {
        fmt.Fprintf(os.Stderr, "failed to load gorm schema: %v\n", err)
        os.Exit(1)
    }
    io.WriteString(os.Stdout, stmts)
}
Using the Go Program Mode configuration file created above, generate a migration file by running:
atlas migrate diff --env gorm
This will generate a migration file in the migrations directory, similar to this:
migrations
├── 20250819061933.sql
└── atlas.sum
1 directory, 2 files
Examining the contents of 20250819061933.sql:
-- Create "users" table
CREATE TABLE `users` (
`id` bigint unsigned NOT NULL AUTO_INCREMENT,
`created_at` datetime(3) NULL,
`updated_at` datetime(3) NULL,
`deleted_at` datetime(3) NULL,
`name` varchar(255) NOT NULL,
`email` varchar(255) NOT NULL,
PRIMARY KEY (`id`),
INDEX `idx_users_deleted_at` (`deleted_at`),
UNIQUE INDEX `idx_users_email` (`email`)
) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
-- Create "posts" table
CREATE TABLE `posts` (
`id` bigint unsigned NOT NULL AUTO_INCREMENT,
`created_at` datetime(3) NULL,
`updated_at` datetime(3) NULL,
`deleted_at` datetime(3) NULL,
`title` longtext NOT NULL,
`content` text NULL,
`user_id` bigint unsigned NOT NULL,
PRIMARY KEY (`id`),
INDEX `fk_users_posts` (`user_id`),
INDEX `idx_posts_deleted_at` (`deleted_at`),
CONSTRAINT `fk_users_posts` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON UPDATE NO ACTION ON DELETE NO ACTION
) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
Atlas automatically generated a migration file that will create the users and posts tables in your database.
Next, let's alter the User model to add a new field:
type User struct {
    gorm.Model
    Name  string      `gorm:"size:255;not null"`
    Email string      `gorm:"size:255;uniqueIndex;not null"`
    Bio   string      `gorm:"type:text"`
    Posts []blog.Post `gorm:"foreignKey:UserID"`
}
Create a new migration file by running the same command:
atlas migrate diff --env gorm
The new file in the migrations directory contains the following SQL:
-- Modify "users" table
ALTER TABLE `users` ADD COLUMN `bio` text NULL;
Next Steps
Follow our Versioned Migrations docs for applying the generated migration files to your database and learning more about using this workflow.
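As a quick preview, applying the pending migration files to a target database with the gorm environment looks like this (the connection URL is a placeholder for your own database):
atlas migrate apply --env gorm --url "mysql://root:password@localhost:3306/mydb"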
Getting Started with the Declarative Workflow
Using the atlas schema apply command, Atlas will plan and apply the changes directly to your target database based on the current state of your GORM schema. Atlas will prompt you to confirm the migration plan before applying it to the database.
To apply your schema changes declaratively, run:
atlas schema apply --env gorm -u "mysql://root:password@localhost:3306/mydb"
Here, the -u flag accepts the URL of the target database.
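In CI or other non-interactive environments, the confirmation prompt can be skipped with the --auto-approve flag (use it with care, since changes are applied without review):
atlas schema apply --env gorm -u "mysql://root:password@localhost:3306/mydb" --auto-approve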
Advanced Usage: Custom GORM Configuration
To supply a custom gorm.Config{} object to the provider in Go Program Mode, use the WithConfig option. For example, to disable foreign key constraints:
loader := New("sqlite", WithConfig(
&gorm.Config{
DisableForeignKeyConstraintWhenMigrating: true,
},
))
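In the loader program above, this might look like the following sketch; it assumes the same models and blog packages and an additional import of gorm.io/gorm for gorm.Config:
// Build the loader with a custom gorm.Config before loading the models.
stmts, err := gormschema.New("mysql", gormschema.WithConfig(&gorm.Config{
    DisableForeignKeyConstraintWhenMigrating: true,
})).Load(&models.User{}, &blog.Post{})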
Next Steps
Follow our Declarative Migrations docs to learn more about using this workflow.
Going Further
Once you have Atlas integrated with your GORM project, consider exploring these additional features:
- Set up CI/CD: Automate your schema migrations using GitHub Actions, GitLab CI, or other CI platforms.
- Enforce Migration Safety: Use Atlas's migration policies to catch potentially dangerous migration operations before they reach production (see the example below).
- Drift Detection & Schema Monitoring: Monitor your production databases for schema drift and unauthorized changes.
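For example, you could lint the most recent migration file against the dev database defined in the gorm environment (the --latest flag controls how many of the newest files are analyzed):
atlas migrate lint --env gorm --latest 1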