Using Custom Project Configuration in Atlas Operator
Introduction
Project configuration is a core concept in Atlas that allows you to manage your database schema in a more advanced way. In the Atlas Operator, you can inject a project configuration to customize the Operator's behavior.
How it works
By providing a config or configFrom field in the Atlas resources,
the Atlas Operator will use the provided configuration to override its default behavior.
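As a minimal illustration (the resource name and environment body here are placeholders, not a complete working example), an inline configuration is injected like this:

```yaml
apiVersion: db.atlasgo.io/v1alpha1
kind: AtlasSchema
metadata:
  name: example
spec:
  envName: custom-env
  config: |
    env "custom-env" {
      # any project configuration supported in atlas.hcl
    }
```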
Guide: Using composite_schema with custom configurations
In this guide, we will demonstrate how to use a custom configuration to set up a composite schema.
Step 1: Install the Operator with allowCustomConfig enabled
Install the Atlas Operator using Helm:
```shell
helm install atlas-operator oci://ghcr.io/ariga/charts/atlas-operator --set allowCustomConfig=true
```
The allowCustomConfig flag is set to true to allow custom configuration to be injected into the Operator.
Step 2: Provision a local database
Next, we will install a PostgreSQL database to manage using the Atlas Operator:
```shell
kubectl apply -f https://gist.githubusercontent.com/rotemtam/a7489d7b019f30aff7795566debbedcc/raw/53bac2b9d18577fed9e858642092a7f4bcc44a60/db.yaml
```
This command will install a few resources in your cluster:
- A Deployment for the PostgreSQL database running the postgres:latest image.
- A Service to expose the database to the cluster.
- A Secret containing the database credentials in the Atlas URL format.
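For reference, the credentials Secret is expected to expose the connection string under the url key. A hypothetical equivalent (the password and host values are placeholders) might look like:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: postgres-credentials
stringData:
  url: "postgres://postgres:pass@postgres.default:5432/postgres?sslmode=disable"
```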
Step 3: Create a secret with the Atlas token
The custom configuration feature requires logging in to Atlas. Let's create a secret with the token:
```shell
kubectl create secret generic atlas-token --from-literal=ATLAS_TOKEN="your-atlas-token"
```
This secret will be used to log in to Atlas Cloud.
Step 4: Create the AtlasSchema resource
```yaml
apiVersion: db.atlasgo.io/v1alpha1
kind: AtlasSchema
metadata:
  name: atlasschema-pg
spec:
  urlFrom:
    secretKeyRef:
      key: url
      name: postgres-credentials
  cloud:
    tokenFrom:
      secretKeyRef:
        key: ATLAS_TOKEN
        name: atlas-token
  envName: custom-env
  config: |
    data "external_schema" "users" {
      program = [
        "echo",
        "CREATE TABLE users (id INT, name TEXT);"
      ]
    }
    data "external_schema" "posts" {
      program = [
        "echo",
        "CREATE TABLE posts (id INT, user_id INT, title TEXT);"
      ]
    }
    data "composite_schema" "default" {
      schema "public" {
        url = data.external_schema.users.url
      }
      schema "public" {
        url = data.external_schema.posts.url
      }
    }
    env "custom-env" {
      schema {
        src = data.composite_schema.default.url
      }
    }
```
The custom configuration defines two external schemas, users and posts, and a composite_schema block that combines them.
envName is a required field in the AtlasSchema resource; it references the environment defined in the custom configuration.
Step 5: Apply the schema and verify the result
Save the manifest above as atlas-schema.yaml and apply it:
```shell
kubectl apply -f atlas-schema.yaml
```
You can verify the result by querying the database to list the tables:
```shell
kubectl exec -it $(kubectl get pods -l app=postgres -o jsonpath='{.items[0].metadata.name}') -- psql -U postgres -d postgres -c "\dt"
```
You should see both the users and posts tables created:
```
        List of relations
 Schema | Name  | Type  |  Owner
--------+-------+-------+----------
 public | posts | table | postgres
 public | users | table | postgres
```
Guide: Using IAM Authentication with AWS RDS
This guide demonstrates how to configure the Atlas Operator to connect to an AWS RDS database using IAM Authentication instead of static credentials.
Prerequisites
Before you begin, ensure you have:
- IAM Authentication enabled on your RDS instance. See the AWS documentation.
- A database user configured for IAM authentication. See Creating a database account using IAM authentication.
- An IAM role with the rds-db:connect permission attached to the Atlas Operator's service account. See Creating and using an IAM policy for IAM database access.
- The Atlas Operator installed with allowCustomConfig=true.
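As a reference point, a minimal IAM policy granting the rds-db:connect permission could look like the following (the region, account ID, DB resource ID, and user name are placeholders you must replace with your own):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "rds-db:connect",
      "Resource": "arn:aws:rds-db:us-east-1:123456789012:dbuser:db-RESOURCEID/iamuser"
    }
  ]
}
```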
Step 1: Install the Operator with allowCustomConfig enabled
```shell
helm install atlas-operator oci://ghcr.io/ariga/charts/atlas-operator --set allowCustomConfig=true
```
Step 2: Create a secret with the Atlas token
```shell
kubectl create secret generic atlas-token --from-literal=ATLAS_TOKEN="your-atlas-token"
```
Step 3: Create the AtlasMigration resource with IAM authentication
When using IAM authentication, you define the database URL inside the config block using the
aws_rds_token data source. This generates a short-lived
token that Atlas uses to authenticate with RDS.
PostgreSQL:
```yaml
apiVersion: db.atlasgo.io/v1alpha1
kind: AtlasMigration
metadata:
  name: atlas-migration
spec:
  envName: "prod"
  cloud:
    tokenFrom:
      secretKeyRef:
        name: atlas-token
        key: ATLAS_TOKEN
  dir:
    remote:
      name: "myapp"
  config: |
    data "aws_rds_token" "db" {
      endpoint = "mydb.xxx.us-east-1.rds.amazonaws.com:5432"
      region   = "us-east-1"
      username = "iamuser"
    }
    env {
      name = atlas.env
      url  = "postgres://iamuser:${urlescape(data.aws_rds_token.db)}@mydb.xxx.us-east-1.rds.amazonaws.com:5432/mydb?sslmode=require"
    }
```
MySQL:
```yaml
apiVersion: db.atlasgo.io/v1alpha1
kind: AtlasMigration
metadata:
  name: atlas-migration
spec:
  envName: "prod"
  cloud:
    tokenFrom:
      secretKeyRef:
        name: atlas-token
        key: ATLAS_TOKEN
  dir:
    remote:
      name: "myapp"
  config: |
    data "aws_rds_token" "db" {
      endpoint = "mydb.xxx.us-east-1.rds.amazonaws.com:3306"
      region   = "us-east-1"
      username = "iamuser"
    }
    env {
      name = atlas.env
      url  = "mysql://iamuser:${urlescape(data.aws_rds_token.db)}@mydb.xxx.us-east-1.rds.amazonaws.com:3306/mydb?tls=true&allowCleartextPasswords=1"
    }
```
Key points
- `name = atlas.env`: This creates an unnamed environment block that dynamically takes its name from the `envName` field in the resource spec. This allows you to reuse the same configuration across multiple environments.
- No `credentials` block: When the `config` block defines the database `url` inside the `env` block, you should not use the `credentials`, `url`, or `urlFrom` fields in the resource spec.
- `urlescape()`: The IAM token contains special characters that must be URL-encoded when used in the connection string.
Using variables for dynamic configuration
If you deploy to multiple environments with different RDS endpoints, you can use the vars field to inject
values into your configuration:
```yaml
apiVersion: db.atlasgo.io/v1alpha1
kind: AtlasMigration
metadata:
  name: atlas-migration
spec:
  envName: "prod"
  cloud:
    tokenFrom:
      secretKeyRef:
        name: atlas-token
        key: ATLAS_TOKEN
  dir:
    remote:
      name: "myapp"
  vars:
    - key: "endpoint"
      value: "mydb.xxx.us-east-1.rds.amazonaws.com:5432"
    - key: "region"
      value: "us-east-1"
    - key: "username"
      value: "iamuser"
    - key: "database"
      value: "mydb"
  config: |
    variable "endpoint" {
      type = string
    }
    variable "region" {
      type = string
    }
    variable "username" {
      type = string
    }
    variable "database" {
      type = string
    }
    data "aws_rds_token" "db" {
      endpoint = var.endpoint
      region   = var.region
      username = var.username
    }
    env {
      name = atlas.env
      url  = "postgres://${var.username}:${urlescape(data.aws_rds_token.db)}@${var.endpoint}/${var.database}?sslmode=require"
    }
```
This approach works well with Helm, where you can template the vars values from your Helm values file.
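For instance, assuming a hypothetical values.yaml with a db section (all names here are illustrative), the vars stanza of a chart template could be rendered like this:

```yaml
# values.yaml (hypothetical)
# db:
#   endpoint: mydb.xxx.us-east-1.rds.amazonaws.com:5432
#   region: us-east-1
#   username: iamuser
#   database: mydb

# templates/atlas-migration.yaml (excerpt)
vars:
  - key: "endpoint"
    value: {{ .Values.db.endpoint | quote }}
  - key: "region"
    value: {{ .Values.db.region | quote }}
  - key: "username"
    value: {{ .Values.db.username | quote }}
  - key: "database"
    value: {{ .Values.db.database | quote }}
```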
Guide: Using IAM Authentication with GCP Cloud SQL
Similarly, you can use GCP Cloud SQL IAM authentication
with the Atlas Operator using the gcp_cloudsql_token
data source.
Prerequisites
- IAM Authentication enabled on your Cloud SQL instance. See the GCP documentation.
- A database user configured for IAM authentication. See Add an IAM user or service account to the database.
- Workload Identity configured for the Atlas Operator's service account to authenticate with GCP.
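For reference, binding the Operator's Kubernetes service account to a GCP service account with Workload Identity typically involves an annotation like the following (the service account name and project ID are placeholders):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: atlas-operator
  annotations:
    iam.gke.io/gcp-service-account: atlas-operator@my-project.iam.gserviceaccount.com
```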
Example configuration
PostgreSQL:
```yaml
apiVersion: db.atlasgo.io/v1alpha1
kind: AtlasMigration
metadata:
  name: atlas-migration
spec:
  envName: "prod"
  cloud:
    tokenFrom:
      secretKeyRef:
        name: atlas-token
        key: ATLAS_TOKEN
  dir:
    remote:
      name: "myapp"
  config: |
    data "gcp_cloudsql_token" "db" {}
    env {
      name = atlas.env
      url  = "postgres://iamuser:${urlescape(data.gcp_cloudsql_token.db)}@10.0.0.1:5432/mydb?sslmode=require"
    }
```
MySQL:
```yaml
apiVersion: db.atlasgo.io/v1alpha1
kind: AtlasMigration
metadata:
  name: atlas-migration
spec:
  envName: "prod"
  cloud:
    tokenFrom:
      secretKeyRef:
        name: atlas-token
        key: ATLAS_TOKEN
  dir:
    remote:
      name: "myapp"
  config: |
    data "gcp_cloudsql_token" "db" {}
    env {
      name = atlas.env
      url  = "mysql://iamuser:${urlescape(data.gcp_cloudsql_token.db)}@10.0.0.1:3306/mydb?allowCleartextPasswords=1&tls=skip-verify&parseTime=true"
    }
```
For MySQL, the `allowCleartextPasswords` and `tls` parameters are required for IAM authentication with Cloud SQL.
For PostgreSQL, use `sslmode=require`.