Connect to your database
In this section, we perform read-only operations and make no changes to your database.
With the Atlas CLI installed and logged in to your account, let's make sure the CLI can connect to your database.
Step 1: Determine the database's URL
Atlas uses a standard URL format to connect to databases and load schemas and migrations from various sources. The format below covers the supported parts of a URL, with subsequent sections providing more detailed examples.
driver://[username[:password]@]address/[schema|database][?param1=value1&...&paramN=valueN]
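As an illustration only (this is not Atlas's own parser), Python's standard `urllib.parse` can decompose a URL of this shape into its parts. The example URL is a placeholder:

```python
from urllib.parse import urlsplit, parse_qs

# A hypothetical example URL following the format above.
url = "postgres://user:pass@localhost:5432/database?search_path=public&sslmode=disable"

parts = urlsplit(url)
print(parts.scheme)           # driver            -> postgres
print(parts.username)         # username          -> user
print(parts.password)         # password          -> pass
print(parts.hostname)         # address (host)    -> localhost
print(parts.port)             # address (port)    -> 5432
print(parts.path)             # schema|database   -> /database
print(parse_qs(parts.query))  # query parameters  -> {'search_path': ['public'], 'sslmode': ['disable']}
```

Each piece of the template maps onto a standard RFC 3986 URL component, which is why the troubleshooting advice below about percent-encoding reserved characters applies.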
Select your database below for the connection URL format. For the full reference, see the URL Docs.
- PostgreSQL
- CockroachDB
- MySQL
- MariaDB
- SQL Server
- ClickHouse
- Redshift
- Aurora DSQL
- Oracle
- Spanner
- Snowflake
- Databricks
- Docker
- SQLite
Connecting to a local PostgreSQL database named database (all schemas):
postgres://localhost:5432/database
Connecting to a specific PostgreSQL schema named public:
postgres://localhost:5432/database?search_path=public
Connecting to a local PostgreSQL with credentials and SSL disabled:
postgres://postgres:pass@localhost:5432/database?search_path=public&sslmode=disable
Connecting to a CockroachDB Cloud cluster:
crdb://user:pass@<cluster-name>.<region>.cockroachlabs.cloud:26257/defaultdb?sslmode=verify-full
Connecting to a specific CockroachDB schema:
crdb://user:pass@<cluster-name>.<region>.cockroachlabs.cloud:26257/database?search_path=public&sslmode=verify-full
Connecting to a local CockroachDB instance:
crdb://root@localhost:26257/defaultdb?sslmode=disable
Connecting to a local MySQL server (all schemas/databases):
mysql://localhost:3306/
Connecting to a specific MySQL schema (database) with a username and password:
mysql://user:pass@localhost:3306/schema
Connecting using Unix Sockets:
mysql+unix:///tmp/mysql.sock
mysql+unix://user:pass@/tmp/mysql.sock
mysql+unix://user@/tmp/mysql.sock?database=dbname
Connecting to a local MariaDB server (all schemas/databases):
maria://localhost:3306/
Connecting to a specific MariaDB schema (database) with a username and password:
maria://user:pass@localhost:3306/schema
Connecting using Unix Sockets:
maria+unix:///tmp/mysql.sock
maria+unix://user:pass@/tmp/mysql.sock
maria+unix://user@/tmp/mysql.sock?database=dbname
Connecting to the default schema of the current user:
sqlserver://sa:P@ssw0rd0995@localhost:1433?database=master&mode=schema
Connecting to a local SQL Server database named master (all schemas). The user needs the db_owner role:
sqlserver://sa:P@ssw0rd0995@localhost:1433?database=master&mode=database
Azure Active Directory (AAD) authentication:
Use the fedauth parameter to specify the AAD authentication method. For more information, see the document on the underlying driver.
azuresql://<instance>.database.windows.net?fedauth=ActiveDirectoryDefault&database=master
- The `mode` parameter is Atlas-specific and isn't used for opening the underlying connection.
- The default `mode` is `schema`.
- The `azuresql` scheme is used for AAD authentication with Azure SQL Database and Azure SQL Managed Instance.
Connecting to the Oracle Pluggable Database (PDB) named FREEPDB1 and managing all tables of the login user PDBADMIN:
oracle://PDBADMIN:Pssw0rd0995@localhost:1521/FREEPDB1
If you want to manage all schemas in the Oracle Pluggable Database (PDB),
you can use the mode parameter to specify the scope of the connection.
oracle://PDBADMIN:Pssw0rd0995@localhost:1521/FREEPDB1?mode=database
- The `mode` parameter is Atlas-specific and isn't used for opening the underlying connection.
- The default `mode` is `schema`.
- Ensure the account has the necessary privileges to manage schemas in the PDB:
GRANT CONNECT, RESOURCE, SELECT_CATALOG_ROLE TO PDBADMIN;
-- Grant privileges to manage schemas in the PDB
GRANT CREATE USER, CREATE ANY TABLE, CREATE VIEW TO PDBADMIN;
GRANT ALTER USER, ALTER ANY TABLE TO PDBADMIN;
GRANT DROP USER, DROP ANY TABLE TO PDBADMIN;
-- Grant execute on DBMS_LOCK to create advisory locks
GRANT EXECUTE ON SYS.DBMS_LOCK TO PDBADMIN;
Connecting to a local SQLite database (file):
sqlite://file.db
Connecting to an in-memory SQLite database (ephemeral). Useful for --dev-url:
sqlite://file?mode=memory&_fk=1
Atlas also supports WebSocket connections to remote libsql databases:
libsql+ws://database-url # For local environments
libsql://database-url
Connecting to a local ClickHouse server (all schemas/databases):
clickhouse://localhost:9000
Connecting to a specific ClickHouse schema (database) with a username and password:
clickhouse://user:pass@localhost:9000/schema
Connecting to a specific ClickHouse schema with SSL enabled:
clickhouse://user:pass@localhost:9000/schema?secure=true
To connect to ClickHouse Cloud, use the native protocol port 9440 with SSL enabled:
clickhouse://user:pass@CLICKHOUSE-CLOUD-HOST:9440/schema?secure=true
Connecting to a specific Redshift cluster with a schema named public:
redshift://user:pass@redshift-cluster:5439/database?search_path=public
Connecting to a specific Redshift cluster with a schema named public with SSL disabled:
redshift://user:pass@redshift-cluster:5439/database?search_path=public&sslmode=disable
To connect to Redshift through the Data API, use the URL formats below. AWS credentials are required to connect to Redshift via the Data API. With this protocol, Atlas doesn't support selecting a schema in the URL; the schema is determined by the user's default schema. To bind the connection to a specific schema, use the following SQL command:
ALTER USER [username] SET search_path = [schema];
Connecting to Serverless via IAM Identity:
redshift+http://workgroup([workgroup-name])/[database]
Connecting to Serverless via Secret ARN:
redshift+http://[arn]@workgroup([workgroup-name])/[database]
Connecting to a provisioned Redshift cluster via IAM Identity:
redshift+http://cluster([cluster-name])/[database]
Connecting to a provisioned Redshift cluster with a database username:
redshift+http://[dbuser]@cluster([cluster-name])/[database]
Connecting to a provisioned Redshift cluster via Secret ARN:
redshift+http://[arn]@cluster([cluster-name])/[database]
- The default `mode` is `schema`.
- To change the connection to realm mode, use `mode=database`.
- Use `timeout=5m` to set the timeout for the HTTP client. Default is 5 minutes.
- Use `polling=50ms` to set the polling interval when fetching query results. Default is 50ms.
Connecting to an Aurora DSQL cluster:
dsql://admin:pass@cluster.dsql.us-east-1.on.aws/?sslmode=require
- Aurora DSQL requires SSL connections (`sslmode=require`).
- The `admin` user is the default DSQL user.
- The password is obtained from AWS DSQL token generation.
- Aurora DSQL is PostgreSQL-compatible but has significant limitations.
Connecting to a Spanner database on Google Cloud Platform (GCP):
spanner://projects/[project-id]/instances/[instance-id]/databases/[database-name]
You must be authenticated with GCP credentials to connect to Spanner.
Connecting to Snowflake database via user and password:
snowflake://<username>:<password>@<account_identifier>/<database>?warehouse=<warehouse>
Connecting to Snowflake schema via user and password:
snowflake://<username>:<password>@<account_identifier>/<database>/<schema>?warehouse=<warehouse>
Connecting to Snowflake database via Programmatic Access Token:
snowflake://<username>:<access_token>@<account_identifier>/<database>?warehouse=<warehouse>
Connecting to Snowflake schema via Programmatic Access Token:
snowflake://<username>:<access_token>@<account_identifier>/<database>/<schema>?warehouse=<warehouse>
Connecting to a Snowflake database via multi-factor authentication (MFA):
snowflake://<username>:<password><passcode>@<account_identifier>/<database>?warehouse=<warehouse>&authenticator=USERNAME_PASSWORD_MFA&passcodeInPassword=true
Connecting to Snowflake via Key-Pair authentication:
snowflake://<username>@<account_identifier>/<database>?warehouse=<warehouse>&authenticator=SNOWFLAKE_JWT&privateKey=<private_key>
To generate the <private_key> value:
1. Generate an RSA private key in PKCS#8 PEM format:
openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -out snowflake_rsa_key.p8 -nocrypt
2. Extract the public key and register it with your Snowflake user:
openssl rsa -in snowflake_rsa_key.p8 -pubout -out snowflake_rsa_key.pub
ALTER USER <username> SET RSA_PUBLIC_KEY='<content of snowflake_rsa_key.pub without header/footer>';
3. Encode the private key for the DSN. The <private_key> value should be the URL-safe base64-encoded PKCS#8 DER content of the key
(this is what the gosnowflake driver expects).
You can generate this value using the following Python script:
import base64

# Read the PEM file and extract the raw DER bytes
with open('snowflake_rsa_key.p8') as f:
    lines = f.readlines()

# Strip header/footer and decode standard base64
pem_body = ''.join(l.strip() for l in lines if not l.startswith('-----'))
der_bytes = base64.b64decode(pem_body)

# Encode as URL-safe base64 (what gosnowflake expects)
url_safe = base64.urlsafe_b64encode(der_bytes).decode()
print(url_safe)
For other authentication methods or advanced connection options, refer to the Snowflake DSN docs.
Explanation of the URL parameters:
- `<username>`: Your Snowflake account username.
- `<password>`: Your Snowflake account password.
- `<access_token>`: Your Snowflake Programmatic Access Token (found under Avatar → Settings → Authentication → Programmatic Access Tokens).
- `<account_identifier>`: Your Snowflake account identifier (found under Avatar → Connect a tool to Snowflake → Account Identifier).
- `<warehouse>`: The name of the Snowflake warehouse where Atlas executes DDLs (found under Admin → Warehouses).
- `<database>`: The name of the Snowflake database to connect to.
- `<schema>`: The name of the Snowflake schema to connect to.
- `<passcode>`: The passcode for MFA authentication. Use with the query params `authenticator=USERNAME_PASSWORD_MFA&passcodeInPassword=true`.
- `<private_key>`: The base64-encoded PKCS#8 DER content of the RSA private key (the content between the `BEGIN`/`END` markers in the `.p8` file, with newlines removed). Used with `authenticator=SNOWFLAKE_JWT`.
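Because usernames and passwords can contain URL-reserved characters, it can help to assemble the DSN programmatically. The sketch below is illustrative only; the account, warehouse, and credential values are placeholders, not real settings:

```python
from urllib.parse import quote

# Placeholder values -- substitute your own account details.
username = "ATLAS_USER"
password = "p@ss/word"          # contains URL-reserved characters
account = "myorg-account123"    # account identifier
database = "ANALYTICS"
warehouse = "COMPUTE_WH"

# Percent-encode credentials so reserved characters survive URL parsing.
dsn = (
    f"snowflake://{quote(username, safe='')}:{quote(password, safe='')}"
    f"@{account}/{database}?warehouse={warehouse}"
)
print(dsn)
# snowflake://ATLAS_USER:p%40ss%2Fword@myorg-account123/ANALYTICS?warehouse=COMPUTE_WH
```

Passing `safe=''` to `quote` ensures even `/` in a password is encoded, which the default `safe='/'` would leave untouched.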
Connecting to a schema on the default catalog:
databricks://token@host:443/warehouse?schema=default
To get the connection details for host and warehouse, and to generate a token, refer to the Databricks documentation.
Connecting to a catalog (database) scope:
databricks://token@host:443/warehouse?catalog=catalog_name
Connecting to a specific schema in a specific catalog:
databricks://token@host:443/warehouse?catalog=catalog_name&schema=schema_name
Atlas can spin up an ephemeral local Docker container for you by specifying a special URL like the ones below. This can be useful
if you need a dev database for schema validation or diffing. However, some images, like mysql and
mariadb, take quite some time to boot before they are ready to be used. For a smoother development experience,
consider spinning up a longer-lived container yourself.
# PostgreSQL database scope (all schemas).
docker://postgres/15/test
# PostgreSQL specific schema scope.
docker://postgres/15/test?search_path=public
# MySQL server scope (all schemas).
docker://mysql/8
# MySQL specific schema scope.
docker://mysql/8/test
# MariaDB server scope (all schemas).
docker://maria/latest
# MariaDB specific schema scope.
docker://maria/latest/test
# Aurora DSQL (uses PostgreSQL, generates DSQL-compatible SQL).
docker://dsql/16
# Aurora DSQL with specific schema scope.
docker://dsql/16/postgres?search_path=public
# CockroachDB server scope (all schemas).
docker://crdb/v25.1.1
# CockroachDB specific schema scope.
docker://crdb/v25.1.1/dev
Step 2: Choose connection scope
Using the connection URL, specify the scope with which you would like Atlas to connect:
Schema scope
- Targets a single schema.
- Atlas only sees objects in that schema, and generated DDL omits schema qualifiers (e.g. `CREATE TABLE users` instead of `CREATE TABLE public.users`).
# PostgreSQL: search_path selects the schema
postgres://localhost:5432/database?search_path=public
# MySQL: the path segment is the schema
mysql://localhost:3306/app_schema
Database scope
- Targets the entire database or specific schemas within it.
- Atlas includes schema qualifiers in generated DDL (e.g. `CREATE TABLE public.users`):
# PostgreSQL: no search_path = all schemas
postgres://localhost:5432/database
# MySQL: trailing slash = all schemas
mysql://localhost:3306/
Which scope should you use?
| Use case | Scope |
|---|---|
| All tables live in one schema | Schema |
| Multiple schemas (e.g. auth, billing) | Database |
| Qualified identifiers (e.g. auth.users) | Database |
| Database-level objects: extensions, roles, event triggers, composite types | Database |
The URL examples in Step 1 above show both scope formats per database. For a detailed explanation, see Schema vs. Database scope.
Test your connection
Once you have determined your full connection URL, including its scope, you can test that Atlas is able to connect to the database using the following command:
atlas schema inspect --url "<url>" --format "OK"
Be sure to replace <url> with your database's URL.
If Atlas is able to connect to the given URL you should see the following output:
OK
Troubleshooting
Password contains special characters
Atlas uses standard RFC 3986 URLs. Characters like `@`, `$`, `+`, and `%` in the password must be percent-encoded. For example, `Pa$$w@rd` becomes `Pa%24%24w%40rd`.
Without encoding, you will see errors like:
Error: mysql: query system variables: dial tcp: lookup root:BnB+PjA:3309: no such host
Encode the password before putting it in the URL:
import urllib.parse
print(urllib.parse.quote("BnB+PjA")) # BnB%2BPjA
You can also use the urlescape function in atlas.hcl to handle this automatically:
locals {
db_pass = urlescape(getenv("DB_PASSWORD"))
}
env "local" {
url = "postgres://user:${local.db_pass}@localhost:5432/database"
}
Connection times out
Error: postgres: scanning system variables: dial tcp 10.0.5.243:5432: connect: operation timed out
Atlas cannot reach the database over the network. Verify:
- You are connected to the correct VPN
- Firewall rules allow connections from your machine
- The host and port in the URL are correct
- If the database is behind a bastion/jump server, set up an SSH tunnel first
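For the bastion case, a standard SSH local port forward is one way to bridge the gap. This is a sketch: `bastion.example.com`, `db.internal`, and the ports are placeholders, not values from this guide.

```shell
# Forward local port 5433 through the bastion to the database host.
ssh -N -L 5433:db.internal:5432 user@bastion.example.com &

# Then point Atlas at the forwarded local port.
atlas schema inspect --url "postgres://user:pass@localhost:5433/database?sslmode=disable" --format "OK"
```

`-N` tells ssh not to run a remote command, and `-L` binds the local port; once the tunnel is up, the database appears to Atlas as if it were running locally.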
SSL/TLS errors
For local or dev databases where SSL is not configured, disable it in the URL:
# PostgreSQL
postgres://localhost:5432/database?sslmode=disable
# MySQL
mysql://localhost:3306/schema?tls=skip-verify
For production databases, see the SSL/TLS configuration docs.
Authentication failed
Double-check that the username and password are correct and that the user has the necessary permissions. For database-specific requirements:
- PostgreSQL: The user needs the `CONNECT` privilege on the database and `USAGE` on the target schema.
- MySQL: The user needs the `SELECT` privilege on `information_schema` and the target schema.
- SQL Server: The user needs the `db_owner` role for database-scope connections.
- Oracle: See the required grants in the Oracle URL tab.
If you are evaluating Atlas for commercial use, reach out via your shared Slack Connect channel. Otherwise, use the Community Support channels and provide a minimal example to reproduce the issue.