How to Use Snowflake Create Database Command: Simplified

Jim Kutz
January 30, 2026


You’ve probably run into confusing “insufficient privilege” or “object already exists” errors the first time you created a database in Snowflake. Its permission model and retention behavior differ from PostgreSQL or MySQL, so familiar commands don’t always behave as expected.

The CREATE DATABASE command does more than define a container for schemas and tables. It lets you control retention windows, choose transient storage, enable data sharing, and create zero-copy clones, all baked into Snowflake’s cloud-native design.

Teams migrating from on-prem systems often miss these details and pay for it later through higher storage costs or limited recovery options. Understanding this command means understanding how Snowflake handles durability, scalability, and cost, not just SQL syntax.

TL;DR: Snowflake Create Database Command at a Glance

  • CREATE DATABASE sets up a Snowflake database instantly without pre-allocating storage or compute
  • You can define retention windows, transient storage, and zero-copy clones directly in the statement
  • Transient databases reduce costs but skip Fail-safe and have limited recovery options
  • The command reflects Snowflake’s cloud-native design, not traditional on-prem database behavior

What Is the Snowflake CREATE DATABASE Command?

The CREATE DATABASE command creates a new container for schemas, tables, and other database objects. Unlike traditional SQL databases that pre-allocate disk space, the command returns instantly because Snowflake decouples storage and compute in its multi-cluster shared-data architecture. You can configure retention policies, enable zero-copy cloning, or create read-only shares directly in the statement.

The syntax stays familiar (CREATE DATABASE <name> [options];), but the options extend beyond what traditional systems offer. You can adjust Time Travel windows, declare a database transient to reduce costs, or clone production for QA without copying data. Databases like PostgreSQL or MySQL simply allocate storage and leave backup strategies to separate tooling.

Because the platform doesn't pre-allocate disk, you can script database creation in CI pipelines without capacity planning or downtime concerns.

1. Basic Syntax Structure

The minimal statement requires only a name:

CREATE DATABASE my_production_db;

The database name must be a unique identifier up to 255 characters, using letters, numbers, or underscores. Unquoted identifiers are converted to uppercase automatically, so my_production_db becomes MY_PRODUCTION_DB. Use double quotes when you need mixed case or spaces:

CREATE DATABASE "MyMixedCaseDb";

All parameters after the name (retention settings, transient flags, cloning clauses) are optional. This keeps the command simple for quick starts while supporting advanced governance requirements.
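
Several of these optional clauses can be combined in one statement. A minimal sketch; the database name and values are illustrative:

-- Retention beyond 1 day requires Enterprise Edition
CREATE DATABASE analytics_db
  DATA_RETENTION_TIME_IN_DAYS = 7
  COMMENT = 'Analytics workloads with a 7-day Time Travel window';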

2. Database Types in Snowflake

Databases are permanent by default, providing up to 90 days of Time Travel plus a 7-day Fail-safe period. This suits data you can't afford to lose. For workloads where durability matters less than cost, declare the database transient. Transient databases skip Fail-safe and cap Time Travel at one day.

CREATE TRANSIENT DATABASE staging_db;

Use permanent databases for production finance records or user telemetry. Choose transient for staging tables, ETL scratch space, or dbt test models. Once the retention window closes on a transient database, the data disappears permanently with no recovery option.
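
To confirm which type you created, SHOW DATABASES reports transient databases in its options column. A quick check, assuming the staging_db name from above:

-- The options column shows TRANSIENT for transient databases
SHOW DATABASES LIKE 'staging_db';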

What Parameters Can You Configure When Creating a Snowflake Database?

You can configure data retention, compliance windows, cost-saving transient storage, and zero-copy cloning at creation time. The statement accepts options that don't exist in MySQL or PostgreSQL.

Each parameter is metadata-only, so you can apply them without downtime and adjust later with ALTER DATABASE if requirements or budgets change.
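
For example, tightening retention after creation is a one-line metadata change (the value here is illustrative):

ALTER DATABASE my_production_db SET DATA_RETENTION_TIME_IN_DAYS = 1;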

1. DATA_RETENTION_TIME_IN_DAYS

This setting defines your Time Travel window, the period during which you can query or restore historical data snapshots. Standard Edition allows 0-1 day; Enterprise unlocks 0-90 days.

Longer windows raise storage costs proportionally, so pick the smallest interval that satisfies audit or rollback needs.

CREATE DATABASE production_db  
DATA_RETENTION_TIME_IN_DAYS = 7;

If you supply a negative number or exceed the edition limit, the system throws a compilation error.

2. MAX_DATA_EXTENSION_TIME_IN_DAYS

MAX_DATA_EXTENSION_TIME_IN_DAYS sets how far Snowflake may temporarily extend a table's retention period to keep unconsumed streams from becoming stale. It doesn't widen the primary Time Travel window you query against; it only bounds that automatic extension.

CREATE DATABASE compliance_db  
DATA_RETENTION_TIME_IN_DAYS = 90  
MAX_DATA_EXTENSION_TIME_IN_DAYS = 14;

Use it sparingly, as the extension carries the same per-terabyte cost as normal storage.

3. TRANSIENT Option

Marking a database TRANSIENT removes the seven-day Fail-safe copy and lowers long-term storage fees by roughly 35 percent. That trade-off works well for staging areas, test environments, or ETL scratch space where data can be regenerated.

CREATE TRANSIENT DATABASE dev_environment  
DATA_RETENTION_TIME_IN_DAYS = 1;

After the retention period expires, the data is gone for good.

4. CLONE Option

Zero-copy cloning creates an instant replica that initially consumes no extra storage because it shares underlying micro-partitions with the source.

CREATE DATABASE dev_db CLONE production_db;

You might clone production to spin up a development sandbox or capture a point-in-time snapshot for debugging. Storage only accrues when either database diverges, so you get production-grade data without the size-of-production bill.
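
Cloning also accepts Time Travel clauses, so you can capture the database as it looked at a past moment. A sketch; the database name and offset are illustrative:

-- Clone production as of one hour ago (offset is in seconds)
CREATE DATABASE debug_snapshot_db
  CLONE production_db AT (OFFSET => -3600);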

How Do You Create a Database From a Share in Snowflake?

Run the CREATE DATABASE … FROM SHARE command to pull external data into your account without moving files. This feature uses secure data sharing so you can query the provider's tables immediately.

Before you issue the command, confirm that the provider has granted a share to your account. Run SHOW SHARES; to see what's available, note the provider_account and share_name, and pick a name for your read-only database.

-- 1. Inspect available shares
SHOW SHARES;

-- 2. Create a database from the share
CREATE DATABASE shared_marketplace_data
  FROM SHARE provider_account.share_name;

Databases created this way are read-only, which means you can run SELECTs but not INSERTs or DDL, and shared databases can't be cloned. If you need to transform or enrich the data, copy it into a database you own, for example with CREATE TABLE ... AS SELECT.
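
If other roles in your account need to query the shared database, grant them IMPORTED PRIVILEGES on it; the role name below is a placeholder:

-- Allow an analyst role to query the read-only shared database
GRANT IMPORTED PRIVILEGES ON DATABASE shared_marketplace_data TO ROLE analyst_role;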

What Permissions Do You Need to Create a Snowflake Database?

You can't create a database until your current role holds the CREATE DATABASE privilege at the account level. By default, that power belongs to system roles like SYSADMIN and ACCOUNTADMIN, so most permission denied errors trace back to running the command under a role that lacks this grant.

1. Required Privileges

The platform treats database creation as an account-wide operation, so the executing role must own the CREATE DATABASE privilege on the account object. System roles that already include it are:

  • SYSADMIN
  • ACCOUNTADMIN

Any lower role (PUBLIC, ANALYST, or a home-grown application role) starts without this right. Falling back to ACCOUNTADMIN every time may feel convenient, but it sidesteps Snowflake's role-based access control and leaves an oversize audit footprint.

2. Granting Permissions to Custom Roles

When you need more granular control, grant the privilege to a dedicated role instead of elevating everyone to SYSADMIN. A minimal workflow looks like this:

-- Create a role for data platform administrators
CREATE ROLE data_admin_role;

-- Allow that role to create databases
GRANT CREATE DATABASE ON ACCOUNT TO ROLE data_admin_role;

-- Hand the role to an engineer
GRANT ROLE data_admin_role TO USER data_engineer;

Keeping the privilege in a single, purpose-built role upholds the least-privilege principle and fits neatly into a layered RBAC hierarchy of domain (dev/prod), functional (analyst/developer), and access roles.
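
A common follow-up, sketched below, is attaching the custom role to SYSADMIN so that databases it creates stay visible in the standard admin hierarchy (role names as assumed above):

-- Roll the custom role up under SYSADMIN
GRANT ROLE data_admin_role TO ROLE SYSADMIN;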

3. Common Permission Errors and Fixes

The most common failure message reads:

SQL execution error: Insufficient privileges to operate on database 'XYZ'

Follow these steps to confirm and resolve the issue:

1. Check your active role: SELECT CURRENT_ROLE();
2. Inspect its grants: SHOW GRANTS TO ROLE your_role; (substitute the role name returned in step 1)
3. If the CREATE DATABASE privilege is missing, switch roles: USE ROLE SYSADMIN;
4. Or, if you have ACCOUNTADMIN access, delegate the privilege instead of switching: GRANT CREATE DATABASE ON ACCOUNT TO ROLE your_role;

Running these checks before executing CREATE DATABASE saves you from rerunning failed scripts.

What Are Common Mistakes When Using CREATE DATABASE in Snowflake?

Most database creation errors stem from naming conflicts, invalid retention settings, or forgetting that shared databases are read-only. Case-sensitivity and strict role privileges differ from other SQL engines and catch teams off guard.

1. Database Name Already Exists

The system stops execution with SQL execution error: Object 'DATABASE_NAME' already exists when you reuse a name. In automated pipelines, make your statement idempotent:

CREATE DATABASE IF NOT EXISTS my_database;

The IF NOT EXISTS clause prevents failures in CI/CD workflows. Stick to snake_case naming to avoid hidden conflicts with spaces, hyphens, or reserved words.
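
Note that CREATE OR REPLACE DATABASE behaves differently: it drops any existing database of the same name and recreates it empty, discarding the current contents. Use it only when that's the intent:

-- Destructive: replaces an existing database with the same name
CREATE OR REPLACE DATABASE my_database;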

2. Invalid Retention Period Values

You'll hit Invalid value for DATA_RETENTION_TIME_IN_DAYS when exceeding edition limits: 0-1 day in Standard, up to 90 in Enterprise. A database-level value overrides the account default but can never exceed your edition's maximum, so verify your limits first:

SHOW PARAMETERS LIKE 'DATA_RETENTION_TIME_IN_DAYS' IN ACCOUNT;

CREATE DATABASE my_db DATA_RETENTION_TIME_IN_DAYS = 7;

After creation, confirm the setting with SHOW PARAMETERS LIKE 'DATA_RETENTION_TIME_IN_DAYS' IN DATABASE my_db; to ensure you have the retention window you need.

3. Trying to Modify a Shared Database

Databases created with FROM SHARE are read-only. You can't create schemas, insert data, or alter objects, and all attempts trigger privilege errors. Shared databases also can't be cloned, so when you need an editable copy, copy the data into a database you own (schema and table names below are placeholders):

CREATE DATABASE my_editable_copy;
CREATE TABLE my_editable_copy.public.customers AS
  SELECT * FROM shared_database.public.customers;

The copies are regular tables you own, so you can evolve their schemas independently while the original share continues receiving provider updates.

How Do You Manage Databases After Creation?

You'll use three commands: SHOW DATABASES to list objects, ALTER DATABASE to modify settings, and DROP DATABASE (or UNDROP DATABASE) to remove or recover databases. These statements control governance and costs in the storage-based pricing model.

1. Viewing Existing Databases

List every database your role can access:

SHOW DATABASES;

For large accounts, filter the output:

SHOW DATABASES LIKE 'prod%';

Need richer metadata? Query the information schema:

SELECT *
FROM INFORMATION_SCHEMA.DATABASES
WHERE DATABASE_NAME ILIKE '%production%';

Run these checks after creation to verify the database exists with the expected properties. They also catch the case-sensitivity surprises that trip up teams migrating from other SQL databases.

2. Modifying Database Properties

You can adjust retention, rename databases, or add comments without downtime:

ALTER DATABASE my_db SET DATA_RETENTION_TIME_IN_DAYS = 30;
ALTER DATABASE my_db RENAME TO new_database_name;
ALTER DATABASE my_db SET COMMENT = 'Production analytics database';

Adjusting retention directly impacts storage spend: lower values cut historical storage, higher values extend safety windows. Only roles with OWNERSHIP on the database can issue these commands.
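
If an ALTER fails, check who owns the database before retrying (the database name is a placeholder):

-- Lists privileges on the database, including the OWNERSHIP grant
SHOW GRANTS ON DATABASE my_db;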

3. Dropping Databases

Remove a database when it's no longer needed:

DROP DATABASE IF EXISTS my_database;

Time Travel lets you reverse the deletion:

UNDROP DATABASE my_database;

After the Time Travel retention window closes, permanent databases enter Snowflake's Fail-safe, which retains data for an additional 7 days for recovery from critical failures. Pair drops with resource monitors to shut down costly warehouses before deletion.
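
To see which dropped databases are still inside their Time Travel window and eligible for UNDROP, list the history:

-- Includes dropped databases that can still be recovered with UNDROP
SHOW DATABASES HISTORY;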

How Does Airbyte Connect to Snowflake Databases?

Once you've created your database, the Airbyte Snowflake connector handles the data pipeline setup so you can focus on analytics instead of maintaining sync scripts. Snowflake can be configured as either a source or a destination in your pipelines, though the connector is used primarily for the Load stage rather than every step of ELT.

As a source, Airbyte reads through JDBC and lets you select tables in the UI. When acting as the destination, the connector uses internal staging and bulk loading. You supply the account identifier, warehouse, database, schema, and authentication credentials; Airbyte handles the rest.

With 600+ pre-built connectors, you can route data from virtually any SaaS tool, file system, or event stream into your database, or out of it, without writing custom code.

Ready to start loading data into your Snowflake database? Try Airbyte and connect your first data source in minutes with pre-built connectors.

Need help with enterprise-scale data pipelines or custom integrations? Talk to sales to explore solutions tailored to your organization's needs.

Frequently Asked Questions

Can you change a database from permanent to transient after creation?

No. Snowflake doesn’t allow converting a permanent database to transient or vice versa. If you need to change durability or cost characteristics, you must create a new database with the desired type and migrate or clone the data into it.

Does CREATE DATABASE consume storage immediately?

No. Creating a database is a metadata-only operation. Snowflake allocates storage only when you load data. Even zero-copy clones initially consume no additional storage until data changes in either the source or the clone.

What happens if account-level retention settings conflict with database settings?

The database-level value takes precedence over the account-level default, so setting DATA_RETENTION_TIME_IN_DAYS at creation overrides whatever the account specifies. The real ceiling is your Snowflake edition: 1 day on Standard and up to 90 days on Enterprise, and exceeding it produces a compilation error rather than being silently accepted.

Is CREATE DATABASE safe to run in CI/CD pipelines?

Yes. The command executes instantly and doesn’t reserve resources. Use IF NOT EXISTS to make it idempotent and avoid failures during repeated deployments across environments like dev, staging, and production.
