How to List All Schemas in the PostgreSQL Database: An Ultimate Guide

July 21, 2025
20 Mins Read


Data professionals managing PostgreSQL environments face a challenge that compounds daily: schema drift is implicated in 78% of PostgreSQL outages, with undocumented alterations causing application failures when code expects columns that are absent in production. The pain intensifies when teams need to quickly identify schema structures to debug performance issues, validate access permissions, or coordinate data migrations across development environments.

PostgreSQL, an object-relational database management system, supports SQL for querying relational data and JSON for non-relational data. This flexibility helps you handle complex data types for applications ranging from web apps to enterprise systems.

Effective schema management is essential to fully utilize PostgreSQL's capabilities in application development. With proper schema management, you can ensure data integrity and enhance scalability in complex systems.

Efficient database administration requires a clear overview of all database schemas, whether for troubleshooting, performance optimization, or access auditing. This article explains the different ways to list schemas in a PostgreSQL database, covering both fundamental techniques and advanced management approaches that prevent costly production incidents.

What Is a PostgreSQL Schema?

PostgreSQL schemas are logical containers that allow you to organize database objects like tables, views, data types, and operators. Objects such as views, materialized views, tables, sequences, and foreign tables must have unique names within a schema due to a shared namespace.

While objects of the same type cannot share names inside a single schema, identical object names can exist across different schemas. For instance, both schema_info and my_schema can contain a table named my_table without conflict.

Each database has a public schema by default. You can also create your own schemas with specific ownership and privileges using the CREATE SCHEMA command. If you no longer need a schema, remove it with DROP SCHEMA; if the schema contains objects, add the CASCADE option.
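For example, a minimal sketch using illustrative names (sales, reporting, and the analyst role are placeholders):

-- Owned by the current role by default
CREATE SCHEMA sales;

-- Owned by an existing role
CREATE SCHEMA reporting AUTHORIZATION analyst;

-- Fails if the schema still contains objects
DROP SCHEMA reporting;

-- Also drops every object inside the schema
DROP SCHEMA sales CASCADE;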

To create or access objects within a schema, use a qualified name:

schema.object

Example:

my_schema.my_table

To create my_table inside my_schema:

CREATE TABLE my_schema.my_table (
    id   INT,
    name TEXT
);

Because writing fully-qualified names is tedious, PostgreSQL lets you use unqualified names (e.g., just the table name). A search path tells PostgreSQL which schema(s) to consult when you reference an object without the schema prefix.
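You can inspect and change the search path per session. For example, assuming the my_schema schema from above:

SHOW search_path;
-- typically returns: "$user", public

SET search_path TO my_schema, public;
-- an unqualified my_table now resolves to my_schema.my_table first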

What Are the Different Methods to List All Schemas in a PostgreSQL Database?

There are several ways to list schemas, each offering different details and filtering options.
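If you are working inside the psql client, the quickest option is the \dn meta-command, which lists user-created schemas and their owners (\dn+ also shows access privileges and descriptions). The SQL-based methods below work from any client or driver:

\dn
\dn+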

Using information_schema

information_schema is an ANSI-SQL standard set of read-only views providing metadata about the current database.

SELECT schema_name
FROM information_schema.schemata;

Output of information_schema query
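This query also returns PostgreSQL's own system schemas. To restrict the output to user-created schemas, a filter along these lines is common:

SELECT schema_name
FROM information_schema.schemata
WHERE schema_name NOT IN ('information_schema', 'pg_catalog')
  AND schema_name NOT LIKE 'pg_toast%'
  AND schema_name NOT LIKE 'pg_temp%';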


Using pg_catalog

pg_catalog is the built-in schema that contains PostgreSQL's system catalogs. It is implicitly searched before any schemas listed in search_path unless you explicitly position it elsewhere.

SELECT nspname AS schema_name
FROM pg_catalog.pg_namespace;

Output of pg_catalog query
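Because pg_namespace stores each schema's owner as an OID, joining pg_roles lets you list schemas together with their owners:

SELECT n.nspname AS schema_name,
       r.rolname AS schema_owner
FROM pg_catalog.pg_namespace n
JOIN pg_catalog.pg_roles r ON r.oid = n.nspowner
ORDER BY n.nspname;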


How Do You List Schemas With Privileges?

PostgreSQL's information_schema has no schema_privileges view; schema-level grants live in the nspacl column of pg_catalog.pg_namespace, and the aclexplode function expands them into one row per grantee and privilege:

SELECT
    n.nspname AS schema_name,
    CASE WHEN acl.grantee = 0 THEN 'PUBLIC'
         ELSE pg_catalog.pg_get_userbyid(acl.grantee)
    END AS grantee,
    acl.privilege_type
FROM
    pg_catalog.pg_namespace n
LEFT JOIN LATERAL
    aclexplode(n.nspacl) AS acl ON true
ORDER BY
    n.nspname,
    grantee;

The LEFT JOIN ensures schemas without explicitly granted privileges (where nspacl is NULL) still appear, with NULL grantee and privilege_type.
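If you only need to know what the current role can do in each schema, the built-in has_schema_privilege function offers a more direct check:

SELECT nspname AS schema_name,
       has_schema_privilege(current_user, nspname, 'USAGE')  AS has_usage,
       has_schema_privilege(current_user, nspname, 'CREATE') AS has_create
FROM pg_catalog.pg_namespace
WHERE nspname NOT LIKE 'pg_%'
ORDER BY nspname;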

How Do You List Schemas With Their Sizes?

SELECT
    nspname AS schema_name,
    pg_size_pretty(
        SUM(pg_total_relation_size(pg_class.oid))
    ) AS size
FROM pg_catalog.pg_namespace
JOIN pg_catalog.pg_class
     ON pg_class.relnamespace = pg_namespace.oid
GROUP BY nspname;

pg_total_relation_size returns the disk space used by a relation, including its indexes and TOAST data, and pg_size_pretty formats the result in human-readable units (kB, MB, GB). Note that the inner JOIN hides schemas containing no relations, and because every index also has its own pg_class row, index storage is counted twice in the per-schema totals.
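A refinement that keeps empty schemas, avoids the double counting, and sorts by raw size might look like this:

SELECT n.nspname AS schema_name,
       pg_size_pretty(COALESCE(SUM(pg_total_relation_size(c.oid)), 0)) AS size
FROM pg_catalog.pg_namespace n
LEFT JOIN pg_catalog.pg_class c
       ON c.relnamespace = n.oid
      AND c.relkind IN ('r', 'm')   -- ordinary tables and materialized views;
                                    -- their indexes and TOAST are already included
GROUP BY n.nspname
ORDER BY COALESCE(SUM(pg_total_relation_size(c.oid)), 0) DESC;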

How Can Automated Drift Detection Prevent Schema Inconsistencies?

Schema drift detection has emerged as a critical capability for maintaining database consistency across environments. Modern PostgreSQL administrators leverage automated tools that continuously monitor schema states and identify unauthorized changes before they impact production operations.

Conceptual Framework of Schema Drift Detection

Schema drift occurs when database structures unintentionally deviate from version-controlled definitions due to ad-hoc changes. This pervasive risk in collaborative environments can lead to application failures when code expects columns that are absent in production. Automated drift solutions embed reconciliation into DevOps pipelines by comparing committed schema definitions against actual database structures using checksum validation.

These systems generate differential reports highlighting added or dropped columns, modified constraints, or missing indexes. Advanced implementations enforce policy-based resolutions that can block deployments or auto-generate corrective DDL statements. The detection process works by querying system catalogs and comparing metadata fingerprints against stored baselines.
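As a minimal illustration of that idea (the public schema here is just an example), a fingerprint of a schema's column metadata can be computed from the standard views and compared against a stored baseline on a schedule:

SELECT md5(
           string_agg(table_name || '.' || column_name || ':' || data_type,
                      ',' ORDER BY table_name, ordinal_position)
       ) AS schema_fingerprint
FROM information_schema.columns
WHERE table_schema = 'public';

Recording this hash after each approved migration turns any later mismatch into an immediate drift alert.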

Implementation Strategies for Drift Detection

Flyway Community Integration provides embedded validation that executes drift checks before applying pending migrations. The tool performs checksum comparisons against migration history tables and produces object-level discrepancy reports when inconsistencies are detected. If drift is identified, deployments halt automatically, allowing administrators to rectify problems before proceeding.

StreamSets Synchronization Engine employs declarative pipelines that scan incoming record structures, detect missing tables or columns relative to configured schemas, and autogenerate DDL statements. This approach executes DDL transactionally before data reaches JDBC producers, dynamically aligning schemas with evolving data models.

Practical Benefits and Industry Applications

Organizations implementing drift detection report significant reductions in deployment failures and faster resolution of schema-related incidents. Financial services companies use these tools for SOX compliance audits, while e-commerce platforms synchronize product attribute expansions during live operations. Edge computing environments particularly benefit from auto-adding columns for new sensor metrics in distributed systems.

What Are Advanced Online Schema Rebuilding Techniques?

Online schema rebuilding enables PostgreSQL administrators to perform major structural changes without service interruption. These techniques have become essential for high-availability systems that cannot tolerate traditional maintenance windows.

Understanding pg_repack Architecture

The pg_repack extension rebuilds tables online by using trigger-based replication instead of PostgreSQL's locking VACUUM FULL operation. The process involves creating a log table that captures concurrent changes via row-level triggers, building a shadow copy of the table while those changes are replayed, and finally performing an atomic swap that holds an exclusive lock only briefly.

This approach resembles trigger-based online schema change tools in other database ecosystems, but it relies on PostgreSQL's transactional triggers to keep the copy consistent. Benchmark studies show rebuilding processes for terabyte-scale tables maintain write latency under 100 milliseconds during active operations.
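As a brief sketch (database and table names are illustrative), pg_repack must be installed as an extension in the target database before its command-line client can rebuild tables:

CREATE EXTENSION pg_repack;
-- then, from the shell:
--   pg_repack -d mydb --table my_schema.my_table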

Deployment Workflows for Zero-Downtime Operations

Transactional Safety Measures ensure all schema changes encapsulate within transactions using standard DDL commands. Operations like ALTER TABLE users ADD COLUMN mfa_secret TEXT complete with instant metadata updates while applications remain online. Unlike other database systems with non-transactional DDL, PostgreSQL allows complete rollbacks of incomplete changes without data loss.
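A minimal sketch of this safety net, using the example column above:

BEGIN;
ALTER TABLE users ADD COLUMN mfa_secret TEXT;
-- run validation queries here, then either:
COMMIT;        -- keep the change
-- or ROLLBACK; to discard it with no partial state left behind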

Parallel Processing Capabilities distribute rebuild work across multiple CPU cores using commands like pg_repack -j 8 -t shipments. This parallelization reduces rebuild time for large tables from hours to minutes while maintaining system responsiveness.

Enterprise Implementation Patterns

SaaS platforms use online rebuilding to reclaim storage bloat without service interruptions, particularly valuable for multi-tenant architectures where individual tenant maintenance cannot affect others. Legacy migration projects leverage these techniques for data type conversions, such as INT to BIGINT transitions, while continuing to serve production queries. The methodology proves especially effective in environments requiring continuous availability where traditional maintenance windows are not acceptable.

What Are the Advanced Techniques for Zero-Downtime Schema Migrations?

Traditional ALTER TABLE operations frequently require exclusive locks that block read/write access during schema changes, creating unacceptable downtime windows for applications requiring continuous availability. Modern PostgreSQL environments demand migration strategies that maintain service availability while implementing schema modifications.

View-Based Schema Versioning

View-based migration systems enable simultaneous access to both pre-migration and post-migration schemas through abstraction layers. Tools like Reshape implement a three-phase protocol that eliminates locking through transactional view redirection:

  1. Migration Initialization – Creates shadow tables with new schema structures while establishing real-time synchronization triggers between original and modified schemas.
  2. Gradual Application Rollout – Enables updated applications to reference new schema versions while legacy applications continue using original views through bidirectional data consistency triggers.
  3. Finalization – Atomically switches default view references to the new schema after validating data integrity, then removes legacy structures.

This approach allows column renames and type changes with minimal latency impact while maintaining full write throughput during migration processes.
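A hand-written sketch of the same pattern, with illustrative schema and column names (Reshape automates and generalizes these steps):

-- Expose the renamed column through a versioned view
CREATE SCHEMA app_v2;
CREATE VIEW app_v2.users AS
    SELECT id, email AS email_address
    FROM public.users;

-- Updated application instances point at the new version
SET search_path TO app_v2, public;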

Concurrent Index Operations

PostgreSQL's CONCURRENTLY options for index operations provide non-blocking alternatives to standard DDL commands. When creating or dropping indexes, concurrent operations prevent table locks that would otherwise halt application access:

CREATE INDEX CONCURRENTLY idx_schema_name ON my_table (column_name);
DROP INDEX CONCURRENTLY idx_old_schema;

These operations take longer to complete but maintain application availability throughout the migration process. Note that the CONCURRENTLY variants cannot run inside a transaction block, and a failed concurrent build leaves behind an invalid index that must be dropped and recreated. For high-volume environments, concurrent index management becomes essential for maintaining SLA commitments during schema evolution.

Logical Replication for Schema Changes

Logical replication enables schema modifications by maintaining synchronized replicas with different structures. This technique proves particularly valuable for major schema restructuring where traditional migration approaches would require extended downtime periods.
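A minimal sketch of the building blocks, with placeholder connection details (the source server must run with wal_level = logical):

-- On the source database
CREATE PUBLICATION schema_migration_pub FOR TABLE public.users;

-- On the restructured target database
CREATE SUBSCRIPTION schema_migration_sub
    CONNECTION 'host=source-host dbname=app user=replicator'
    PUBLICATION schema_migration_pub;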

How Do Modern Declarative Schema Management Tools Enhance PostgreSQL Administration?

Traditional migration approaches rely on imperative change scripts that create cumulative technical debt through sequential modifications. Modern declarative tools address these limitations by managing schema state rather than change sequences, reducing operational complexity and improving team collaboration.

State-Based Schema Management

Tools like Atlas introduce differential migration through declarative schema definitions. Rather than maintaining sequential migration scripts, you define desired schema states in configuration files:

schema "production" {
  table "users" {
    column "id" {
      type        = bigint
      primary_key = true
    }
    column "email" {
      type   = varchar(255)
      unique = true
    }
  }
}

The engine automatically generates optimized migration paths by comparing actual database states with desired configurations, prioritizing non-blocking operations and data preservation strategies.

GitOps-Inspired Schema Workflows

Modern schema management platforms like Bytebase implement review workflows that mirror software development practices. These systems provide:

  • SQL linting with compliance rule enforcement
  • Automated drift detection through schema snapshot comparisons
  • Role-based access control integration with approval workflows
  • Real-time collaboration features for distributed teams

Drift Detection and Reconciliation

Declarative tools excel at identifying schema drift through unauthorized changes that occur outside managed migration processes. These systems continuously monitor schema states and alert administrators to discrepancies between expected and actual configurations.

Advanced reconciliation algorithms calculate minimal operation sequences needed to restore desired states while preserving data integrity. This capability proves essential in environments where multiple teams might make ad-hoc schema modifications that compromise consistency.

How Can You List All Schemas Using Python (psycopg2)?

  1. Install the driver (the pre-built psycopg2-binary package is an alternative if you do not want to compile it):
   pip install psycopg2
  2. Sample script:
   import psycopg2

   conn = psycopg2.connect(
       dbname="postgres_database_name",
       user="postgresDB_username",
       password="postgresDB_password",
       host="host_address",
       port="port_number"
   )
   cur = conn.cursor()

   cur.execute("""
       SELECT schema_name
       FROM information_schema.schemata
       WHERE schema_name NOT IN ('information_schema', 'pg_catalog')
         AND schema_name NOT LIKE 'pg_toast%'
         AND schema_name NOT LIKE 'pg_temp%';
   """)

   schemas = cur.fetchall()
   for schema in schemas:
       print(schema[0])

   cur.close()
   conn.close()

Why Is Listing Schemas Useful?

  • Understanding Database Structure – Provides a high-level view for easier maintenance and scalability.
  • Identifying Available Schemas – Quickly see which schemas exist in multi-schema environments.
  • Access Control & Permissions – Verify or adjust who can access what.
  • Performance Monitoring – Spot storage inefficiencies and optimize queries.
  • Full-Text Search Index Availability – Ensure indexes (e.g., GIN, GiST) live in the correct schema for efficient querying.

How Does Airbyte Help With PostgreSQL Schema Management?

Airbyte is a data-movement platform with 600+ pre-built connectors, including PostgreSQL. After configuring a pipeline, Airbyte can automatically check for schema changes (every 15 minutes in Cloud, every 24 hours self-hosted) to keep integrations in sync.

Airbyte's open-source foundation provides flexibility for custom schema management workflows while maintaining enterprise-grade security and governance capabilities. The platform supports deployment across cloud, hybrid, and on-premises environments, ensuring your PostgreSQL schema management aligns with your infrastructure requirements.

For organizations managing multiple PostgreSQL instances, Airbyte's automated schema detection capabilities reduce the manual overhead of tracking schema changes across distributed environments. If you need tailored guidance, you can connect with experts.


Conclusion

You now know several ways to list schemas in PostgreSQL, from psql meta-commands and information_schema views to pg_catalog queries, as well as how to retrieve additional details such as privileges and sizes, and even how to automate the task with Python. Modern approaches like automated drift detection, online schema rebuilding, zero-downtime migrations, and declarative schema management provide additional capabilities for complex production environments. Understanding and monitoring schemas ensures better organization, security, and performance of your PostgreSQL databases.

