Amazon DynamoDB Pricing: On-Demand & Provisioned Plans

July 21, 2025
8 min

Data professionals managing Amazon DynamoDB workloads recently witnessed the most significant pricing transformation in the service's history. In November 2024, AWS slashed on-demand throughput costs by 50% and reduced Global Tables replication expenses by up to 67%, fundamentally altering the cost-benefit equation for NoSQL deployments. These changes, combined with new features like warm throughput management and configurable maximum capacity, create unprecedented opportunities for organizations to optimize their database expenses while scaling performance. Understanding these evolving pricing structures becomes critical as enterprises migrate from legacy systems and seek to balance cost efficiency with operational flexibility.

What Are the Core DynamoDB Pricing Models?

DynamoDB provides two primary capacity modes, each optimized for different workload patterns and cost management strategies:

On-Demand Capacity Mode

On-Demand Capacity Mode eliminates capacity planning by automatically scaling throughput based on application demand. Following the November 2024 price reductions, this serverless option became significantly more cost-effective, making it the preferred choice for most variable workloads. This mode excels for:

  • Applications with unpredictable traffic patterns
  • New applications where usage patterns remain unknown
  • Development and testing environments requiring cost flexibility
  • Workloads experiencing seasonal or event-driven spikes

Current Pricing Structure (US East)

After the 50% reduction implemented in late 2024:

  • Write Request Units (WRU): $0.625 per million write request units
  • Read Request Units (RRU): $0.125 per million read request units

Real-World DynamoDB Cost Example

Consider a data analytics application processing user interaction events with varying workload patterns:

Application characteristics

  • Daily base traffic: 5,000 reads and 5,000 writes
  • Weekly batch processing: 100,000 reads every Sunday
  • Monthly data aggregation: 500,000 writes on the first of each month
  • Average item size: 1 KB for writes, 2 KB for reads

Monthly usage analysis

| Operation | Calculation | Total |
| --- | --- | --- |
| Regular daily reads | 5,000 × 30 | 150,000 |
| Regular daily writes | 5,000 × 30 | 150,000 |
| Weekly batch reads | 100,000 × 4 | 400,000 |
| Monthly aggregation writes | 500,000 | 500,000 |
  • Total Reads: 550,000
  • Total Writes: 650,000

Cost calculation

  • Writes: 0.65 million WRUs × $0.625 per million = $0.41
  • Reads: 0.55 million RRUs × $0.125 per million = $0.07
  • Storage: 15 GB, within the 25 GB free tier → $0

Total monthly cost: $0.48

This example demonstrates how the revised On-Demand pricing suits applications with varying workload patterns while enabling engineers to focus on data processing logic rather than capacity forecasting.
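For readers who want to reproduce the arithmetic, here is a minimal Python sketch of the on-demand cost formula using the US East rates quoted above. It assumes, as the example does, that each read fits in one RRU and each write in one WRU; the function name is illustrative:

```python
# On-demand pricing, US East, after the November 2024 reduction
WRITE_PRICE_PER_MILLION = 0.625  # $ per million write request units
READ_PRICE_PER_MILLION = 0.125   # $ per million read request units

def on_demand_monthly_cost(reads: int, writes: int) -> float:
    """Request-based cost only; storage under 25 GB is free."""
    return (writes / 1_000_000) * WRITE_PRICE_PER_MILLION + \
           (reads / 1_000_000) * READ_PRICE_PER_MILLION

# Totals from the example above: 550,000 reads and 650,000 writes
print(on_demand_monthly_cost(reads=550_000, writes=650_000))  # ≈ $0.48
```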

Provisioned Capacity Mode

Provisioned Capacity Mode provides granular control over database throughput with predictable cost structures. This mode remains optimal for applications with consistent, forecastable workloads where capacity planning delivers cost advantages over on-demand pricing.

Pricing Structure (US East)

  • Write Capacity Unit (WCU): $0.00065 per WCU-hour
  • Read Capacity Unit (RCU): $0.00013 per RCU-hour

Provisioned Capacity Example

Application configuration

  • Region: US East (Ohio)
  • Table class: DynamoDB Standard
  • Auto-scaling enabled (70% target utilization)
  • Base capacity: 200 WCUs and 200 RCUs
  • Item size: 1 KB writes, 2 KB reads

Usage pattern

| Period | WCUs used | RCUs used |
| --- | --- | --- |
| 6 AM–9 AM (ETL) | 180 | 150 |
| 9 AM–6 PM | 100 | 120 |
| 6 PM–10 PM (reports) | 80 | 190 |
| 10 PM–6 AM | 40 | 40 |

Cost calculation

  • Base capacity per hour: WCUs 200 × $0.00065 = $0.13; RCUs 200 × $0.00013 = $0.026; hourly total $0.156
  • Monthly capacity cost: 730 h × $0.156 = $113.88
  • Storage: 50 GB, with the first 25 GB free → 25 GB × $0.25/GB = $6.25

Total monthly cost: $120.13
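The same calculation as a short Python sketch, using the provisioned rates above. It takes the example's base capacity at face value and ignores auto-scaling adjustments during the day:

```python
WCU_HOUR_PRICE = 0.00065    # $ per WCU-hour, US East
RCU_HOUR_PRICE = 0.00013    # $ per RCU-hour, US East
HOURS_PER_MONTH = 730
STORAGE_PRICE_PER_GB = 0.25
FREE_STORAGE_GB = 25

def provisioned_monthly_cost(wcus: int, rcus: int, storage_gb: float) -> float:
    capacity = (wcus * WCU_HOUR_PRICE + rcus * RCU_HOUR_PRICE) * HOURS_PER_MONTH
    storage = max(storage_gb - FREE_STORAGE_GB, 0) * STORAGE_PRICE_PER_GB
    return capacity + storage

print(provisioned_monthly_cost(wcus=200, rcus=200, storage_gb=50))  # ≈ $120.13
```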

How Do Read and Write Capacity Units Function?

Understanding DynamoDB's capacity unit system enables precise cost estimation and performance optimization:

  • Read Capacity Unit (RCU): Supports one strongly consistent read per second for items up to 4 KB, or two eventually consistent reads for the same size
  • Write Capacity Unit (WCU): Handles one write operation per second for items up to 1 KB

Larger items consume proportionally more capacity units. A 10 KB strongly consistent read requires 3 RCUs (rounded up from 2.5), while a 3 KB write operation needs 3 WCUs. This mathematical relationship directly impacts cost calculations and performance planning for data-intensive applications.
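The rounding rules translate directly into code; a minimal sketch:

```python
import math

def rcus_for_read(item_kb: float, strongly_consistent: bool = True) -> float:
    """One RCU covers a strongly consistent read of up to 4 KB per second;
    an eventually consistent read costs half as much."""
    units = math.ceil(item_kb / 4)
    return units if strongly_consistent else units / 2

def wcus_for_write(item_kb: float) -> int:
    """One WCU covers a write of up to 1 KB per second."""
    return math.ceil(item_kb)

print(rcus_for_read(10))   # 3 RCUs, rounded up from 2.5
print(wcus_for_write(3))   # 3 WCUs
```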

What Are the Most Effective DynamoDB Cost Optimization Strategies?

Auto-Scaling Configuration Best Practices

Enable auto-scaling to dynamically adjust capacity based on actual demand while maintaining cost efficiency. Configure realistic minimum and maximum thresholds to prevent over-provisioning during low-traffic periods while ensuring adequate capacity for peak loads.
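DynamoDB auto-scaling is configured through the Application Auto Scaling service. Here is a boto3 sketch targeting 70% utilization on write capacity; the table name, capacity bounds, and target value are illustrative:

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the table's write capacity as a scalable target
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/Orders",       # hypothetical table name
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=50,                  # floor for low-traffic periods
    MaxCapacity=500,                 # ceiling for peak loads
)

# Track 70% average utilization of provisioned writes
autoscaling.put_scaling_policy(
    PolicyName="orders-wcu-target-tracking",
    ServiceNamespace="dynamodb",
    ResourceId="table/Orders",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
        },
    },
)
```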

Choosing Between On-Demand and Provisioned Capacity

Following the 2024 price reductions, on-demand capacity became cost-competitive for most workloads with variable traffic patterns. Provisioned capacity remains advantageous for steady-state applications where consistent usage patterns enable accurate capacity forecasting and reserved capacity discounts.
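A rough back-of-the-envelope break-even check, using the US East prices quoted in this article and ignoring auto-scaling, reserved capacity, and free-tier effects:

```python
# $ per single write: on-demand vs. a fully utilized provisioned WCU
ON_DEMAND_PER_WRITE = 0.625 / 1_000_000    # $0.625 per million WRUs
PROVISIONED_PER_WRITE = 0.00065 / 3600     # one WCU serves 1 write/s for an hour

breakeven_utilization = PROVISIONED_PER_WRITE / ON_DEMAND_PER_WRITE
print(f"{breakeven_utilization:.0%}")  # ≈ 29%
# Above roughly 29% average utilization, provisioned writes come out cheaper;
# below it, on-demand usually wins.
```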

Data Modeling for Cost Efficiency

Design efficient schemas that minimize item sizes and optimize query patterns. Implement sparse indexes to reduce storage costs while maintaining query performance. Consider denormalization strategies that reduce the number of read operations required for common access patterns.

Caching Strategies

Implement DynamoDB Accelerator (DAX) or ElastiCache integration to offload frequent read operations from your primary tables. Effective caching strategies can reduce read costs by approximately 50% while improving application response times and reducing load on DynamoDB partitions.

How Can You Monitor and Control DynamoDB Costs Effectively?

CloudWatch Metrics for Cost Tracking

Monitor the ConsumedReadCapacityUnits and ConsumedWriteCapacityUnits metrics against their ProvisionedReadCapacityUnits and ProvisionedWriteCapacityUnits counterparts to identify optimization opportunities. Track throttling events (ReadThrottleEvents, WriteThrottleEvents) and capacity utilization patterns to inform scaling decisions and capacity mode selections.
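A boto3 sketch that pulls a week of consumed write capacity for comparison against what is provisioned; the table name is illustrative:

```python
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/DynamoDB",
    MetricName="ConsumedWriteCapacityUnits",
    Dimensions=[{"Name": "TableName", "Value": "Orders"}],  # hypothetical table
    StartTime=datetime.utcnow() - timedelta(days=7),
    EndTime=datetime.utcnow(),
    Period=3600,                # hourly buckets
    Statistics=["Sum"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    avg_wcus = point["Sum"] / 3600  # hourly Sum -> average WCUs per second
    print(point["Timestamp"], round(avg_wcus, 1))
```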

Setting Up Billing Alarms

Configure AWS billing alerts to receive notifications when spending exceeds predefined thresholds. Implement graduated alert levels to provide early warnings before costs become problematic.
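Billing metrics are published only in us-east-1 and require billing alerts to be enabled in the account's billing preferences. A sketch of one alert tier; the threshold and SNS topic ARN are placeholders:

```python
import boto3

# Billing metrics live in us-east-1 regardless of where your tables run
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="dynamodb-spend-over-200",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[
        {"Name": "Currency", "Value": "USD"},
        {"Name": "ServiceName", "Value": "AmazonDynamoDB"},
    ],
    Statistic="Maximum",
    Period=21600,               # billing data updates a few times per day
    EvaluationPeriods=1,
    Threshold=200.0,            # placeholder threshold in USD
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder ARN
)
```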

Cost Allocation Tags

Implement comprehensive tagging strategies to analyze costs by project, department, or application component. Consistent tagging enables detailed cost attribution and supports chargeback models for shared infrastructure.
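Tags can be applied to a table with a single API call; a minimal sketch in which the ARN and tag values are placeholders:

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.tag_resource(
    ResourceArn="arn:aws:dynamodb:us-east-1:123456789012:table/Orders",  # placeholder
    Tags=[
        {"Key": "project", "Value": "analytics"},
        {"Key": "cost-center", "Value": "data-platform"},
    ],
)

# Once activated in the Billing console, these tags become cost allocation
# dimensions, enabling per-project and per-team cost reports.
```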

Usage Patterns Analysis

Analyze historical usage data to identify peak periods and optimize capacity allocation. Use this analysis to evaluate reserved pricing options and determine optimal scaling policies for auto-scaling configurations.

Advanced Cost Optimization with Recent DynamoDB Features

Warm Throughput Management

The November 2024 release introduced warm throughput capabilities, providing real-time visibility into available burst capacity. This feature enables proactive scaling before anticipated traffic surges, preventing costly reactive adjustments during peak periods. Data engineers can monitor pre-warmed capacity through CloudWatch metrics and increase throughput ahead of planned events like product launches or marketing campaigns.

Warm throughput applies to both provisioned and on-demand tables, with pre-warming incurring nominal costs compared to reactive scaling penalties. For applications experiencing predictable traffic spikes, this feature prevents throttling while optimizing cost efficiency through planned capacity management.
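At the time of writing, warm throughput is set through the UpdateTable operation. A boto3 sketch for pre-warming ahead of a planned event; the table name and target values are illustrative:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Pre-warm the table before an anticipated spike (e.g., a product launch)
dynamodb.update_table(
    TableName="Orders",  # hypothetical table
    WarmThroughput={
        "ReadUnitsPerSecond": 15000,   # illustrative targets
        "WriteUnitsPerSecond": 5000,
    },
)
```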

Configurable Maximum Throughput

On-demand tables now support maximum throughput ceilings per table or secondary index, preventing unexpected cost overruns during sudden traffic surges. This capability enables organizations to set spending limits while maintaining performance stability, particularly valuable for applications susceptible to viral content or automated traffic spikes.

Configure maximum throughput based on budget constraints and acceptable performance degradation thresholds. When limits are reached, requests are throttled rather than consuming unlimited capacity, providing cost predictability for variable workloads.
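A sketch of capping an on-demand table via the OnDemandThroughput setting; the limits are illustrative and would in practice be derived from your budget:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Cap on-demand consumption; requests beyond these ceilings are throttled
dynamodb.update_table(
    TableName="Orders",  # hypothetical table
    OnDemandThroughput={
        "MaxReadRequestUnits": 10000,   # illustrative ceilings
        "MaxWriteRequestUnits": 2000,
    },
)
```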

Global Tables Cost Optimization

Global Tables pricing saw dramatic reductions in 2024, with replicated write costs dropping 67% for on-demand and 33% for provisioned capacity. These changes make multi-region deployments significantly more cost-effective for applications requiring global distribution and disaster recovery capabilities.

Implement Global Tables strategically by placing tables in regions closest to user populations, reducing cross-region access costs while improving latency. The new pricing structure supports active-active architectures without prohibitive replication expenses.

What Are Common DynamoDB Pitfalls and Their Solutions?

Hot Partition Problems

Uneven key distribution creates hot partitions that experience throttling despite adequate overall table capacity. Implement composite keys that distribute traffic evenly across partitions, use randomization techniques for frequently accessed items, or redesign access patterns to avoid concentrated traffic on specific partition key values.
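One common randomization technique is write sharding: append a bounded random suffix to a hot partition key so writes fan out across partitions, then query all suffixes and merge on read. A minimal sketch in which the shard count and key scheme are illustrative:

```python
import random

NUM_SHARDS = 10  # tune to the degree of write concentration

def sharded_key(logical_key: str) -> str:
    """Spread writes for one logical key across NUM_SHARDS partition keys."""
    return f"{logical_key}#{random.randrange(NUM_SHARDS)}"

def all_shard_keys(logical_key: str) -> list[str]:
    """Readers query every shard key and merge results client-side."""
    return [f"{logical_key}#{i}" for i in range(NUM_SHARDS)]

print(sharded_key("trending-item-42"))    # e.g. trending-item-42#7
print(all_shard_keys("trending-item-42"))
```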

Over-Provisioning Scenarios

Excessive capacity allocation inflates costs without corresponding performance benefits. Review usage patterns regularly using CloudWatch metrics, enable auto-scaling with appropriate target utilization percentages, and consider switching to on-demand capacity for workloads with unpredictable traffic patterns.

Data Transfer Cost Traps

Cross-region data transfer fees can accumulate unexpectedly in Global Tables deployments or when applications access DynamoDB from distant regions. Optimize region placement to minimize transfer costs and implement local read patterns where eventual consistency is acceptable.

Backup and Restore Expenses

Continuous backups and point-in-time recovery features include storage costs that can become significant for large datasets. Optimize backup retention policies based on compliance requirements and business recovery objectives, considering the balance between recovery capabilities and storage expenses.

Integration with Open-Source Data Platforms for Cost Optimization

Leveraging Airbyte for Efficient DynamoDB Integration

Open-source data integration platforms like Airbyte provide sophisticated mechanisms for optimizing DynamoDB costs while enabling comprehensive data workflows. These integrations address traditional challenges of expensive ETL operations and proprietary vendor lock-in that constrain cost optimization efforts.

Airbyte's DynamoDB connector implements incremental synchronization using DynamoDB Streams for Change Data Capture, reducing read capacity consumption by 60-80% compared to full table scans. This approach particularly benefits organizations with large datasets where traditional batch processing would consume excessive RCUs during data pipeline operations.

The platform's attribute-level filtering capabilities enable selective column syncing, minimizing unnecessary data transfer and reducing both read operations and network costs. For multi-cloud deployments, Airbyte's deployment flexibility allows processing closer to data sources, reducing cross-region transfer expenses while maintaining integration capabilities.

Strategic Integration Patterns

Implement CDC-based patterns using DynamoDB Streams to capture item-level changes within 24-hour retention windows. This approach enables real-time data processing without expensive periodic scans, particularly valuable for analytics workflows that require current data without complete dataset synchronization.
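Under the hood, a Streams consumer walks shards with iterators. A simplified boto3 sketch, assuming streams are already enabled on the (hypothetical) table; a production consumer would also follow child shards and checkpoint its position:

```python
import boto3

dynamodb = boto3.client("dynamodb")
streams = boto3.client("dynamodbstreams")

# Streams must already be enabled on the table
stream_arn = dynamodb.describe_table(TableName="Orders")["Table"]["LatestStreamArn"]

description = streams.describe_stream(StreamArn=stream_arn)["StreamDescription"]
for shard in description["Shards"]:
    iterator = streams.get_shard_iterator(
        StreamArn=stream_arn,
        ShardId=shard["ShardId"],
        ShardIteratorType="TRIM_HORIZON",  # read from the start of the 24 h window
    )["ShardIterator"]
    for record in streams.get_records(ShardIterator=iterator, Limit=100)["Records"]:
        # eventName is INSERT, MODIFY, or REMOVE; only changed items appear
        print(record["eventName"], record["dynamodb"].get("Keys"))
```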

Configure batch optimization through compression and intelligent chunking to stay within DynamoDB's 400 KB item limit while maximizing throughput efficiency. These techniques reduce the number of API calls required for large data transfers, directly decreasing WCU consumption and associated costs.
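On the DynamoDB side, boto3's table resource offers batch_writer, which groups puts into BatchWriteItem calls and retries unprocessed items automatically. A short sketch with a hypothetical table and synthetic items:

```python
import boto3

table = boto3.resource("dynamodb").Table("Orders")  # hypothetical table

items = [{"pk": f"order#{i}", "status": "processed"} for i in range(1000)]

# batch_writer buffers puts into BatchWriteItem calls (up to 25 items each)
# and retries unprocessed items, reducing per-request overhead
with table.batch_writer() as batch:
    for item in items:
        batch.put_item(Item=item)
```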

Enterprise-Scale Cost Management

For enterprise deployments, implement multi-account replication strategies using IAM role chaining to avoid credential sharing while enabling cross-account data pipelines. This architectural approach supports cost allocation across business units while maintaining centralized optimization capabilities.

Leverage post-load transformation capabilities through dbt integration to optimize data structures for DynamoDB access patterns. This approach reduces query complexity and associated RCU consumption for downstream applications accessing processed data.

How Can Airbyte Help Optimize DynamoDB Query Costs?

  1. Incremental Data Syncs – The DynamoDB connector fetches only changed data through DynamoDB Streams integration, dramatically reducing RCU consumption compared to full table scans.

  2. Efficient Data Normalization – Built-in normalization converts nested JSON structures to relational formats, reducing query complexity and minimizing read operations for downstream analytics.

  3. Filtering & Column Selection – Sync only required fields and apply row-level filtering to reduce unnecessary read operations and data transfer costs.

  4. Compression & Deduplication – Minimize redundant reads during integration processes while optimizing batch operations to stay within DynamoDB's throughput limits.

  5. Scheduling & Batch Operations – Consolidate multiple small queries into fewer, larger operations that maximize throughput efficiency and reduce per-operation overhead.

  6. Change Data Capture Implementation – CDC through DynamoDB Streams avoids costly full-table scans when real-time synchronization requirements don't demand immediate consistency.

  7. Monitoring & Observability – Detailed operational logs identify inefficient query patterns and optimization opportunities for ongoing cost management.

  8. Flexible Deployment Options – Deploy on Airbyte Cloud, self-managed infrastructure, or hybrid configurations to optimize data processing costs while maintaining integration capabilities.

Practical Implementation Example

An e-commerce platform originally performed daily full-table scans of order data, consuming substantial RCUs during batch processing windows. By implementing Airbyte's incremental sync capabilities with DynamoDB Streams, the organization reduced read capacity consumption by 75% while achieving near real-time data availability for analytics workflows. This optimization enabled reinvestment of cost savings into additional data sources and expanded analytics capabilities.

Conclusion

DynamoDB's recent pricing evolution, highlighted by the November 2024 cost reductions and enhanced capacity management features, fundamentally improves the value proposition for organizations seeking scalable NoSQL solutions. The 50% reduction in on-demand throughput costs and dramatic improvements in Global Tables pricing make DynamoDB increasingly competitive against both managed alternatives and self-hosted solutions when total cost of ownership is considered.

Effective cost optimization requires understanding the interplay between capacity modes, leveraging new features like warm throughput management, and implementing strategic integrations with platforms like Airbyte that reduce operational overhead while maintaining performance. Organizations that combine these pricing advantages with thoughtful architecture design and comprehensive monitoring can achieve substantial cost reductions while building scalable, resilient data infrastructure.

For organizations evaluating DynamoDB deployments or seeking to optimize existing implementations, focus on workload-appropriate capacity mode selection, strategic use of new cost management features, and integration patterns that maximize operational efficiency. The enhanced pricing structure and expanded feature set position DynamoDB as a compelling foundation for modern data architectures requiring both cost efficiency and operational flexibility.


