Redshift Concurrency Scaling: Managing High Query Loads and Performance

March 5, 2024
15 min read

Redshift is one of the leading cloud data warehouses because of the cutting-edge features it offers. One of the features it is best known for is concurrency scaling. With this feature, you can support virtually unlimited concurrent queries and users while maintaining performance. So, if your organization experiences unpredictable workloads and wants consistent performance, concurrency scaling is ideal for you. However, it can be a challenge to understand this feature and harness its full potential. 

You will learn about Redshift and its concurrency scaling feature in this article. 

Amazon Redshift Overview

Created by Amazon Web Services (AWS), Redshift is a modern cloud-native data warehouse that allows you to store and handle data efficiently from a centralized repository. The platform provides petabyte-scale cloud storage to handle and process huge amounts of structured and semi-structured data. Unlike row-oriented transactional databases, Redshift stores data in a columnar format, which makes analytical queries more efficient.

In addition, it offers a serverless option, which enables you to handle analytic workloads of any size without the need to provision or manage data warehouse infrastructure. Some major organizations that use Redshift in their data stack include Figma, Lyft, and Coursera. 

Key features of Redshift:

  • Federated Queries: Redshift’s federated query capability helps you query live data across one or more Amazon relational database services, including Aurora PostgreSQL and Aurora MySQL databases, without migrating the data (see the sketch after this list). 
  • Machine Learning: Redshift uses machine learning (ML) to deliver high throughput, irrespective of your concurrent usage or workloads. It uses ML models to predict incoming query run times and route each query to the appropriate queue for faster processing. 
  • Result Caching: Redshift caches query results, so repeat queries return in sub-second time. This increases performance for business intelligence, dashboard, and visualization tools that often execute the same queries repeatedly. 
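
To make the federated query feature concrete, here is a minimal sketch in Python using the open-source redshift_connector driver. The cluster endpoint, Aurora PostgreSQL endpoint, IAM role, secret ARN, and table names are all placeholders you would replace with your own; treat this as an illustration of the pattern rather than a copy-paste recipe.

```python
# Minimal sketch: querying live Aurora PostgreSQL data from Redshift.
# All endpoints, ARNs, and table names below are placeholders.
import redshift_connector

conn = redshift_connector.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder endpoint
    database="dev",
    user="awsuser",
    password="...",
)
conn.autocommit = True
cur = conn.cursor()

# Expose a live Aurora PostgreSQL schema inside Redshift as an external schema.
cur.execute("""
    CREATE EXTERNAL SCHEMA IF NOT EXISTS apg_sales
    FROM POSTGRES
    DATABASE 'sales' SCHEMA 'public'
    URI 'aurora-cluster.cluster-abc123.us-east-1.rds.amazonaws.com' PORT 5432
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftFederatedRole'
    SECRET_ARN 'arn:aws:secretsmanager:us-east-1:123456789012:secret:apg-creds'
""")

# Join live operational rows with warehouse tables -- no data migration needed.
cur.execute("""
    SELECT o.order_id, o.amount, c.region
    FROM apg_sales.orders AS o
    JOIN dim_customer AS c ON c.customer_id = o.customer_id
    LIMIT 10
""")
print(cur.fetchall())
```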

Concurrency Scaling in Redshift

Redshift offers a built-in concurrency scaling feature. With it, you can easily support thousands of concurrent users and queries while maintaining consistently fast performance. 

As concurrency requirements increase, Redshift automatically adds query processing power within seconds so queries run without delays. When workload demand drops, the extra capacity is automatically removed, so you pay only for the time concurrency scaling clusters are actively processing queries within the Redshift data warehouse. 
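
Because billing is tied to how long concurrency scaling clusters run, it is worth keeping an eye on that usage. Below is a hedged sketch that assumes the SVCS_CONCURRENCY_SCALING_USAGE system view (verify the exact columns available on your cluster against the AWS docs) and the same placeholder connection details as the federated query example above.

```python
# Sketch: reviewing concurrency scaling usage, which drives the cost.
# Connection details are placeholders; column names follow the documented
# SVCS_CONCURRENCY_SCALING_USAGE view and should be verified for your cluster.
import redshift_connector

conn = redshift_connector.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
    database="dev", user="awsuser", password="...",
)
cur = conn.cursor()
cur.execute("""
    SELECT start_time,
           end_time,
           queries,             -- queries served by scaling clusters in the period
           usage_in_seconds     -- billable concurrency scaling seconds
    FROM svcs_concurrency_scaling_usage
    ORDER BY start_time DESC
    LIMIT 20
""")
for row in cur.fetchall():
    print(row)
```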

Concurrency scaling in Redshift gives you many capabilities. Some of them are mentioned below: 

  • Get consistently fast performance for hundreds or thousands of concurrent users and queries. 
  • Allocate concurrency scaling to specific workloads and user groups through WLM queues, and control how many additional clusters can be used (see the configuration sketch after this list).
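
The second point maps to Redshift's workload management (WLM) configuration, where concurrency scaling is switched on per queue. Below is a hedged sketch of a manual WLM setup expressed in Python: the queue layout, user group, and query group names are invented for illustration, and the resulting JSON string is what you would supply as the wlm_json_configuration cluster parameter.

```python
# Sketch of a manual WLM configuration: concurrency scaling is enabled only
# for the BI queue, so dashboard users can burst to extra clusters while ETL
# stays on the main cluster. Group names are placeholders.
import json

wlm_queues = [
    {
        "user_group": ["bi_dashboard_users"],   # queries from these users land here
        "query_concurrency": 5,
        "memory_percent_to_use": 40,
        "concurrency_scaling": "auto",          # may burst to scaling clusters
    },
    {
        "query_group": ["etl"],                 # jobs tagged with SET query_group TO 'etl'
        "query_concurrency": 3,
        "memory_percent_to_use": 40,
        "concurrency_scaling": "off",           # keep ETL on the main cluster
    },
    {
        "query_concurrency": 5,                 # default queue for everything else
        "memory_percent_to_use": 20,
        "concurrency_scaling": "off",
    },
]

# This string is the value of the wlm_json_configuration parameter.
print(json.dumps(wlm_queues))
```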

How Does Concurrency Scaling Work?

Redshift is a multi-cluster data warehouse: a warehouse consists of one or more compute clusters that execute queries. You can cap how many additional clusters may be allocated to the warehouse according to your requirements (via the max_concurrency_scaling_clusters parameter).

Within that cap, concurrency scaling handles the scaling decisions for you. When queries are submitted to a cluster in the warehouse, Redshift monitors the workload and determines whether additional compute resources are required to maintain optimal performance. If it detects a high concurrency level, it automatically spins up additional concurrency scaling clusters and routes eligible queries to them. These additional clusters operate in parallel with the main cluster, so queued queries are processed efficiently without impacting existing workloads. 

When resource demand decreases, Redshift automatically releases the additional clusters. 
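
To see this behavior on your own cluster, you can check which recent queries were offloaded. Here is a hedged sketch that assumes the STL_QUERY system table's concurrency_scaling_status column (a value of 1 generally indicates the query ran on a concurrency scaling cluster; verify the exact semantics in the AWS docs) and placeholder connection details.

```python
# Sketch: listing recent queries and whether they ran on a concurrency
# scaling cluster. Connection details are placeholders.
import redshift_connector

conn = redshift_connector.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
    database="dev", user="awsuser", password="...",
)
cur = conn.cursor()
cur.execute("""
    SELECT query,
           concurrency_scaling_status,   -- 1 = ran on a scaling cluster
           starttime,
           TRIM(querytxt) AS sql_text
    FROM stl_query
    WHERE userid > 1                     -- skip internal system queries
    ORDER BY starttime DESC
    LIMIT 20
""")
for row in cur.fetchall():
    print(row)
```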

Benefits of Concurrency Scaling

Here are some of the benefits of the concurrency scaling feature: 

  • Improved Query Performance: Concurrency scaling allocates resources dynamically on demand. As a result, query latency decreases, and system responsiveness is increased overall. 
  • Reduced Queue Times: Concurrency scaling routes queries waiting in the queue to additional clusters. This leads to shorter queue times and faster query execution for a more efficient analytics environment. 
  • Enhanced Scalability: With concurrency scaling, you can seamlessly handle sudden spikes in query demand without experiencing performance degradation. It enables clusters in Redshift to scale efficiently to accommodate growing volumes and increasing query complexity. 

How to Enable Concurrency Scaling in Redshift? 

By default, concurrency scaling is turned off in Redshift, so you have to enable it manually to reap its benefits. Here’s how you can enable concurrency scaling in Redshift: 

  • Log in to your AWS account and go to the Amazon Redshift console. 
  • Click on Workload Management from the left side of the navigation menu. 
  • Select your cluster WLM parameter group from the subsequent pull-down menu. 
  • You’ll see a new column called Concurrency Scaling mode next to every queue.
  • The default setting will be off. Click on Edit, and you can modify settings for each queue. 

That’s it. Set the Concurrency Scaling mode to auto for the queues you want to scale, save your changes, and the feature is ready to use in Redshift. 
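
If you prefer to manage this outside the console, the same settings live in the cluster's parameter group and can be edited with the AWS SDK. Here is a hedged sketch using boto3; the parameter group name is a placeholder, and the per-queue Concurrency Scaling mode itself is part of the wlm_json_configuration parameter sketched earlier.

```python
# Sketch: raising the cap on concurrency scaling clusters programmatically.
# The parameter group name is a placeholder for your cluster's WLM group.
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

redshift.modify_cluster_parameter_group(
    ParameterGroupName="my-wlm-parameter-group",  # placeholder
    Parameters=[
        {
            # Maximum number of extra clusters Redshift may spin up (default 1).
            "ParameterName": "max_concurrency_scaling_clusters",
            "ParameterValue": "4",
            "ApplyType": "dynamic",
        },
    ],
)
```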

Concurrency Scaling Capabilities for Read and Write Operations

Concurrency scaling is most commonly associated with managing high read query loads: turning it on for a workload management (WLM) queue offloads eligible read operations, such as dashboard queries, to additional clusters. However, it is also handy for write operations. 

On the write side, concurrency scaling supports frequently used write operations such as extract, transform, and load (ETL) statements. It improves throughput for write operations that contend for resources on the main cluster, which is especially useful when your cluster receives many requests and you want to maintain consistent response times. 

Concurrency scaling supports COPY, INSERT, UPDATE, DELETE, and CREATE TABLE AS (CTAS) statements. If an explicit transaction also contains unsupported write statements, such as a plain CREATE TABLE without AS, none of the statements in that transaction run on concurrency scaling clusters; everything executes on the main cluster instead. A couple of supported write statements are sketched below. 
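
For illustration, here is a hedged sketch of two supported write statements submitted from Python. Table names, the S3 path, and the IAM role are placeholders, and whether a given statement actually runs on a scaling cluster depends on your WLM queue settings and current workload.

```python
# Sketch: write statements eligible for concurrency scaling (CTAS and COPY).
# All identifiers, paths, and ARNs are placeholders.
import redshift_connector

conn = redshift_connector.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
    database="dev", user="awsuser", password="...",
)
conn.autocommit = True
cur = conn.cursor()

# CREATE TABLE AS (CTAS) -- a supported write statement.
cur.execute("""
    CREATE TABLE daily_sales_summary AS
    SELECT sale_date, SUM(amount) AS total_amount
    FROM sales
    GROUP BY sale_date
""")

# COPY from S3 -- also supported (though ANALYZE for COPY is not).
cur.execute("""
    COPY sales
    FROM 's3://my-bucket/sales/2024-03-05/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
    FORMAT AS PARQUET
""")
```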

Limitations of Concurrency Scaling in Redshift

Concurrency scaling is a handy feature with a lot of benefits. However, it also comes with some limitations. Below are some of the limitations of using this feature in Amazon Redshift: 

  • It doesn’t support queries on tables that use interleaved sort keys. 
  • It doesn’t support queries on temporary tables. 
  • It doesn’t support queries that access external resources protected by restrictive network or virtual private cloud (VPC) configurations. 
  • It doesn’t support queries that use Python user-defined functions (UDFs) or Lambda UDFs. 
  • It doesn’t support concurrency scaling for write operations on DDL statements such as ALTER TABLE or CREATE TABLE. 
  • It only supports concurrency scaling for write operations on RA3 node types, specifically ra3.16xlarge, ra3.4xlarge, and ra3.xlplus (see the sketch after this list for a quick node-type check). 
  • It doesn’t support ANALYZE for COPY commands. 
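
Since write-operation support depends on the node type, a quick check can save some head-scratching. Here is a hedged sketch with boto3; the cluster identifier is a placeholder.

```python
# Sketch: verifying the cluster runs an RA3 node type before relying on
# concurrency scaling for writes. The cluster identifier is a placeholder.
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")
resp = redshift.describe_clusters(ClusterIdentifier="my-cluster")
node_type = resp["Clusters"][0]["NodeType"]

if node_type in ("ra3.16xlarge", "ra3.4xlarge", "ra3.xlplus"):
    print(f"{node_type}: write concurrency scaling is available")
else:
    print(f"{node_type}: write queries will stay on the main cluster")
```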

You can read more about these limitations in the official AWS Redshift documentation. 

Integrate Data to Redshift Using Airbyte

After learning about concurrency scaling in detail, you might want to implement it. To do that, you often first need to centralize data from disparate sources into Redshift, and that’s where a tool like Airbyte can help. 

Airbyte is a leading data integration tool that connects disparate data sources to Redshift. With 350+ pre-built connectors, robust orchestration, and strong security capabilities, you can automate the whole data integration process. In addition, if you don’t find a pre-built connector for your source, you can use the platform’s connector development kit to create a custom one within minutes.

Some of the key features of Airbyte include: 

  • Flexibly Manage ELT Pipelines: Airbyte offers three ways to manage data pipelines: the user interface, the API, and the Terraform provider. The user interface is for quick, no-code setup, while the API and Terraform provider let you create and manage pipelines programmatically and as code. 
  • Change Data Capture (CDC) Capabilities: Airbyte offers CDC capabilities that allow you to synchronize changes made to the source dataset with the destination. All you have to do is set an incremental sync frequency while establishing your data pipeline. At each sync interval, any changes in your source database are migrated to Redshift without disruption.

Conclusion

In this article, you have learned in detail about the concurrency scaling feature of Redshift, including what it is, how it works, how it handles read and write operations, and its limitations. With this foundation, you can get started with concurrency scaling without much confusion.

Concurrency scaling is an ideal feature if you are dealing with unpredictable workloads and want scalability without manual intervention. You can start with a single concurrency scaling cluster, then monitor peak-load behavior in the console to determine whether the additional clusters are being fully utilized.

To integrate data from data sources into Redshift, you can use Airbyte. It automates all of your data integration tasks using the ELT approach. Sign up for Airbyte today!
