Hey everyone, welcome to the June edition of The Drip, where we take you downstream to cover highlights of our changelog, community, and anything Airbyte-related.
We now have a Terraform Provider
We are thrilled to announce the launch of our new Terraform Provider, a significant step forward in managing your Modern Data Stack. Built on our new Airbyte API, the provider integrates Airbyte seamlessly into your Terraform-managed stack. It streamlines configuration changes and fosters effortless collaboration, harnessing the power of "Infrastructure-as-Code" to save you time and minimize errors. The provider is currently available to Airbyte Cloud customers, and we're excited to extend it to Airbyte OSS and self-hosted Airbyte Enterprise deployments in Q3. With it, you can manage Airbyte resources programmatically, keep your Airbyte configuration alongside the rest of your infrastructure, and automate at scale.
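To give a flavor of what "Airbyte as code" looks like, here is a minimal Terraform sketch. The provider source, authentication attribute, and resource/attribute names below are illustrative assumptions, not a definitive reference; check the provider's registry documentation for the exact schema available to your Airbyte version.

```hcl
terraform {
  required_providers {
    # Assumed registry address for the Airbyte provider
    airbyte = {
      source = "airbytehq/airbyte"
    }
  }
}

provider "airbyte" {
  # Assumed auth attribute: an Airbyte Cloud API key passed as a variable
  bearer_auth = var.airbyte_api_key
}

# Hypothetical connection resource linking an existing source and destination,
# so the sync is version-controlled and reviewable like any other infrastructure.
resource "airbyte_connection" "example" {
  name           = "my-sync"
  source_id      = var.source_id
  destination_id = var.destination_id
}
```

Once a configuration like this is in place, changes to your pipelines go through `terraform plan` and `terraform apply`, so they can be reviewed and rolled back like the rest of your infrastructure.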
Interested in discovering more about how our new Terraform Provider can transform your data stack management? Click here to read the full article.
Exciting New Features and Improvements
In our continuous effort to enhance your experience with Airbyte, we're excited to share some of the new features and improvements we've introduced:
- Expanded Data Reach: We've extended our data reach with the addition of new destination connectors for Xata.io and Timeplus. We've also introduced a new source, KYVE Network, to further diversify the data you can harness.
- Enhancements to Existing Connectors: We've enriched our Source Outreach with new streams for users, tasks, templates, and snippets. Plus, Destination Databricks now supports schema evolution, offering you more flexibility with your data structures.
- Airbyte Platform Updates: We've made several updates to our platform, including highlighting schema errors for test records, enabling schema auto-propagation in the UI by default, and making cursor granularity optional. Our connector builder now auto-imports schemas and adds request options for the API key authenticator.
We're committed to providing you with a smoother user experience and have addressed various bugs, including improved error handling for several sources and significant improvements to our connector builder.
Stay tuned for more updates as we continue to enhance our platform. Your feedback is invaluable to us, so don't hesitate to share your thoughts and suggestions.
Upcoming Events This Month
We have a series of exciting events lined up this month at Airbyte:
- Creating Custom Connectors: On July 11th at 9:30 AM PT | 12:30 PM ET | 6:30 PM CET, we'll host a live demo to show you how to create your custom connector in the Airbyte Open Source UI. We'll take an HTTP API source and move data to the destination.
- Live Demo on Creating Data Pipelines: On July 13th at 9 AM PT | 6 PM CET, join us for a live demo where we'll show you how to create data pipelines in minutes and solve your data integration challenges with Airbyte.
- Showcase of Latest Features: On July 14th at 9 AM PT | 12 PM ET | 6 PM CET, our engineering team will be showcasing the latest features released on the Airbyte Platform, including Checkpointing, Schema Propagation, and Column Selection. (Link coming soon)
- Programmatic Interaction with Airbyte Cloud: On July 26th at 6 PM EST, join us for a live demo to learn how you can programmatically interact with Airbyte Cloud. We'll cover how to access the developer portal, create a connector, and include an orchestration tool.
- A Night for Data Trailblazers & Innovators: After the AWS New York Summit, join us for an evening of networking, socializing, and connecting with data experts. Enjoy cocktails, hors d'oeuvres, and lots of glow-in-the-dark swag!
Stay tuned for more details on these events. We look forward to seeing you there!
Teams using Airbyte for their AI needs
We're excited to spotlight two innovative teams that are leveraging Airbyte to revolutionize their data integration processes.
The Langchain team, as detailed in the tutorial "Implement AI data pipelines with Langchain, Airbyte, and Dagster", is pioneering the use of Airbyte to construct scalable pipelines. By integrating diverse data sources into large language models, they're able to feed the right contextual data to these models. This tutorial, authored by Joe Reuter, provides a deep dive into how Langchain combines Airbyte with Dagster to create a robust and maintainable data pipeline.
Meanwhile, the LlamaIndex team is breaking new ground by using Airbyte and LlamaIndex to interact with their data warehouse using natural language. Their tutorial, "Airbyte and LlamaIndex: ELT and Chat with your data warehouse without writing SQL", demonstrates how they bypass the need for SQL expertise. Authored by AJ Steers, this guide walks through their innovative process of using Airbyte to populate GitHub source data in Snowflake and then querying the database with LlamaIndex and GPT.
These teams exemplify the diverse and innovative applications of Airbyte. We're thrilled to see the transformative impact of our platform and look forward to seeing how more teams will leverage Airbyte to revolutionize their data integration processes.
And that’s all we have for June’s edition of The Drip. Thanks for reading through. If you have any questions:
- Please join our Slack community to talk to us on the Airbyte team as well as other fantastic folks in the community!
- Also sign up for our Newsletter to keep up with the state of the art in Data Integration and the broader Data Engineering Ecosystem!