

Building your pipeline or Using Airbyte
Airbyte is the only open source solution empowering data teams to meet all their growing custom business demands in the new AI era.

Building your own pipeline:
- Inconsistent and inaccurate data
- Laborious and expensive
- Brittle and inflexible

Using Airbyte:
- Reliable and accurate
- Extensible and scalable for all your needs
- Deployed and governed your way
What sets Airbyte apart
- Modern GenAI workflows
- Move large volumes, fast
- An extensible open-source standard
- Full control & security
- Fully featured & integrated
- Enterprise support with SLAs
What our users say
"For TUI Musement, Airbyte cut development time in half and enabled dynamic customer experiences."
"Airbyte helped us accelerate our progress by years, compared to our competitors. We don't need to worry about connectors and can focus on creating value for our users instead of building infrastructure. That's priceless. The time and energy saved allows us to disrupt and grow faster."
"With Airbyte, we could just push a few buttons, allow API access, and bring all the data into Google BigQuery. By blending all the different marketing data sources, we can gain valuable insights."
How to move data from Looker to MS SQL Server manually

Step 1: Export the data from Looker
Begin by exporting the data from Looker. In the Looker interface, run your desired query or open the data you need to move, then use Looker's export feature to download the results. Choose CSV, a universal format that imports cleanly into SQL Server, and save the file to your local machine.
Step 2: Validate and clean the CSV file
Open the exported CSV file and confirm the data is correctly formatted. Review the column headers and data types, and clean or adjust the data as needed for compatibility with your target MS SQL Server schema. Make sure there are no formatting issues, such as stray commas or embedded line breaks, that could disrupt the import process.
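As a minimal sketch of this validation step, the snippet below checks that every row of the export has the same number of fields as the header and strips stray whitespace. The sample data and column names are made up for the example; with a real export you would read from your downloaded file (e.g. `looker_export.csv`) instead of an in-memory string.

```python
import csv
import io

# Stand-in for the Looker CSV export; replace with open("looker_export.csv").
raw = """order_id,customer_email,total
1001,alice@example.com,19.99
1002,bob@example.com,42.50
1003,carol@example.com,7.25
"""

def validate_csv(text):
    """Check that every row matches the header's field count."""
    reader = csv.reader(io.StringIO(text))
    header = next(reader)
    rows = []
    for lineno, row in enumerate(reader, start=2):
        if len(row) != len(header):
            raise ValueError(
                f"Line {lineno}: expected {len(header)} fields, got {len(row)}"
            )
        # Strip stray whitespace that can break type conversion on import.
        rows.append([field.strip() for field in row])
    return header, rows

header, rows = validate_csv(raw)
print(header)     # column names to mirror in the SQL Server table
print(len(rows))  # row count to compare against after the import
```

Recording the header and row count here gives you a baseline to verify against once the data lands in SQL Server.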
Step 3: Create the target table in SQL Server
Access your MS SQL Server instance and create a table that matches the structure of your CSV file. Use SQL Server Management Studio (SSMS) or any SQL command-line tool to define the table with appropriate column names, choosing SQL Server data types compatible with the data you are importing.
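As a sketch of what the table definition might look like, the snippet below assembles a T-SQL `CREATE TABLE` statement from a column-to-type mapping. The table name (`dbo.looker_orders`) and the column types are assumptions for illustration; substitute the names and types from your own export (e.g. `INT`, `NVARCHAR`, `DECIMAL`, `DATETIME2`).

```python
# Hypothetical mapping from CSV columns to SQL Server types --
# adjust both names and types to match your actual export.
columns = {
    "order_id": "INT",
    "customer_email": "NVARCHAR(255)",
    "total": "DECIMAL(10, 2)",
}

# Build the T-SQL statement to run in SSMS or sqlcmd.
column_defs = ",\n".join(
    f"    [{name}] {sqltype}" for name, sqltype in columns.items()
)
ddl = f"CREATE TABLE dbo.looker_orders (\n{column_defs}\n);"
print(ddl)
```

Generating the statement from the header this way keeps the table definition and the CSV layout from drifting apart.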
Step 4: Launch the Import and Export Wizard
Open SQL Server Management Studio (SSMS) and connect to your database. Right-click the database where you want to import the data and choose "Tasks" > "Import Data" to launch the SQL Server Import and Export Wizard. Follow the prompts to select the CSV file as your data source.
Step 5: Configure the flat file source
In the wizard, choose "Flat File Source" as the data source and browse to your CSV file. Adjust the file format settings as needed, such as the delimiter (usually a comma for CSV files), and review the preview to confirm the data is being read correctly.
Step 6: Map columns to the target table
Proceed to the "Select Source Tables and Views" step of the wizard and map the columns from the CSV file to the corresponding columns in your SQL Server table. Ensure each CSV column is correctly aligned with the table structure you created in step 3.
Step 7: Run the import and verify the results
Complete the wizard to begin the data import. Once it finishes, verify the transfer by running a SELECT query on the newly populated table in SQL Server and checking for any discrepancies or errors that need addressing.
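The wizard performs the load for you, but the same flow can be sketched in code as a sanity check. The snippet below uses SQLite as a stand-in for SQL Server purely to keep the example self-contained; against a real SQL Server instance you would connect with a driver such as pyodbc instead, and the table and column names here are assumptions carried over from the earlier steps.

```python
import csv
import io
import sqlite3

# Stand-in for the validated Looker CSV export.
raw = """order_id,customer_email,total
1001,alice@example.com,19.99
1002,bob@example.com,42.50
"""

# SQLite in-memory database as a stand-in for your SQL Server database.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE looker_orders (order_id INTEGER, customer_email TEXT, total REAL)"
)

# Load: insert every CSV row into the target table.
reader = csv.DictReader(io.StringIO(raw))
conn.executemany(
    "INSERT INTO looker_orders VALUES (:order_id, :customer_email, :total)",
    list(reader),
)
conn.commit()

# Verify: row count plus a spot-check, mirroring the SELECT query
# you would run in SSMS after the wizard finishes.
count = conn.execute("SELECT COUNT(*) FROM looker_orders").fetchone()[0]
first = conn.execute(
    "SELECT customer_email FROM looker_orders ORDER BY order_id"
).fetchone()[0]
print(count, first)
```

Comparing the row count against the count you noted when validating the CSV is a quick way to catch silently dropped rows.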
By following these steps, you can effectively move data from Looker to MS SQL Server manually, ensuring control over the entire process without relying on third-party connectors.
FAQs
What is ETL?
ETL, an acronym for Extract, Transform, Load, is a vital data integration process. It involves extracting data from diverse sources, transforming it into a usable format, and loading it into a database, data warehouse or data lake. This process enables meaningful data analysis, enhancing business intelligence.
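To make the pattern concrete, here is a minimal illustration of Extract, Transform, Load; the source records and field names are made up for the example, and a plain list stands in for the target warehouse.

```python
# Extract: records pulled from a source system (made-up sample data).
source = [
    {"name": " Alice ", "revenue": "1200"},
    {"name": "Bob", "revenue": "800"},
]

# Transform BEFORE loading -- the defining trait of ETL: clean strings
# and cast types into a usable shape.
transformed = [
    {"name": r["name"].strip(), "revenue": int(r["revenue"])}
    for r in source
]

# Load: write the transformed records into the target (a list here,
# standing in for a database, warehouse, or lake).
warehouse = []
warehouse.extend(transformed)
print(warehouse)
```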
What is Looker?
Looker is a Google Cloud-based enterprise platform that provides information and insights to help move businesses forward. Looker reveals data in clear and understandable formats that enable companies to build data applications and create data experiences tailored specifically to their own organization. Looker's capabilities for data applications, business intelligence, and embedded analytics make it helpful for anyone requiring data to perform their job—from data analysts and data scientists to business executives and partners.
What data can you extract from Looker?
Looker's API provides access to a wide range of data categories, including:
1. User and account data: This includes information about users and their accounts, such as user IDs, email addresses, and account settings.
2. Query and report data: Looker's API allows users to retrieve data from queries and reports, including metadata about the queries and reports themselves.
3. Dashboard and visualization data: Users can access data about dashboards and visualizations, including the layout and configuration of these elements.
4. Data model and schema data: Looker's API provides access to information about the data model and schema, including tables, fields, and relationships between them.
5. Data access and permissions data: Users can retrieve information about data access and permissions, including which users have access to which data and what level of access they have.
6. Integration and extension data: Looker's API allows users to integrate and extend Looker with other tools and platforms, such as custom applications and third-party services.
Overall, Looker's API provides a comprehensive set of data categories that enable users to access and manipulate data in a variety of ways.
What is ELT?
ELT, standing for Extract, Load, Transform, is a modern take on the traditional ETL data integration process. In ELT, data is first extracted from various sources, loaded directly into a data warehouse, and then transformed. This approach enhances data processing speed, analytical flexibility and autonomy.
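For contrast with the ETL example above, here is a minimal illustration of ELT: the raw data is loaded first, and the transformation runs inside the warehouse using SQL. SQLite serves as a stand-in warehouse to keep the sketch self-contained, and the table and column names are made up for the example.

```python
import sqlite3

# SQLite in-memory database standing in for a data warehouse.
conn = sqlite3.connect(":memory:")

# Extract + Load: raw, untransformed records land in the warehouse as-is.
conn.execute("CREATE TABLE raw_events (name TEXT, revenue TEXT)")
conn.executemany(
    "INSERT INTO raw_events VALUES (?, ?)",
    [(" Alice ", "1200"), ("Bob", "800")],
)

# Transform AFTER loading -- the defining trait of ELT: the cleanup and
# casting run as SQL inside the warehouse itself.
conn.execute("""
    CREATE TABLE events AS
    SELECT TRIM(name) AS name, CAST(revenue AS INTEGER) AS revenue
    FROM raw_events
""")
rows = conn.execute(
    "SELECT name, revenue FROM events ORDER BY revenue DESC"
).fetchall()
print(rows)
```

Keeping the raw table around is a common ELT design choice: transformations can be re-run or revised later without re-extracting from the source.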
Difference between ETL and ELT?
ETL and ELT are critical data integration strategies with key differences. ETL (Extract, Transform, Load) transforms data before loading, ideal for structured data. In contrast, ELT (Extract, Load, Transform) loads data before transformation, perfect for processing large, diverse data sets in modern data warehouses. ELT is becoming the new standard as it offers greater flexibility and autonomy to data analysts.