8 Use Cases of LangChain

September 3, 2024
30 min read

Artificial intelligence, or AI, has become one of the defining technology trends of recent years, with applications across a wide range of fields. New terms keep emerging in the AI space, and knowing them helps you stay at the leading edge of this tech revolution.

One such term is LangChain, an AI framework that allows you to build LLM applications effortlessly. The graph below shows the interest of the AI community in the term ‘LangChain’ in recent years.

LangChain's Interest Over Time

If you are wondering how LangChain can help you develop AI applications, you have come to the right place. This article highlights the most popular LangChain use cases to help you build your AI-powered applications.

8 Amazing LangChain Use Cases

LangChain is a powerful framework for building applications that leverage the capabilities of robust large language models (LLMs). Let’s examine its use cases and how they can be used to create, test, and deploy applications.

Before exploring the LangChain use cases, you must check the availability of your data. The data you use to train your model is often scattered across diverse sources, where it loses much of its value. It is therefore essential to extract this data and store it in a single repository where it can be easily accessed. In this situation, no-code data movement platforms like Airbyte can help streamline data integration.

Airbyte

Airbyte provides 350+ pre-built data connectors to extract and load your data into a destination. Let’s look at some of the key features it offers:

  • GenAI Workflows: Airbyte supports RAG-specific transformations that let you perform complex tasks, such as chunking and embedding. This feature enables you to effortlessly load and store data in a single operation.
  • Connector Development Kit: Airbyte enables you to create custom connectors within minutes with the help of its Connector Development Kit (CDK).
  • PyAirbyte: Airbyte provides a Python library, PyAirbyte, that allows you to extract data using Airbyte connectors, transform it as per your requirements, and then store it in a local cache.

Here’s how PyAirbyte allows you to enhance LangChain use cases by providing data accessibility for model training. Let’s assume you want to extract data from a CSV file present in Google Drive and convert it into a format that LangChain can access.

Before performing the data extraction steps, ensure PyAirbyte is installed on your local machine. To do so, execute the command below in your preferred environment, such as a Jupyter Notebook:

%pip install --quiet airbyte

You can execute the code below to perform data extraction. Remember to change the file path, the Google Drive credentials, and other placeholders in the code given below:

import airbyte as ab

service_json = ab.get_secret('service_json')

source = ab.get_source(
    "source-google-drive",
    install_if_missing=True,
    config={
        "folder_url": "https://drive.google.com/drive/folders/1txtyBv_mfXYjn0R_-oxV3Vg5QOi-6XaI",
        "credentials": {
            "auth_type": "Service",
            "service_account_info": f"""{service_json}""",
        },
        "streams": [{
            "name": "NFLX",
            "globs": ["**/*.csv"],
            "format": {
                "filetype": "csv"
            },
            "validation_policy": "Emit Record",
            "days_to_sync_if_history_is_full": 3
        }]
    },
)

Verify the configurations and credentials by running ‘check’:

source.check()

The next step reads the data from the CSV file in Google Drive and converts it into a list of document objects that LangChain can work with. Start by selecting the streams, reading the source, and initializing an empty list:

source.select_all_streams()
read_result = source.read()
documents_list = []

Convert the read data into document objects and add them to the list:

for key, value in read_result.items():
    docs = value.to_documents()
    for doc in docs:
        documents_list.append(doc)

Print a single row of the CSV file:

print(str(documents_list[0]))

You can now split this list into smaller chunks and store them in a vector database, which your LLM application can eventually draw on. Here’s an in-depth tutorial that will guide you on how to build an end-to-end RAG pipeline.
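
As a rough sketch of that next step (the chunk size, overlap, and FAISS vector store are illustrative choices; the snippet assumes the langchain, openai, and faiss-cpu packages plus an OPENAI_API_KEY environment variable):

from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Split the PyAirbyte documents into smaller chunks
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunked_docs = splitter.split_documents(documents_list)

# Embed the chunks and store them in a local FAISS index for later retrieval
vector_store = FAISS.from_documents(chunked_docs, OpenAIEmbeddings())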

Let’s get into the primary use cases of LangChain now and learn how it can be beneficial in building efficient LLM apps.

Summarization

LangChain Summarization Use Case

Summarization is the most basic use case of LLMs and LangChain. It enables you to summarize the content of important documents, including articles, chat history, medical papers, legal documents, and research papers.

The length of the document matters a lot, as LLMs can only process a limited number of tokens at once. This is why larger texts must be broken into smaller segments.

To build a summarizer for large amounts of data, you can use two common approaches.

One is stuff, which simply stuffs all the documents into a single prompt. The other is map-reduce, which splits the original document into smaller chunks, processes each chunk, and combines the results. Before getting into the summarization process, ensure you satisfy the prerequisites.

Prerequisites

Install the necessary packages and set up the environment variables.

%pip install --upgrade --quiet langchain-openai langchain

from dotenv import load_dotenv
import os

load_dotenv()

openai_api_key = os.getenv('OPENAI_API_KEY', 'YourAPIKeyIfNotSet')

Let’s summarize short and long text documents using OpenAI and LangChain.

Summarizing Short Text

For short text, you do not need to define a chain, as the word count stays within the model’s token limit.

from langchain.llms import OpenAI
from langchain import PromptTemplate

The default model is already 'text-davinci-003'. You can change it later if you want.

llm = OpenAI(temperature=0, model_name='text-davinci-003', openai_api_key=openai_api_key)

Create a template for your summarizer:

template = """
%INSTRUCTIONS:
Please summarize the following piece of text.
Respond in a manner that a 5 year old would understand.

%TEXT:
{text}
"""

Create a LangChain prompt template that you can insert values to later:

prompt = PromptTemplate(
    input_variables=["text"],
    template=template,
)

You can provide a text to summarize:

confusing_text = """
For the next 130 years, debate raged.
Some scientists called Prototaxites a lichen, others a fungus, and still others clung to the notion that it was some kind of tree.
“The problem is that when you look up close at the anatomy, it’s evocative of a lot of different things, but it’s diagnostic of nothing,” says Boyce, an associate professor in geophysical sciences and the Committee on Evolutionary Biology.
“And it’s so damn big that when whenever someone says it’s something, everyone else’s hackles get up: ‘How could you have a lichen 20 feet tall?’”
"""

Finally, create the final prompt from the confusing text provided above:

final_prompt = prompt.format(text=confusing_text)
print(final_prompt)

You must pass the prompt to the LLM model and print the summary output:

output = llm(final_prompt)
print (output)

Output:

For 130 years, people argued about what Prototaxites was. Some thought it was a lichen, some thought it was a fungus, and some thought it was a tree. But no one could agree. It was so big that it was hard to figure out what it was.

Summarizing Larger Text

Summarizing more extensive texts can become complicated because they exceed the model’s token limit. Fortunately, LangChain provides a load_summarize_chain function to facilitate larger text summarization. The Python code below uses this function to summarize one of Paul Graham’s startup essays.

Import the necessary libraries:

from langchain.llms import OpenAI
from langchain.chains.summarize import load_summarize_chain
from langchain.text_splitter import RecursiveCharacterTextSplitter

llm = OpenAI(temperature=0, openai_api_key=openai_api_key)

You can open up a large document to summarize:

with open('data/PaulGrahamEssays/good.txt', 'r') as file:
    text = file.read()

text_splitter = RecursiveCharacterTextSplitter(separators=["\n\n", "\n"], chunk_size=5000, chunk_overlap=350)
docs = text_splitter.create_documents([text])
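
Before picking a chain type, it can help to confirm that the essay really is too large for a single prompt. LangChain’s LLM wrappers expose a token counter you can use for a quick, optional check:

# Roughly how many tokens does the full essay contain?
num_tokens = llm.get_num_tokens(text)
print(f"The essay contains roughly {num_tokens} tokens")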

Get your chain ready to use. You can set the chain_type to map_reduce or stuff, and optionally pass verbose=True if you want to see what is being sent to the LLM:

chain = load_summarize_chain(llm=llm, chain_type='map_reduce')

The chain runs through the document chunks, summarizes each one, and then produces a combined summary, which is stored in the output variable:

output = chain.run(docs)
print(output)

Output:

This essay looks at the idea of benevolence in startups and how it can help them succeed. It explains how benevolence can improve morale, make people want to help, and help startups be decisive. It also looks at how markets have evolved to value potential dividends and potential earnings and how users dislike their new operating system. The author argues that starting a company with benevolent aims is currently undervalued and that Y Combinator's motto of "Make something people want" is a powerful concept.

Chatbots

Chatbots are among the most common LLM use cases, enabling you to deploy bots that can maintain long conversations with users. They are commonly niche-specific, which is why chatbots often use retrieval-augmented generation (RAG) over private data to answer domain-specific questions.

With the help of a memory element and natural language processing (NLP), chatbots can hold real-time conversations with users. Well-known chatbot implementations include NexusGPT, ChatBase, Capital One’s Eno, and H&M’s Kik chatbot. Let’s look at the Python code used to create a chatbot using LangChain.

from langchain.llms import OpenAI
from langchain import LLMChain
from langchain.prompts.prompt import PromptTemplate

Import the chat-specific component:

from langchain.memory import ConversationBufferMemory

template = """
You are a chatbot that is unhelpful.
Your goal is to not help the user but only make jokes.
Take what the user is saying and make a joke out of it

{chat_history}
Human: {human_input}
Chatbot:"""

prompt = PromptTemplate(
    input_variables=["chat_history", "human_input"], 
    template=template
)
memory = ConversationBufferMemory(memory_key="chat_history")

llm_chain = LLMChain(
    llm=OpenAI(openai_api_key=openai_api_key), 
    prompt=prompt, 
    verbose=True, 
    memory=memory
)

Input:

llm_chain.predict(human_input="Is an pear a fruit or vegetable?")

Output: Yes, a pear is a fruit of confusion!

Input:

llm_chain.predict(human_input="What was one of the fruits I first asked you about?")

Output: I think it was the fruit of knowledge!
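
Because the chain shares a ConversationBufferMemory, you can also peek at the chat history it has accumulated so far; this optional check simply prints the memory buffer:

# Inspect the conversation history stored in memory
print(memory.buffer)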

Agents

LangChain Agent

LLM agents are robust AI systems capable of producing complex yet contextually relevant responses. They can reason through a problem, remember previous conversations, and use tools to adjust their responses to the situation at hand.

LLM agents have become a trending topic in artificial intelligence. AutoGPT and BabyAGI are examples of advanced LLM agents. Let’s explore how LangChain can help you easily create your AI agent. The agent below pulls data from Google to answer questions.

import os
import json

from langchain.llms import OpenAI

Import LangChain agents:

from langchain.agents import load_tools
from langchain.agents import initialize_agent

You must also import the necessary tools:

from langchain.agents import Tool
from langchain.utilities import GoogleSearchAPIWrapper
from langchain.utilities import TextRequestsWrapper

GOOGLE_CSE_ID = os.getenv('GOOGLE_CSE_ID', 'YourAPIKeyIfNotSet')
GOOGLE_API_KEY = os.getenv('GOOGLE_API_KEY', 'YourAPIKeyIfNotSet')

llm = OpenAI(temperature=0, openai_api_key=openai_api_key)

search = GoogleSearchAPIWrapper(google_api_key=GOOGLE_API_KEY, google_cse_id=GOOGLE_CSE_ID)

requests = TextRequestsWrapper()

toolkit = [
    Tool(
        name="Search",
        func=search.run,
        description="Useful for when you need to search Google to answer questions about current events"
    ),
    Tool(
        name="Requests",
        func=requests.get,
        description="Useful for when you need to make a request to a URL"
    ),
]

agent = initialize_agent(toolkit, llm, agent="zero-shot-react-description", verbose=True, return_intermediate_steps=True)

Input:

response = agent({"input": "What is the capital of Canada?"})
response['output']

Output: Ottawa is the capital of Canada.
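
Since the agent was initialized with return_intermediate_steps=True, you can also inspect the tool calls it made along the way. The sketch below (output will vary from run to run) walks through the reasoning trace:

# Each intermediate step is an (action, observation) pair
for action, observation in response["intermediate_steps"]:
    print(f"Tool: {action.tool} | Input: {action.tool_input}")
    print(f"Observation: {observation[:200]}")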

Interacting with APIs

Interacting With APIs Using LangChain

Connecting LLMs with APIs expands their capabilities and enables natural language interaction with external services. For example, an LLM can call a weather API to get real-time weather updates and answer users' queries.

Below are the steps required to enable an LLM to interact with an API.

Import the required libraries:

from langchain.chains import APIChain
from langchain.llms import OpenAI

Run the following command to create an LLM model:

llm = OpenAI(temperature=0, openai_api_key=openai_api_key)

The LangChain APIChain will read through the documentation to identify endpoints.

api_docs = """

BASE URL: https://restcountries.com/

API Documentation:

The API endpoint /v3.1/name/{name} is used to find information about a country. All URL parameters are listed below:
    - name: Name of country - Ex: italy, france

The API endpoint /v3.1/currency/{currency} is used to find information about the countries that use a given currency. All URL parameters are listed below:
    - currency: 3 letter currency. Example: USD, COP
    
Woo! This is my documentation
"""

chain_new = APIChain.from_llm_and_api_docs(llm, api_docs, verbose=True)

Make an API call:

chain_new.run('Can you tell me information about france?')
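
You can query the second documented endpoint in the same way; for example, a currency lookup (the exact wording of the question is up to you):

chain_new.run('Can you tell me about the countries that use the currency COP?')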

Understanding Code

Understanding Code With LangChain

Code understanding is among the most important LangChain use cases. Recently, LLM tools like GitHub Copilot and Amazon CodeWhisperer have gained popularity due to their code-assist features. These tools help people worldwide, even non-technical professionals, understand complex code repositories.

Beyond understanding code, professionals can build on complex codebases to create applications they otherwise would not have been able to. Let’s look at how you can develop your own coding assistant with the help of LangChain.

Import the os library that can interact with your operating system:

import os

Import the vector support libraries:

from langchain.vectorstores import FAISS
from langchain.embeddings.openai import OpenAIEmbeddings

Import the LangChain model:

from langchain.chat_models import ChatOpenAI

You must import text splitter libraries:

from langchain.text_splitter import CharacterTextSplitter
from langchain.document_loaders import TextLoader

llm = ChatOpenAI(model_name='gpt-3.5-turbo', openai_api_key=openai_api_key)

embeddings = OpenAIEmbeddings(disallowed_special=(), openai_api_key=openai_api_key)

root_dir = 'data/thefuzz'
docs = []

Run a for loop to iterate through each folder:

for dirpath, dirnames, filenames in os.walk(root_dir):
    
    for file in filenames:
        try:
            loader = TextLoader(os.path.join(dirpath, file), encoding='utf-8')
            docs.extend(loader.load_and_split())
        except Exception as e: 
            pass

docsearch = FAISS.from_documents(docs, embeddings)

Get your retriever ready:

from langchain.chains import RetrievalQA

qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=docsearch.as_retriever())

query = "What function do I use if I want to find the most similar item in a list of items?"
output = qa.run(query)

print(output)

Output:

You can use the process.extractOne() function from thefuzz package to find the most similar item in a list of items. Here's an example:

from thefuzz import process

choices = ["apple", "banana", "orange", "pear"]
query = "pineapple"

best_match = process.extractOne(query, choices)
print(best_match)

This would output (u'apple', 36), which means that the most similar item to “pineapple” in the list of choices is “apple”, with a similarity score of 36.

Querying Tabular Data

Querying Tabular Data Using LangChain

In many real-world applications, data resides in tabular form. Querying this data can enable you to extract useful insights and create strategies that improve business performance. LangChain provides the capabilities to perform operations on this data using natural language, making querying tabular data one of the essential LangChain use cases.

Here’s how you can query tabular data using LangChain and the San Francisco Trees dataset.

You must import the required libraries:

from langchain import OpenAI, SQLDatabase, SQLDatabaseChain

Create an OpenAI LLM model:

llm = OpenAI(temperature=0, openai_api_key=openai_api_key)

Define your data path:

sqlite_db_path = 'data/San_Francisco_Trees.db'
db = SQLDatabase.from_uri(f"sqlite:///{sqlite_db_path}")

db_chain = SQLDatabaseChain(llm=llm, database=db, verbose=True)

db_chain.run("How many species of trees are there in San Francisco?")

Extraction

Extraction is the process of pulling specific pieces of information out of a large text document. It is usually performed with output parsing, which organizes the extracted data into a structured format, like a spreadsheet, to make it analysis-ready.

This process includes extracting and loading specific information from text into a database or extracting parameters from user queries to make API calls. Kor is an example of an LLM library that lets you extract data from text.

Here is the code that you can execute to create your own extraction application using LangChain.

Import libraries to construct your chat messages:

from langchain.schema import HumanMessage
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate

You can use a chat model like gpt-3.5-turbo:

from langchain.chat_models import ChatOpenAI

To parse outputs and get structured data back:

from langchain.output_parsers import StructuredOutputParser, ResponseSchema

chat_model = ChatOpenAI(temperature=0, model_name='gpt-3.5-turbo', openai_api_key=openai_api_key)

Let’s look at two different approaches to performing extraction.

1. Simple Extraction

In a simple extraction, you must provide a prompt with instructions for the type of output you want.

instructions = """
You will be given a sentence with fruit names, extract those fruit names and assign an emoji to them
Return the fruit name and emojis in a python dictionary
"""

fruit_names = """
Apple, Pear, this is an kiwi
"""

Make your prompt which combines the instructions with the fruit names:

prompt = (instructions + fruit_names)

Call your LLM:

output = chat_model([HumanMessage(content=prompt)])

print (output.content)
print (type(output.content))

Output:

{'Apple': '🍎', 'Pear': '🍐', 'kiwi': '🥝'}

<class 'str'>

2. LangChain Response Schema

LangChain’s response schema helps you in two ways: it auto-generates formatting instructions for the prompt, so you don’t have to worry about the prompt engineering needed to get structured results, and it parses the LLM-generated output into a Python object you can work with.

Follow the code below to extract the name of a song and artist from a given user prompt:

Start by defining the schema you want:

response_schemas = [
    ResponseSchema(name="artist", description="The name of the musical artist"),
    ResponseSchema(name="song", description="The name of the song that the artist plays")
]

This parser will look for output matching the schema in the LLM’s response and return it to you:

output_parser = StructuredOutputParser.from_response_schemas(response_schemas)

Get the format instructions that LangChain makes:

format_instructions = output_parser.get_format_instructions()
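
If you are curious what this auto-generated instruction guide looks like, you can print it; this optional step only displays the formatting instructions LangChain will inject into the prompt:

print(format_instructions)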

Create the prompt template that brings it all together:

prompt = ChatPromptTemplate(
    messages=[
        HumanMessagePromptTemplate.from_template(
            "Given a command from the user, extract the artist and song names \n{format_instructions}\n{user_prompt}"
        )
    ],
    input_variables=["user_prompt"],
    partial_variables={"format_instructions": format_instructions}
)

song_query = prompt.format_prompt(user_prompt="I really like So Young by Portugal. The Man")

song_output = chat_model(song_query.to_messages())
output = output_parser.parse(song_output.content)

print (output)
print (type(output))

Output:

{'artist': 'Portugal. The Man', 'song': 'So Young'}

<class 'dict'>

Automated Scientific Literature Review

LangChain provides a procedure for evaluating the accuracy of the answers a retrieval chain returns. This use case can help review scientific research papers to determine whether the information retrieved is correct or incorrect. For this LangChain use case, you first build a question-answering chain over the papers and then apply an evaluation chain to grade the results.

Execute the code below to perform an evaluation:

Import embeddings, store, and retrieval libraries:

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA

Import model and doc loader:

from langchain import OpenAI
from langchain.document_loaders import TextLoader

from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.evaluation.qa import QAEvalChain

llm = OpenAI(temperature=0, openai_api_key=openai_api_key)

Load the essay:

loader = TextLoader('data/PaulGrahamEssays/worked.txt')
doc = loader.load()

text_splitter = RecursiveCharacterTextSplitter(chunk_size=3000, chunk_overlap=400)
docs = text_splitter.split_documents(doc)

You must create the embeddings and the document search index:

embeddings = OpenAIEmbeddings(openai_api_key=openai_api_key)
docsearch = FAISS.from_documents(docs, embeddings)

chain = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=docsearch.as_retriever(), input_key="question")

question_answers = [
    {'question' : "Which company sold the microcomputer kit that his friend built himself?", 'answer' : 'Healthkit'},
    {'question' : "What was the small city he talked about in the city that is the financial capital of USA?", 'answer' : 'Yorkville, NY'}
]

predictions = chain.apply(question_answers)

Now, you can start your eval_chain:

eval_chain = QAEvalChain.from_llm(llm)

The eval_chain will have the LLM grade its own answers. The code below tells the eval_chain where the different parts are:

graded_outputs = eval_chain.evaluate(question_answers,
                                     predictions,
                                     question_key="question",
                                     prediction_key="result",
                                     answer_key='answer')

graded_outputs

Output:

[{'text': ' CORRECT'}, {'text': ' INCORRECT'}]

How To Build Your Perfect LangChain Pipeline Using Airbyte?

Training an LLM can be difficult, as it requires a huge amount of data in the right context for appropriate results. If the data you train your LLM on is not of good quality, the results obtained might not be as accurate as you want them to be.

The data you want to train your LLM on might be present in various sources. Integrating this data into a single repository can be beneficial. However, the integration process can become cumbersome, consuming a lot of your time and resources.

This is where you can utilize no-code ELT tools like Airbyte with LangChain and orchestration tools like Dagster to build optimized data pipelines. Let’s consider a real-world example where you extract data from your Salesforce account and use it to train an LLM model to derive sales insights. Before getting started, the first step is to ensure that all the necessary libraries are properly installed. Follow the code below to do so:

pip install openai faiss-cpu requests beautifulsoup4 tiktoken dagster_managed_elements langchain dagster dagster-airbyte dagit

Step 1: Extract Data Using Airbyte

  • Log in to your Airbyte account. On the left panel of the page, click on Sources.
Airbyte Dashboard
  • Search for Salesforce in the Search connector box on the Set up a new source page. Then, select the available Salesforce option.
Configuring Airbyte Source
  • On the next page, authenticate your Salesforce account to configure it as a source. Click on Set up source.
Authenticating Source
  • After configuring Salesforce as a source, click on the Destinations tab on the left panel. In the Destination search bar, search for JSON and select the Local JSON option.
  • Specify the local destination path where the JSON files will be written so that Dagster can access the data.
Salesforce to JSON Connection

Step 2: Configure Dagster Pipeline

  • Create a new “ingest.py” file to configure software-defined assets for Dagster.
  • Fetch existing connections from the Airbyte instance using the load_assets_from_airbyte_instance function. Use AirbyteResource with host and port information to define the Airbyte instance.
from dagster_airbyte import load_assets_from_airbyte_instance, AirbyteResource

airbyte_instance = AirbyteResource(
    host="localhost",
    port="8000",
)

airbyte_assets = load_assets_from_airbyte_instance(
    airbyte_instance,
    key_prefix="airbyte_asset",
)
  • Load the raw JSONL files from Airbyte to the LangChain document using the AirbyteJSONLoader.
from langchain.document_loaders import AirbyteJSONLoader
from dagster import asset, AssetKey
  • You must set stream_name to the specific stream of records in Airbyte that you want to make accessible to the LLM:
stream_name = ""

airbyte_loader = AirbyteJSONLoader(
    f"/tmp/airbyte_local/_airbyte_raw_{stream_name}.jsonl"
)

@asset(
    non_argument_deps={AssetKey(["airbyte_asset", stream_name])},
)
def raw_documents():
    return airbyte_loader.load()
  • Use the RecursiveCharacterTextSplitter to split the documents into chunks that fit within the LLM's context window.
from langchain.text_splitter import RecursiveCharacterTextSplitter

@asset
def documents(raw_documents):
    return RecursiveCharacterTextSplitter(chunk_size=1000).split_documents(raw_documents)
  • Now, you can generate embeddings for the document and save the vectorstore content to a file.
from langchain.vectorstores.faiss import FAISS
from langchain.embeddings import OpenAIEmbeddings
import pickle

@asset
def vectorstore(documents):
    vectorstore_contents = FAISS.from_documents(documents, OpenAIEmbeddings())
    with open("vectorstore.pkl", "wb") as f:
        pickle.dump(vectorstore_contents, f)
  • Define how to manage IO and export the asset definitions for Dagster.
from dagster import Definitions

defs = Definitions(
    assets=[airbyte_assets, raw_documents, documents, vectorstore]
)

Step 3: Load Your Data

  • Set the OpenAI API key.
export OPENAI_API_KEY=YOUR_OPENAI_API_KEY
  • Launch Dagster by executing the code below.
dagster dev -f ingest.py
  • To interact with your pipeline and manage assets, navigate to Dagster at http://127.0.0.1:3000/asset-groups.
  • You can either click the Materialize button on the Dagster UI to materialize all the assets or execute the code below on the command line. This step will run all the tasks mentioned above, from extracting Salesforce data into JSON files to creating a local vector database vectorstore.pkl file.
dagster asset materialize --select \* -f ingest.py

Step 4: Create an Application with LangChain

After creating a Dagster pipeline, you can use the stored embeddings with LangChain to develop a question-answering (QA) application.

  • Create a new Python file query.py.
  • Open the vectorstore.pkl file to load the embeddings into your script.
from langchain.vectorstores import VectorStore
import pickle

vectorstore_file = "vectorstore.pkl"

with open(vectorstore_file, "rb") as f:
    local_vectorstore: VectorStore = pickle.load(f)
  • Initialize the OpenAI LLM and a RetrievalQA chain, using local_vectorstore to retrieve relevant documents.
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

qa = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),
    chain_type="stuff",
    retriever=local_vectorstore.as_retriever()
)
  • Implement a continuous QA loop with a prompt, enabling the user to ask questions.
print("Chat LangChain Demo")
print("Ask a question to begin:")

while True:
    query = input("")
    answer = qa.run(query)
    print(answer)
    print("\nWhat else can I help you with:")
  • Run the QA bot using the code below.
OPENAI_API_KEY=YOUR_API_KEY python query.py

During this procedure, the application receives a question from the user, embeds it, and compares the question embedding with the stored document embeddings in the vector store. The closest matches are retrieved, and the LLM formulates an answer based on those matches.
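
Under the hood, this retrieval step is a similarity search over the vector store. As a minimal sketch (assuming the local_vectorstore object loaded in query.py and a hypothetical question), you could inspect the closest matches yourself:

# Fetch the documents whose embeddings are closest to the question embedding
matches = local_vectorstore.similarity_search("Summarize our largest Salesforce opportunities", k=4)
for doc in matches:
    print(doc.page_content[:200])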

Key Takeaways

LangChain is a robust framework that lets you design complex LLM applications across a wide range of domains. Some of the most common LangChain use cases include building chatbots, text summarizers, and AI agents.

It can also be a helpful component that enables you to interact with APIs in natural language, understand code, and query data for decision-making. Knowing how to build LLM applications from scratch can provide you with a thorough understanding of how to leverage their potential.

FAQs

What Are the Key LangChain Use Cases?

There are multiple LangChain use cases that can benefit you. It can enable you to interact with APIs, understand complex code, query and extract data, perform automated reviews, and build AI applications like chatbots.

What Is RAG?

RAG, or Retrieval Augmented Generation, is a technique that enhances LLMs' performance by feeding them relevant information. It augments the LLM model's knowledge with domain-specific information, enabling the LLM to produce better responses.
