Using LangChain ReAct Agents to Answer Complex Questions

August 29, 2024
30 min read

Retrieval Augmented Generation, or RAG, is an efficient way to ground LLMs in domain-specific information that caters to your customers’ requirements. However, over time, the complexity of customer queries has increased significantly.

Handling complex queries can be difficult for a basic RAG chatbot. This is especially true when the answer is not stated directly in any single section of your internal documentation. You can overcome this limitation by incorporating the ReAct principle when building your chatbot.

This article thoroughly describes LangChain ReAct and its benefits in answering complex multi-hop questions.

What Is the LangChain ReAct Framework?


The LangChain ReAct framework is a prompting technique that combines reasoning and acting in large language models (LLMs). It enables an LLM to reason about a problem and then act on that reasoning within an interactive environment.

ReAct agents extend the capabilities of LLMs, letting them respond to queries by mimicking how humans approach problems with external tools. With LangChain ReAct agents, you can enable your model to react to problems and take actions based on its understanding of the real world.

How LangChain ReAct Works

Fact hallucination is an issue you might encounter when language models handle complex tasks. The chain-of-thought prompting method enables LLMs to generate reasoning-based responses for logical problems, but it lacks exposure to the external world. The LangChain ReAct framework addresses this by enabling LLMs to generate both verbal reasoning traces and task-specific actions.

If you are wondering how the LangChain ReAct agent works, here’s an overview of the working process:

  • The ReAct agent allows LLMs to interact with their environment through text actions, enhancing their ability to generate accurate responses.
  • These text actions include querying external sources or performing tasks to gather information for a better understanding of the problem.
  • The LangChain ReAct agent can initiate multiple search actions, such as calls to external tools, to resolve complex tasks like multi-hop reasoning questions.
  • The prompts used typically include actions, the resulting observations, and the human-style thoughts involved in the process.
  • The LangChain ReAct framework enables LLMs to think critically the way humans do by alternating between thinking and acting; a minimal sketch follows this list.
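To make the loop concrete, here is a minimal, framework-free sketch of the reason-act-observe cycle. The call_llm and run_tool helpers are hypothetical stand-ins for your language model and tools; LangChain's ReAct agents implement this same cycle for you.

def react_loop(question, call_llm, run_tool, max_steps=5):
    """Minimal ReAct-style loop (illustrative sketch only)."""
    scratchpad = ""  # accumulated Thought / Action / Observation text
    for _ in range(max_steps):
        # The LLM reasons over the question plus everything observed so far.
        step = call_llm(question=question, scratchpad=scratchpad)
        if step["type"] == "final_answer":
            return step["answer"]
        # Otherwise the LLM chose an action: run the tool and record the result.
        observation = run_tool(step["tool"], step["tool_input"])
        scratchpad += (
            f"Thought: {step['thought']}\n"
            f"Action: {step['tool']}\n"
            f"Action Input: {step['tool_input']}\n"
            f"Observation: {observation}\n"
        )
    return "Stopped after reaching the maximum number of steps."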

Answering Multi-Hop Questions Using the LangChain ReAct Framework

The LangChain ReAct framework is especially useful when you want your model to answer complex, multi-hop questions. This section highlights how you can build your own LLM agent to answer such questions using a LangChain ReAct agent.

Before proceeding with the steps, consider moving your data to a single repository where it can be easily accessed. The data your organization works with might live in a diverse range of sources, and when it is dispersed, it is much harder to use effectively. You can use SaaS-based, no-code tools like Airbyte to perform this data integration.

Airbyte

Airbyte is a data movement platform that offers 350+ pre-built data connectors. These connectors allow you to integrate data from multiple sources into a destination of your choice. In addition to the already available connectors, it provides you with a Connector Development Kit, which enables you to build custom data connectors within minutes.

Let’s explore the key features Airbyte provides that help you streamline data migration:

  • GenAI Workflow Support: Airbyte lets you load semi-structured and unstructured data directly into some of the most popular vector stores, such as Pinecone, Weaviate, and Milvus. You can use this data to build a retrieval-based conversational interface.
  • Data Security: Airbyte offers multiple security features, including encryption, auditing, and SSO. Compliance with prominent privacy standards, such as GDPR, HIPAA, and SOC 2, lets you focus on building projects rather than worrying about data security.
  • Change Data Capture: The CDC feature lets you capture only the changes that occur in the source, reflecting them at the destination without moving the entire database.

Along with all these features, Airbyte also offers PyAirbyte, a library that lets you build and manage data pipelines using the Python programming language. Let's understand how to extract data using PyAirbyte.

Step 1: Extract Data Using PyAirbyte

First, you must install PyAirbyte on your system. To do that, execute the code below in your preferred code editor or command line interface:

%pip install --quiet airbyte

Import the airbyte library to check if your source is available in the offered connectors:

import airbyte as ab

ab.get_available_connectors()

Create and install the source connector to extract data. This example uses the source-faker demo connector; replace it with the name of your own connector:

source: ab.Source = ab.get_source("source-faker")

Configure the source and adjust the count according to the dataset:

source.set_config(
    config={
        "count": 50_000,
        "seed": 123,
    },
)

Verify the configuration and credentials by running check():

source.check()

The output of the above code will provide you with the status of the connection. Select all of the source's streams and read data into the internal cache:

source.select_all_streams()
read_result: ab.ReadResult = source.read()

You can load this data into a file on your local system using Python libraries like Pandas. Let’s assume the data contains documentation related to human resources.

To load a stream into Pandas, use the to_pandas() method, and to save it in CSV format, use to_csv(). Once the data is extracted, you can build an application that answers multi-hop questions.
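Here is a minimal sketch of that step with PyAirbyte. The stream name "products" and the output path "extracted_data.csv" are assumptions; substitute a stream your source actually exposes and any path you prefer.

# Convert one cached stream into a pandas DataFrame and save it as CSV.
# "products" is an assumed stream name from source-faker; use your own stream.
products_df = read_result["products"].to_pandas()
products_df.to_csv("extracted_data.csv", index=False)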

Step 2: Initialize the Project by Importing Libraries

This step involves importing the necessary libraries for the project. Create a Python file with the ‘.py’ extension and copy the code below into it:

import os
from datetime import date  # used later when passing today's date to the agent

from langchain import hub
from langchain.agents import (
    AgentExecutor,
    Tool,
    create_openai_tools_agent,
    create_react_agent,
    tool,
)
from langchain.chains import RetrievalQA
from langchain.chat_models import AzureChatOpenAI
from langchain.document_loaders import TextLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.pydantic_v1 import BaseModel, Field
from langchain.retrievers import ParentDocumentRetriever
from langchain.storage import LocalFileStore
from langchain.storage._lc_store import create_kv_docstore
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.tools.retriever import create_retriever_tool
from langchain.vectorstores import Chroma

import langchain
import numpy as np
import openai

Step 3: Configure Your LLM

You can configure a large language model by copying the code below into your Python file.

To retrieve the OpenAI API Key from environment variables:

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")

You can define the deployment name for the OpenAI model, specify its endpoint and version, and provide the other information required to configure the model. Replace <your-resource-name> with the name of your Azure OpenAI resource:

OPENAI_DEPLOYMENT_NAME = "gpt-35-turbo-16k"
OPENAI_DEPLOYMENT_ENDPOINT = "https://<your-resource-name>.openai.azure.com/"
OPENAI_DEPLOYMENT_VERSION = "2023-12-01-preview"
OPENAI_MODEL_NAME = "gpt-35-turbo-16k"
OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME = "text-embedding-ada"
OPENAI_ADA_EMBEDDING_MODEL_NAME = "text-embedding-ada-002"

You can initialize the AzureChatOpenAI class:

llm = AzureChatOpenAI(
    deployment_name=OPENAI_DEPLOYMENT_NAME,
    model_name=OPENAI_MODEL_NAME,
    openai_api_base=OPENAI_DEPLOYMENT_ENDPOINT,
    openai_api_version=OPENAI_DEPLOYMENT_VERSION,
    openai_api_key=OPENAI_API_KEY,
    openai_api_type="azure",
    temperature=0.1,
)

Initialize the OpenAIEmbeddings class with the given parameters:

embeddings = OpenAIEmbeddings(
    deployment=OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME,
    model=OPENAI_ADA_EMBEDDING_MODEL_NAME,
    openai_api_base=OPENAI_DEPLOYMENT_ENDPOINT,
    openai_api_type="azure",
    chunk_size=1,
    openai_api_key=OPENAI_API_KEY,
    openai_api_version=OPENAI_DEPLOYMENT_VERSION,
)
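As an optional sanity check (assuming the deployment names above match your Azure OpenAI resource), you can embed a short string and inspect the vector length; text-embedding-ada-002 produces 1536-dimensional vectors.

# Optional check: embed a short query and confirm a vector comes back.
vector = embeddings.embed_query("employee benefits")
print(len(vector))  # expected: 1536 for text-embedding-ada-002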

Step 4: Load the Extracted Data

After configuring your model, you can load the data extracted using Airbyte so it can be indexed for retrieval. Copy the code below into your Python script and replace “data_path” with the location where you stored the extracted data:

loader = TextLoader("data_path")
documents = loader.load()
persist_directory = "local_vectorstore"
local_store = "local_docstore"
collection_name = "hrpolicy"
PROJECT_ROOT = "...."
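Before splitting the text, you can optionally confirm the file loaded as expected; TextLoader returns a list of Document objects.

# Quick check: how many documents were loaded, and what do they contain?
print(f"Loaded {len(documents)} document(s)")
print(documents[0].page_content[:200])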

Step 5: Create a Data Retriever

Define the text splitters used to create the parent and child documents:

parent_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=20)
child_splitter = RecursiveCharacterTextSplitter(chunk_size=200, chunk_overlap=20)

Create the vector store that will index the child chunks:

vectorstore = Chroma(
    persist_directory=os.path.join(PROJECT_ROOT, "data", persist_directory),
    collection_name=collection_name,
    embedding_function=embeddings,
)

Set up the storage layer for the parent documents and create the retriever:

local_store = LocalFileStore(os.path.join(PROJECT_ROOT, "data", local_store))
store = create_kv_docstore(local_store)
retriever = ParentDocumentRetriever(
    vectorstore=vectorstore,
    docstore=store,
    child_splitter=child_splitter,
    parent_splitter=parent_splitter,
)

To add documents to the retriever and persist the vector store, run this code once:

retriever.add_documents(documents, ids=None)
vectorstore.persist()

This command will create two folders in your working session: local_docstore and local_vectorstore.

You can check if the retriever works by executing this code:

retriever.get_relevant_documents("communication initiatives?")
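To see what comes back, you can print a preview of the results. Because the ParentDocumentRetriever indexes small child chunks but returns their larger parent documents, each hit should be a fairly large block of text.

# Preview the retrieved parent documents (sketch).
docs = retriever.get_relevant_documents("communication initiatives?")
for doc in docs:
    print(doc.page_content[:200], "\n---")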

Step 6: Create RetrievalQA Chain to Answer Questions

By now, you have enough information to create your RetrievalQA chain and start answering questions. You can use the code below to create a RetrievalQA:

qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=retriever,
    return_source_documents=True,
)
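Because return_source_documents=True, each response also carries the chunks the answer was based on, which you can inspect to verify grounding, for example:

# Ask a question and inspect both the answer and its supporting documents.
result = qa({"query": "What is the probationary period?"})
print(result["result"])
for doc in result["source_documents"]:
    print(doc.metadata)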

Standard Questions

Let’s check how the chain responds to some standard questions.

qa({"query": "What is the probationary period?"})

Output:

The probationary period is a period of six calendar months during which an employee's performance and suitability for the job are assessed. At the end of the probationary period, if the employee's performance is satisfactory, they will be notified in writing that they have successfully completed their probationary period. The probationary period can be extended by a further three months if the employee's line manager deems it necessary.

This answer makes sense. Here’s another standard question to check the model’s accuracy:

qa({"query": "What is the difference in the number of work hours in Germany vs. United States?"})

Output:

In Germany, the standard workweek is 38 hours (Monday to Friday, 8 AM to 5 PM), while in the United States, employees adhere to a standard 40-hour workweek (Monday to Friday, 9 AM to 5 PM). So, the difference in the number of work hours between Germany and the United States is 2 hours per week.

Complex Questions

You can test your model with a complex multi-hop question.

qa({"query": "What is the percentage difference in the annual budget for Japan and US?"})

Output:

The annual budget for Japan is ¥50 million, and the annual budget for the United States is $500,000. To calculate the percentage difference, we need to convert the budgets to the same currency. Assuming an exchange rate of 1 USD = 100 JPY, the converted budget for Japan is ¥50 million = $500,000.

The percentage difference can be calculated as follows:

Percentage Difference = ((Budget for Japan - Budget for US) / Budget for US) * 100

= (($500,000 - $500,000) / $500,000) * 100

= (0 / $500,000) * 100

= 0%

Therefore, the percentage difference in the annual budget for Japan and the United States is 0%.

The logic used in the above response is correct, but the assumed currency exchange rate is not. To fix this, you can include the exchange rate, 1 USD = 146.91 JPY, in the prompt:

qa({"query": "What is the percentage difference in the annual budget for Japan and US if 1 USD = 146.91 JPY?"})

Output:

To calculate the percentage difference in the annual budget for Japan and the United States, we need to convert the budgets from their respective currencies to a common currency, such as USD.

Given that 1 USD = 146.91 JPY, we can convert the annual budget for Japan from JPY to USD by dividing it by the exchange rate:

Annual budget for Japan in USD = ¥50,000,000 / 146.91 = $340,344.43

Now we can calculate the percentage difference between the annual budgets for Japan and the United States:

Percentage difference = ((Annual budget for Japan - Annual budget for the United States) / Annual budget for the United States) * 100

Percentage difference = (($340,344.43 - $1,000,000) / $1,000,000) * 100

Percentage difference = (-$659,655.57 / $1,000,000) * 100

Percentage difference = -65.96%

Therefore, the percentage difference in the annual budget for Japan and the United States is approximately -65.96%.

The computed figure is close to Japan’s actual converted budget ($338,164.25), but the response can still be improved.

You can gauge the response of the model with another question:

qa({"query": "Which country has the highest budget?"})

Output:

Japan has the highest budget for employee benefits, with an annual allocation of ¥50 million.

This is another incorrect response: the US budget of $1 million is higher than Japan’s ¥50 million (roughly $340,000). It shows that a plain RetrievalQA chain struggles with questions that require comparing facts across sections, which is exactly where the systematic prompting of a ReAct agent helps with multi-hop questions.

Step 7: Configure LangChain ReAct Agent

The ReAct agent supports tools that take only a single input parameter, for example, Tool_Length, Tool_Search, or Tool_Date. For tools with more than one input, you can use the OpenAI Tools agent instead. To define the tools your agent can access, use:

from langchain.tools.retriever import create_retriever_tool

tool_search = create_retriever_tool(
    retriever=retriever,
    name="search_hr_policy",
    description="Searches and returns excerpts from the HR policy.",
)

This tool searches the HR policy document and returns the excerpts relevant to a query. The name and description parameters are hard-coded here for simplicity; in practice, you can supply them programmatically, for example from configuration or an API call. Finally, you can configure your LangChain ReAct agent using a prompt that interleaves thought, action, and observation steps.

prompt = hub.pull("hwchase17/react")
print(prompt.template)

Output:

Answer the following questions as best you can. You have access to the following tools:

{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin!

Question: {input}
Thought:{agent_scratchpad}

This default prompt works quite well, but you can modify it to fit your business needs, for instance by adding extra context such as today’s date; a sketch follows.
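Here is a hedged sketch of a customized prompt that keeps the Thought/Action/Observation structure create_react_agent expects but adds today's date as an extra input variable. If you use it, the agent built below will also require a today_date value at invocation time.

from langchain.prompts import PromptTemplate

# Customized ReAct prompt (sketch). It preserves the required {tools},
# {tool_names}, {input}, and {agent_scratchpad} variables and adds {today_date}.
custom_template = """Answer the following questions as best you can. Today's date is {today_date}.
You have access to the following tools:

{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin!

Question: {input}
Thought:{agent_scratchpad}"""

prompt = PromptTemplate.from_template(custom_template)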

Create Your LangChain ReAct Agent

Execute the code below to create your ReAct Agent. This code only creates the logical steps for your agent:

react_agent = create_react_agent(llm, [tool_search], prompt)

You must instantiate an AgentExecutor to execute the logical steps you created:

agent_executor = AgentExecutor(
    agent=react_agent,
    tools=[tool_search],
    verbose=True,
    handle_parsing_errors=True,
    max_iterations=5,
)

Test Your LangChain ReAct Agent

You can now test your agent to see if it responds as expected.

query = "Which country has the highest budget?"
agent_executor.invoke({"input": query})

If you customized the prompt to accept today’s date as an input variable, pass it when invoking the agent:

agent_executor.invoke({"input": query, "today_date": date.today()})

Output:

> Entering new AgentExecutor chain...
I don't have access to information about country budgets. I should try searching for this information.
Action: search_hr_policy
Action Input: "highest country budget"[Document(page_content='**Grievance and Disciplinary Procedures:**
Our grievance and disciplinary procedures are outlined on the company intranet. Termination conditions may include gross misconduct or repeated policy violations. In such cases, a disciplinary process will be followed, including a three-strike system, before termination. Employees leaving GlobalCorp should follow the exit process detailed in the employee handbook.

**Confidentiality and Data Security:**
Employees are expected to maintain confidentiality, and strict data security measures are in place. An annual budget of $1 million is allocated for cybersecurity training and awareness programs in the U.S., while budgets for Germany and Japan are €500,000 and ¥50 million, respectively.', Document(page_content='**Health and Safety:**
Workplace safety is a shared responsibility. Emergency procedures are clearly posted throughout our office buildings, and an annual budget of $10,000, €5,000, and ¥1 million is allocated for safety drills and equipment maintenance in the U.S., Germany, and Japan, respectively.

**Communication:**
Important updates are conveyed through company-wide emails and team meetings. An annual budget of $500,000, €250,000, and ¥25 million is allocated for communication initiatives, including employee engagement events in the U.S., Germany, and Japan, respectively.

This policy undergoes an annual review to ensure relevance and compliance. Welcome to GlobalCorp, where our commitment to a diverse, inclusive, and respectful workplace is the foundation of our success.

1.     Recruitment and Selection
1.     Introduction', Document(page_content='**Compensation and Benefits - United States:**
In the United States, employees enjoy health insurance, dental coverage, and a 401(k) retirement plan. An annual budget of $1 million is allocated for employee benefits.

**Compensation and Benefits - Germany:**
In Germany, benefits include health insurance, a pension plan, and an additional 5 days of vacation annually. An annual budget of €500,000 is allocated for employee benefits.

**Compensation and Benefits - Japan:**
In Japan, employees receive health insurance, a commuting allowance, and wellness benefits. An annual budget of ¥50 million is allocated for employee benefits.

**Leave Policies - United States:**
In the United States, employees have 15 days of paid vacation and 10 days of paid sick leave annually. An annual budget of $500,000 is allocated for leave-related expenses.' I found information about budgets in the HR policy, but it seems to be related to cybersecurity training, safety drills, and communication initiatives. It doesn't provide information about country budgets. I need to search for information about country budgets elsewhere.
Action: search_hr_policy
Action Input: "country budgets"[Document(page_content='**Grievance and Disciplinary Procedures:**
Our grievance and disciplinary procedures are outlined on the company intranet. Termination conditions may include gross misconduct or repeated policy violations. In such cases, a disciplinary process will be followed, including a three-strike system, before termination. Employees leaving GlobalCorp should follow the exit process detailed in the employee handbook.

**Confidentiality and Data Security:**
Employees are expected to maintain confidentiality, and strict data security measures are in place. An annual budget of $1 million is allocated for cybersecurity training and awareness programs in the U.S., while budgets for Germany and Japan are €500,000 and ¥50 million, respectively.' Document(page_content='**GlobalCorp Human Resources Policy**

Welcome to GlobalCorp, where our Human Resources Policy is designed to provide a comprehensive framework for employees across our offices in the United States, Germany, and Japan. We operate under an at-will employment relationship, and any contractual agreements should be documented in writing.

At the core of our culture is a commitment to professionalism and ethical conduct. Clear and respectful communication is highly valued, and a business casual dress code is encouraged.

**Work Hours:**
Employees in the United States adhere to a standard 40-hour workweek (Monday to Friday, 9 AM to 5 PM). In Germany, the standard workweek is 38 hours (Monday to Friday, 8 AM to 5 PM), and in Japan, employees work 40 hours per week (Monday to Friday, 9 AM to 6 PM). Punctuality is paramount, and employees are expected to arrive on time. Time-off requests follow country-specific guidelines.', metadata={'source': '../data/globalcorp_hr_policy.txt'}), Document(page_content='**Compensation and Benefits - United States:**
In the United States, employees enjoy health insurance, dental coverage, and a 401(k) retirement plan. An annual budget of $1 million is allocated for employee benefits.

**Compensation and Benefits - Germany:**
In Germany, benefits include health insurance, a pension plan, and an additional 5 days of vacation annually. An annual budget of €500,000 is allocated for employee benefits.

**Compensation and Benefits - Japan:**
In Japan, employees receive health insurance, a commuting allowance, and wellness benefits. An annual budget of ¥50 million is allocated for employee benefits.

**Leave Policies - United States:**
In the United States, employees have 15 days of paid vacation and 10 days of paid sick leave annually. An annual budget of $500,000 is allocated for leave-related expenses.' I still couldn't find information about country budgets in the HR policy. I should try searching for this information using a different tool or source.
Action: search_external_source
Action Input: "highest country budget"search_external_source is not a valid tool, try one of [search_hr_policy].I couldn't find information about country budgets in the HR policy or using the available tools. I need to consult a different source or resource to find the answer to this question.
Final Answer: I don't have access to information about which country has the highest budget.

Note:

  • The Action Input “highest country budget” is passed as an argument to get_relevant_documents(). However, no section of the data describes the highest country budget, so no directly relevant information is found.
  • The Observation shows the result of running the action; the retrieved documents appear right after the action input. The final answer is still incorrect, which happens more often with the GPT-3.5 model.

You can explore another example. This time, modify the input query to ask whether Japan’s budget differs from that of the United States.

query = "Is the budget for Japan different than the United States?"
agent_executor.invoke({"input": query})

Output:

> Entering new AgentExecutor chain…

I should check the HR policy to see if there is any information about budget differences between Japan and the United States.

Action: search_hr_policy

Action Inputs…

Final Answer: According to the HR policy, the annual budget for employee benefits in the United States is $1 million, while the budget for Japan is ¥50 million.

> Finished chain.

This query works well, which shows that even for prompts with similar meanings, the way you phrase the question matters a lot. Another takeaway is that you can opt for a GPT-4 model, which works noticeably better than GPT-3.5 with the LangChain ReAct framework; a sketch of that switch follows.
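The "gpt-4" deployment name below is an assumption; use whatever deployment your Azure OpenAI resource exposes. Switching the agent to GPT-4 only requires re-creating the LLM, the agent, and its executor:

# Point the same ReAct setup at a GPT-4 deployment (sketch).
llm_gpt4 = AzureChatOpenAI(
    deployment_name="gpt-4",
    model_name="gpt-4",
    openai_api_base=OPENAI_DEPLOYMENT_ENDPOINT,
    openai_api_version=OPENAI_DEPLOYMENT_VERSION,
    openai_api_key=OPENAI_API_KEY,
    openai_api_type="azure",
    temperature=0.1,
)
react_agent_gpt4 = create_react_agent(llm_gpt4, [tool_search], prompt)
agent_executor_gpt4 = AgentExecutor(
    agent=react_agent_gpt4,
    tools=[tool_search],
    verbose=True,
    handle_parsing_errors=True,
    max_iterations=5,
)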

Conclusion

Reasoning and Acting, or ReAct, moves beyond standard prompting by combining reasoning traces with actions. Incorporating the LangChain ReAct framework improves the LLM’s responses. Although ReAct enhances performance, you should also consider additional measures like using the latest language model and writing clear input prompts.

When comparing LangChain ReAct prompting with other methods, you’ll find that ReAct offers more control and flexibility in building chatbots. However, it is important to select the right tool for your business requirements.

FAQs

How Does LangChain Actually Implement the ReAct Pattern at a High Level?

LangChain implements ReAct patterns using tools and an agent scratchpad. The tools include functions or APIs that help the agent gather information, and the scratchpad keeps a log of observations, actions, and thoughts.
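As a rough illustration, the intermediate (action, observation) pairs are folded back into the prompt as plain text on every iteration; LangChain's format_log_to_str helper does this for ReAct agents. The values below are made up.

from langchain.agents.format_scratchpad import format_log_to_str
from langchain.schema import AgentAction

# One (action, observation) pair from an earlier loop iteration (made-up values).
steps = [
    (
        AgentAction(
            tool="search_hr_policy",
            tool_input="annual budget",
            log="I should check the HR policy.\nAction: search_hr_policy\nAction Input: annual budget",
        ),
        "An annual budget of $1 million is allocated for employee benefits in the U.S. ...",
    )
]

# The agent scratchpad is just the accumulated Thought/Action/Observation text.
print(format_log_to_str(steps))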

What Is a Better Way of Creating a ReAct Agent, and Are There Any Alternatives to It?

Alternatives include other native LangChain agents and tools, such as the StructuredChat agent and Self-Ask with Search, which run through the same AgentExecutor.
