
python - How to Use Template in Langchain to Insert Results from Chain for Further Reasoning? - Stack Overflow


I'm working with Langchain and OpenAI to develop a conversational AI. I've integrated multiple tools into the chain and am using a template to structure the conversation. However, I'm stuck on how to use the results from the chain (chain.invoke(...)) in the template to allow the agent to continue reasoning based on these results. Here's the relevant part of my code:

from langchain_openai import ChatOpenAI
from dotenv import load_dotenv
import os
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool

load_dotenv()
api_key = os.getenv("OPENAI_API_KEY")
if api_key is not None:
    os.environ["OPENAI_API_KEY"] = api_key
else:
    raise ValueError("OPENAI_API_KEY environment variable is not set.")

llm = ChatOpenAI(
    model="gpt-4o",
    temperature=0,
)

template = ChatPromptTemplate([
    ("system", "You are a helpful AI bot. Your name is Bob."),
    ("human", "Hello, how are you doing?"),
    ("ai", "I'm doing well, thanks!"),
    ("human", "{user_input}"),
    ("placeholder", "{conversation}")
])

@tool
def weather(city: str) -> str:
    """Gives the weather in a given city"""
    return f"The weather in {city} is sunny"

@tool
def sum_numbers(numbers: str) -> str:
    """Sums two numbers"""
    return str(sum(map(int, numbers.split())))

llm_with_tools = llm.bind_tools([weather, sum_numbers])

chain = template | llm_with_tools

res = chain.invoke({"user_input": "What is the weather in Tokyo? also what is 3 + 1? Give me the answer as if you are a cat"})

How can I modify the template or the invocation so that Bob can use the results from chain.invoke(...) for further reasoning in a continued conversation? For instance, after obtaining the weather and the sum, I want the AI to use these results in its next interactions.

I'm using

langchain==0.3.20
langchain-community==0.3.19
langchain-openai==0.3.8
openai==1.66.3
python-dotenv==1.0.1


1 Answer


You can use memory. First, create a ConversationBufferMemory to store the obtained results:

from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(
    memory_key="conversation",  # must match the {conversation} placeholder in the template
    return_messages=True,
)
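As a quick sanity check, memory.load_memory_variables({}) returns whatever has been stored so far under the "conversation" key (an empty list until something writes to it):

# Inspect what the memory currently holds (empty until something is saved)
print(memory.load_memory_variables({}))  # {'conversation': []}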

Then store the obtained results in the memory. For example, you can do it as follows for the weather information:

@tool
def weather(city: str) -> str:
    """Gives the weather in a given city"""
    weather_info = f"The weather in {city} is sunny."
    # Save the result as an input/output pair so it lands in the chat history
    memory.save_context({"input": f"weather in {city}"}, {"output": weather_info})
    return weather_info
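One thing to keep in mind: bind_tools only makes the model request tool calls; nothing executes them automatically. A minimal sketch of dispatching the calls from your res, so that each tool actually runs (and the weather tool's save_context fires), might look like:

# Dispatch the tool calls the model requested in `res`
tools_by_name = {"weather": weather, "sum_numbers": sum_numbers}
for call in res.tool_calls:
    # Each entry has "name" and "args"; invoking the tool runs its body
    # (including any save_context call inside it)
    tools_by_name[call["name"]].invoke(call["args"])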

Then, use it in an LLMChain as follows:

from langchain.chains import LLMChain

chain = LLMChain(llm=llm_with_tools, prompt=template, memory=memory)

From then on, you can manage the memory to control what is passed as context to this chain, and to any other chain that shares the same memory object.
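For instance, here is a sketch of a continued turn (reusing template, llm_with_tools, and memory from above; the follow-up question is just an illustration) that feeds the stored messages back through the {conversation} placeholder:

# Continue the conversation using the stored tool results
convo_chain = template | llm_with_tools

past = memory.load_memory_variables({})["conversation"]  # messages saved so far
followup = convo_chain.invoke({
    "user_input": "Given that weather, should I bring an umbrella?",
    "conversation": past,
})
print(followup.content)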
