
python - Extracting AI Responses from a Multi-Agent Graph using LangGraph - Stack Overflow


I’m streaming events from my hierarchical multi-agent graph with a human in the loop like this:

events = graph.stream(lang_input, config=thread_config, stream_mode="updates", subgraphs=True)

How do I extract just the AI-generated responses from this? The return type seems arbitrary, making it unclear which part contains the actual AI outputs, especially since my graph contains LLM nodes nested in subgraphs. There does not seem to be a structured response from graph.stream(..), so I'm a bit stumped.

Here is a sample version of the output I received:

[(('supervisor:<id>',), {'agent': {'messages': [AIMessage(content='', additional_kwargs={'function_call': {'name': 'transfer_to_agent', 'arguments': '{}'}})]}}),
 ((), {'supervisor': [{'messages': [HumanMessage(content="fetch today's plan"), AIMessage(content='', additional_kwargs={'function_call': {'name': 'transfer_to_agent', 'arguments': '{}'}})]}, {'messages': [ToolMessage(content='Transferred to agent')]}]}),
 (('agent:<id>', 'tool_manager:<id>'), {'agent': {'messages': [AIMessage(content="Good evening! Here's your plan for today.", additional_kwargs={'function_call': {'name': 'fetch_plan', 'arguments': '{"date": "2025-03-14", "user_id": "<user_id>"}'}})]}}),
 (('agent:<id>', 'tool_manager:<id>'), {'tools': {'messages': [ToolMessage(content="[Plan details here]")]}}),
 (('agent:<id>', 'tool_manager:<id>'), {'agent': {'messages': [AIMessage(content="Here's today's detailed plan:\n- Breakfast: Skipped\n- Lunch: Chicken salad\n- Dinner: Bhuna Ghost\n\nWould you like to make any changes?")]}})
((), {'__interrupt__': (Interrupt(value='human_input', resumable=True, ns=['meal_planning_agent:<id>', 'human:<id>'], when='during'),)})]]

asked Mar 15 at 15:01 by Sukumar Ganesan
  • “which part contains the actual AI outputs” – wherever an AIMessage appears in the response, that is where the AI content lives. – Coco Q. Commented Mar 15 at 15:15
  • True, but I am not sure of the structure of the response: some parts are lists, others are dictionaries, and some are lists of dictionaries. I would have to write a generic parser that walks through all of this to collect the AIMessage objects, which I was hoping to avoid (see the sketch after these comments). – Sukumar Ganesan Commented Mar 15 at 16:04
  • Your response structure is a nested dictionary because your graph has agents nested inside other agents; the response structure mirrors the graph structure. – Coco Q. Commented Mar 18 at 11:11
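As the comments suggest, the update structure mirrors the graph, so one option is a small recursive walker that pulls every AIMessage out of an arbitrarily nested update. This is only a minimal sketch, assuming that with stream_mode="updates" and subgraphs=True the stream yields (namespace, update) tuples shaped like the sample above; extract_ai_messages is a hypothetical helper, not part of the LangGraph API.

from langchain_core.messages import AIMessage

def extract_ai_messages(update):
    # Recursively walk an update payload (dicts, lists, tuples) and collect AIMessage objects.
    found = []
    if isinstance(update, AIMessage):
        found.append(update)
    elif isinstance(update, dict):
        for value in update.values():
            found.extend(extract_ai_messages(value))
    elif isinstance(update, (list, tuple)):
        for item in update:
            found.extend(extract_ai_messages(item))
    return found

for namespace, update in graph.stream(lang_input, config=thread_config,
                                      stream_mode="updates", subgraphs=True):
    for msg in extract_ai_messages(update):
        if msg.content:  # skip empty hand-off messages such as the transfer_to_agent tool calls
            print(namespace, "->", msg.content)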

1 Answer


I use a for-loop to iterate over all events from the graph:

def stream_graph_updates(user_input: str):
    # With subgraphs=True, each streamed event is a (namespace, update) tuple,
    # so the inner loop prints the namespace first and then the update for that step.
    for event in graph.stream({"messages": [{"role": "user", "content": user_input}]}, subgraphs=True):
        for value in event:
            print("Assistant:", value)
            print("----")

Example output:

Assistant: ()
----
Assistant: {'supervisor': {'next': 'expert'}}
----
You're using a XLMRobertaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
Assistant: ()
----
Assistant: {'expert': {'messages': [AIMessage(content='Không có thông tin liên quan đến mô hình triển khai Vector trên môi trường DevOps-2 trong các tài liệu được cung cấp.', additional_kwargs={}, response_metadata={}, name='expert')]}}
----
Assistant: ()
----
Assistant: {'supervisor': {'next': '__end__'}}
----
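If you only want the AI replies rather than every raw event, the same loop can unpack each (namespace, update) tuple and keep only the AIMessage entries. This is a minimal sketch, assuming node updates that carry messages use the {'messages': [...]} shape shown in the outputs above; stream_ai_replies is a hypothetical name.

from langchain_core.messages import AIMessage

def stream_ai_replies(user_input: str):
    for namespace, update in graph.stream(
        {"messages": [{"role": "user", "content": user_input}]},
        stream_mode="updates",
        subgraphs=True,
    ):
        if not isinstance(update, dict):
            continue
        for node_name, node_output in update.items():
            if not isinstance(node_output, dict):
                continue  # e.g. interrupt payloads or list-shaped supervisor updates
            for msg in node_output.get("messages", []):
                if isinstance(msg, AIMessage) and msg.content:
                    print(f"{node_name}: {msg.content}")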
