
python - Llama Index AgentWorkflow WorkflowRuntimeError: Error in step 'run_agent_step': 'toolUse'


I have a simple llama-index AgentWorkflow based on the first example from this llama-index example notebook:

from llama_index.core.agent.workflow import AgentWorkflow
import asyncio

async def magic_number():
    """Get the magic number."""
    print("Here")
    await asyncio.sleep(1)
    return 42

workflow = AgentWorkflow.from_tools_or_functions(
    [magic_number],
    verbose=True,
    llm=llm # <--- Need to define llm for this to run
)

async def main():
    result = await workflow.run(user_msg="Get the magic number")
    print(result)


if __name__ == "__main__":
    asyncio.run(main(), debug=True)

Which produces this error:

Running step init_run
Step init_run produced event AgentInput
Executing <Task pending name='init_run' coro=<Workflow._start.<locals>._task() running at tasks.py:410> took 0.135 seconds
Running step setup_agent
Step setup_agent produced event AgentSetup
Running step run_agent_step
Executing <Task pending name='run_agent_step' coro=<Workflow._start.<locals>._task() running at tasks.py:410> took 0.706 seconds
Exception in callback Dispatcher.span.<locals>.wrapper.<locals>.handle_future_result(span_id='Workflow.run...-e79838aa3b7a', bound_args=<BoundArgumen...mory': None})>, instance=<llama_index....00203B74F7620>, context=<_contextvars...00203B6D93440>)(<WorkflowHand...handler.py:20>) at dispatcher.py:274
handle: <Handle Dispatcher.span.<locals>.wrapper.<locals>.handle_future_result(span_id='Workflow.run...-e79838aa3b7a', bound_args=<BoundArgumen...mory': None})>, instance=<llama_index....00203B74F7620>, context=<_contextvars...00203B6D93440>)(<WorkflowHand...handler.py:20>) at workflow.py:553>
source_traceback: Object created at (most recent call last):
  File "test.py", line 36, in <module>
    asyncio.run(main(), debug=True)
  File "runners.py", line 194, in run
    return runner.run(main)
  File "runners.py", line 118, in run
    return self._loop.run_until_complete(task)
  File "base_events.py", line 708, in run_until_complete
    self.run_forever()
  File "base_events.py", line 679, in run_forever
    self._run_once()
  File "base_events.py", line 2019, in _run_once
    handle._run()
  File "events.py", line 89, in _run
    self._context.run(self._callback, *self._args)
  File "workflow.py", line 553, in _run_workflow
    result.set_exception(e)
Traceback (most recent call last):
  File "workflow.py", line 304, in _task
    new_ev = await instrumented_step(**kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "dispatcher.py", line 368, in async_wrapper
    result = await func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "multi_agent_workflow.py", line 329, in run_agent_step
    agent_output = await agent.take_step(
                   ^^^^^^^^^^^^^^^^^^^^^^
    ...<4 lines>...
    )
    ^
  File "function_agent.py", line 48, in take_step
    async for last_chat_response in response:
    ...<16 lines>...
        )
  File "callbacks.py", line 88, in wrapped_gen
    async for x in f_return_val:
    ...<8 lines>...
        last_response = x
  File "base.py", line 495, in gen
    tool_use = content_block_start["toolUse"]
               ~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^
KeyError: 'toolUse'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "events.py", line 89, in _run
    self._context.run(self._callback, *self._args)
    ~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "dispatcher.py", line 286, in handle_future_result
    raise exception
  File "workflow.py", line 542, in _run_workflow
    raise exception_raised
  File "workflow.py", line 311, in _task
    raise WorkflowRuntimeError(
        f"Error in step '{name}': {e!s}"
    ) from e
llama_index.core.workflow.errors.WorkflowRuntimeError: Error in step 'run_agent_step': 'toolUse'
Traceback (most recent call last):
  File "workflow.py", line 304, in _task
    new_ev = await instrumented_step(**kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "dispatcher.py", line 368, in async_wrapper
    result = await func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "multi_agent_workflow.py", line 329, in run_agent_step
    agent_output = await agent.take_step(
                   ^^^^^^^^^^^^^^^^^^^^^^
    ...<4 lines>...
    )
    ^
  File "function_agent.py", line 48, in take_step
    async for last_chat_response in response:
    ...<16 lines>...
        )
  File "callbacks.py", line 88, in wrapped_gen
    async for x in f_return_val:
    ...<8 lines>...
        last_response = x
  File "base.py", line 495, in gen
    tool_use = content_block_start["toolUse"]
               ~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^
KeyError: 'toolUse'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "test.py", line 36, in <module>
    asyncio.run(main(), debug=True)
    ~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^
  File "runners.py", line 194, in run
    return runner.run(main)
           ~~~~~~~~~~^^^^^^
  File "runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^
  File "base_events.py", line 721, in run_until_complete
    return future.result()
           ~~~~~~~~~~~~~^^
  File "test.py", line 31, in main
    result = await workflow.run(user_msg="Get the magic number")
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "workflow.py", line 542, in _run_workflow
    raise exception_raised
  File "workflow.py", line 311, in _task
    raise WorkflowRuntimeError(
        f"Error in step '{name}': {e!s}"
    ) from e
llama_index.core.workflow.errors.WorkflowRuntimeError: Error in step 'run_agent_step': 'toolUse'

This should be simpler than the example in the docs, but I keep getting this error. Can anyone help me understand why?

I am running Python 3.13 with llama-index 0.12.24.post1, using Anthropic Claude 3.5 Sonnet as the LLM.
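
For reference, llm in the snippet above was a BedrockConverse instance (see the comments and answer below). A minimal sketch of that setup, assuming the llama-index-llms-bedrock-converse integration; the model ID and region are illustrative, not taken from the question:

from llama_index.llms.bedrock_converse import BedrockConverse

# Illustrative values -- substitute your own Bedrock model ID and AWS region
llm = BedrockConverse(
    model="anthropic.claude-3-5-sonnet-20240620-v1:0",
    region_name="us-east-1",
)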


asked Mar 17 by LMc (edited Mar 17)

2 Comments

  • In my case, llm was a BedrockConverse instance. After upgrading llama-index-llms-bedrock-converse to 0.4.11 this works: GitHub PR – LMc
  • ^^ this worked. – Harshit Singhai

1 Answer


In my case, llm was a BedrockConverse instance. The traceback points at the Bedrock Converse streaming handler (base.py, line 495), which indexes content_block_start["toolUse"] unconditionally and so raises KeyError whenever a streamed content block starts without a "toolUse" entry. After upgrading llama-index-llms-bedrock-converse to 0.4.11, this works.
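
To apply the fix, upgrade the integration package (the version pin comes from the comment above; the command is a standard pip upgrade):

pip install -U "llama-index-llms-bedrock-converse>=0.4.11"

With the upgraded package, the snippet from the question runs unchanged once llm is defined as a BedrockConverse instance, as sketched under the question.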
