
llama-index RAG: how to display retrieved context?


I am using LlamaIndex to perform retrieval-augmented generation (RAG).

Currently, I can retrieve and answer questions using the following minimal five-line example:

from llama_index import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(response)

This returns an answer, but I would like to display the retrieved context (e.g., the document chunks or sources) before the answer.

The desired output format would look something like:

Here's my retrieved context:
[x]
[y]
[z]

And here's my answer:
[answer]

What is the simplest reproducible way to modify the five-line example to achieve this?
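
A minimal sketch of one possible way to do this, keeping the same five lines and adding a loop over the retrieved chunks. This assumes the response object exposes the retrieved chunks as response.source_nodes (a list of scored nodes) and that each node offers a get_text() accessor; both are assumptions about the installed llama_index version:

from llama_index import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")

# Assumption: the retrieved chunks ride along on the response as
# NodeWithScore objects (node text plus a similarity score).
print("Here's my retrieved context:")
for source_node in response.source_nodes:
    print(source_node.node.get_text())

print("And here's my answer:")
print(response)

Alternatively, index.as_retriever().retrieve(...) returns the same scored nodes without generating an answer, which can be useful for inspecting retrieval on its own.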
