I was following an old tutorial on chaining in LangChain and writing some demo chains of my own, such as:
prompt_candidates = ChatPromptTemplate.from_template(
"""Research has shown that the following SUBJECTS are somehow related to the CAREER '{career}':
{research_list}
From the SUBJECTS list, get me a list of ACADEMIC THEMES. The ACADEMIC THEMES list must meet the following requirements:
[Long list of requirements, shortened for readability and to protect proprietary content]
"""
)
prompt_finalists = ChatPromptTemplate.from_template(
"""I have a list of ACADEMIC THEMES, all of them for the CAREER '{career}':
{academic_themes}
Another Curricular Design Expert has determined, however, that these ACADEMIC THEMES need the following CORRECTIONS:
[Another long list of requirements, shortened for readability and to protect proprietary content]
Rewrite the ACADEMIC THEMES, so they become compliant with the CORRECTIONS raised.
"""
)
# Chains definition
candidates_chain = LLMChain(llm=llm, prompt=prompt_candidates, output_key="academic_themes")
finalists_chain = LLMChain(llm=llm, prompt=prompt_finalists, output_key="finalists")
# Chaining
final_chain = SequentialChain(
chains=[candidates_chain, finalists_chain],
input_variables=["career", "research_list"],
output_variables=["finalists"],
verbose=False
)
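My mental model of what SequentialChain does here (a rough sketch of my understanding, not LangChain's actual implementation) is that each chain writes its output back into a shared dict of variables under its output_key, so later chains still see the original inputs like career:

```python
# Rough sketch of my mental model of SequentialChain (NOT LangChain's real code):
# each step reads from a shared dict of variables and writes its output back
# under its output_key, so later steps still see the original inputs.
def run_sequential(steps, inputs):
    state = dict(inputs)
    for fn, output_key in steps:
        state[output_key] = fn(state)
    return state

# Toy stand-ins for candidates_chain and finalists_chain:
candidates = lambda v: f"themes for {v['career']} from {v['research_list']}"
finalists = lambda v: f"corrected: {v['academic_themes']}"

result = run_sequential(
    [(candidates, "academic_themes"), (finalists, "finalists")],
    {"career": "AI", "research_list": "math, code"},
)
print(result["finalists"])  # → corrected: themes for AI from math, code
```

This is the behavior I was hoping to reproduce with the pipe operator.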
However, I got the following warning:
LangChainDeprecationWarning: The class `LLMChain` was deprecated in LangChain 0.1.17 and will be removed in 1.0. Use RunnableSequence, e.g., `prompt | llm` instead.
candidates_chain = LLMChain(llm=llm, prompt=prompt_candidates, output_key="academic_themes")
Indeed, I read the docs, which tell you to use the pipe ("|") operator. However, the examples provided there are very simple, usually involving just a prompt and an LLM (the exact pattern shown in the warning message itself), and I could not figure out how to adapt the pipe operator to my own chain.
I was thinking of something like:
from langchain_core.output_parsers import StrOutputParser
chain_a = prompt_candidates | llm | StrOutputParser()
chain_b = prompt_finalists | llm | StrOutputParser()
composed_chain = chain_a | chain_b
output_chain = composed_chain.invoke(
{
"career": "Artificial Intelligence",
"research_list": "\n".join(research_col)
}
)
But this gets me:
TypeError: Expected mapping type as input to ChatPromptTemplate. Received <class 'str'>.
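If I understand the traceback correctly, chain_a ends with StrOutputParser, so it emits a plain string, while prompt_finalists (the first step of chain_b) expects a mapping with career and academic_themes keys. A plain-Python analogy of the mismatch, with toy functions standing in for the real runnables:

```python
# Plain-Python analogy of (my understanding of) the failure:
# the first stage returns a string, but the second stage's prompt
# expects a dict with "career" and "academic_themes" keys.
def chain_a(inputs: dict) -> str:  # stands in for prompt_candidates | llm | StrOutputParser()
    return f"themes derived from {inputs['research_list']}"

def format_finalists(inputs: dict) -> str:  # stands in for prompt_finalists, which wants a mapping
    return f"career={inputs['career']}, themes={inputs['academic_themes']}"

def composed(inputs: dict) -> str:  # stands in for chain_a | chain_b
    return format_finalists(chain_a(inputs))  # passes a str where a dict is expected

try:
    composed({"career": "Artificial Intelligence", "research_list": "math"})
except TypeError as e:
    print(e)  # indexing a str with 'career' raises TypeError, analogous to the error above
```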
I have tried several things, but nothing worked. What am I doing wrong?