You can also use the callback handler with LCEL (LangChain Expression Language).
```python
from langchain_openai import ChatOpenAI
from langchain_core.runnables import RunnablePassthrough, RunnableConfig
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
import lunary

# Initialize the Lunary handler
handler = lunary.LunaryCallbackHandler()

config = RunnableConfig({"callbacks": [handler]})

prompt = ChatPromptTemplate.from_template("Tell me a short joke about {topic}")
output_parser = StrOutputParser()
model = ChatOpenAI(model="gpt-4")

chain = (
    {"topic": RunnablePassthrough()}
    | prompt
    | model
    | output_parser
)

chain.invoke("ice cream", config=config)  # You need to pass the config each time you call `.invoke()`
```
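To see why passing the handler in `config` is enough for it to observe the whole run, here is a simplified, pure-Python sketch of the callback pattern LangChain uses. All names here (`RecordingHandler`, `run_chain`) are illustrative, not the real LangChain or Lunary internals:

```python
# Simplified sketch of how a callback handler observes a chain run.
# These names are illustrative, not the actual LangChain/Lunary internals.
class RecordingHandler:
    """Stands in for a handler like LunaryCallbackHandler."""
    def __init__(self):
        self.events = []

    def on_chain_start(self, inputs):
        self.events.append(("chain_start", inputs))

    def on_chain_end(self, outputs):
        self.events.append(("chain_end", outputs))


def run_chain(steps, value, handlers):
    # Notify every registered handler before the run starts...
    for h in handlers:
        h.on_chain_start(value)
    # ...pipe the value through each step of the chain...
    for step in steps:
        value = step(value)
    # ...and notify the handlers again once the run finishes.
    for h in handlers:
        h.on_chain_end(value)
    return value


handler = RecordingHandler()
result = run_chain([str.upper, lambda s: s + "!"], "ice cream", [handler])
# result is "ICE CREAM!" and handler.events holds the start/end events
```

Because the handler is only consulted at run boundaries, it can record traces without altering the chain's output, which is the same reason the Lunary handler is non-intrusive.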
The callback handler works seamlessly with LangChain agents and chains. For agents, it is recommended to pass a name in the metadata so you can track them in the dashboard. Example:
```python
from langchain.schema import SystemMessage, HumanMessage
from langserve import RemoteRunnable

openai = RemoteRunnable("http://localhost:8000/openai/")

prompt = [
    SystemMessage(content="Act like either a cat or a parrot."),
    HumanMessage(content="Hello!"),
]

res = openai.invoke(prompt, config={"metadata": {"user_id": "123", "tags": ["user1"]}})
print(res)
```
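The metadata passed in `config` travels with the run and ends up attached to the recorded trace. A minimal pure-Python sketch of that idea, with illustrative names (`invoke_with_tracking` is hypothetical, not Lunary's actual implementation):

```python
# Illustrative sketch: per-call metadata is merged into each recorded event,
# which is how fields like user_id end up on the trace in the dashboard.
# invoke_with_tracking is a hypothetical helper, not a Lunary API.
def invoke_with_tracking(fn, value, config, events):
    metadata = config.get("metadata", {})
    # Every event recorded for this run carries the caller's metadata.
    events.append({"type": "start", "input": value, **metadata})
    output = fn(value)
    events.append({"type": "end", "output": output, **metadata})
    return output


events = []
out = invoke_with_tracking(str.title, "hello", {"metadata": {"user_id": "123"}}, events)
# Both recorded events now include user_id, so the run can be
# filtered by user in a dashboard.
```

Passing metadata per call (rather than baking it into the chain) is what lets a single deployed chain serve many users while still keeping their traces separate.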