Record and replay chat conversations in your chatbot app. This helps you understand where your chatbot falls short and how to improve it.
Chats integrate seamlessly with traces by reconciling messages with LLM calls and agents.
You can record chats on the backend, or directly on the frontend if that's easier for you.
Set up the SDK
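Before opening threads, initialize the SDK with your project key. Here's a minimal sketch; lunary.init and the appId setting are assumptions based on a typical setup, so check your Lunary dashboard for the exact key to use.
// Minimal setup sketch (assumed API: lunary.init with your project's app/public key)
import lunary from "lunary"

lunary.init({
  appId: "your-app-id", // replace with the key from your Lunary dashboard
})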
Open a thread
Start by opening a thread.
const thread = lunary.openThread()
thread = lunary.open_thread()
You can resume an existing thread by passing its ID.
// Save `thread.id` somewhere
const thread = lunary.openThread({
  id: 'your-thread-id', // Replace with your actual thread ID
})
# Save `thread.id` somewhere
existing_thread_id = 'your-thread-id' # Replace with your actual thread ID
thread = lunary.open_thread(existing_thread_id)
You can also add tags to a thread by passing an object with a tags param:
const thread = lunary.openThread({
tags: ['support']
})
thread = lunary.open_thread(existing_thread_id, tags=['support'])
Track messages
Now you can track messages. The supported roles are assistant, user, system, and tool.
thread.trackMessage({
role: 'user',
content: 'Hello, please help me'
})
thread.trackMessage({
role: 'assistant',
content: 'Hello, how can I help you?'
})
thread.track_message({
"role": "user",
"content": "Hello, please help me"
})
thread.track_message({
"role": "assistant",
"content": "Hello, how can I help you?"
})
Track custom events
You can track custom events that happen within your chatbot. This can include things like:
- opening a document
- clicking a button
- submitting a form
- user activity or inactivity
- other events that you want to track
thread.trackEvent("event-name")
// you can also track additional metadata
thread.trackEvent("open-document", {
documentName: "my-document.pdf",
})
thread.trackEvent("event-name")
// you can also track additional metadata
thread.trackEvent("open-document", {
documentName: "my-document.pdf",
})
thread.track_event("event-name")
# you can also use the following optional parameters
thread.track_event("event-name", user_id="user1", user_props={"email": "hello@test.com"}, metadata={})
Capture user feedback
Finally, you can track user feedback on bot replies. The ID to pass is the one returned by trackMessage:
const msgId = thread.trackMessage({
role: "assistant",
content: "Hope you like my answers :)"
})
lunary.trackFeedback(msgId, { thumb: "up" })
msg_id = thread.track_message({
"role": "assistant",
"content": "Hope you like my answers :)"
})
lunary.track_feedback(msg_id, { "thumb": "up" })
To remove feedback, pass null as the feedback data.
lunary.trackFeedback(msgId, { thumb: null })
lunary.track_feedback(msg_id, { "thumb": None })
Reconcile with LLM calls & agents
To take full advantage of Lunary's tracing capabilities, you can reconcile your LLM and agent runs with your messages. Set the message ID as the parent of a run, and Lunary will automatically reconcile the messages with their runs.
const message = { role: "user", content: "Hello!" };
const msgId = thread.trackMessage(message);

// `openai` is an OpenAI client wrapped for Lunary tracing,
// so the call can be attached to the message via `.setParent()`
const res = await openai.chat.completions
  .create({
    model: "gpt-4o",
    temperature: 1,
    messages: [message],
  })
  .setParent(msgId);

thread.trackMessage({
  role: "assistant",
  content: res.choices[0].message.content,
});
message = {"role": "user", "content": "Hello!"}
msg_id = thread.track_message(message)

# `client` is an OpenAI client monitored by Lunary, so the call
# can be attached to the message via the `parent` parameter
chat_completion = client.chat.completions.create(
    messages=[message],
    model="gpt-4o",
    parent=msg_id,
)

thread.track_message(
    {"role": "assistant", "content": chat_completion.choices[0].message.content}
)
If you're using LangChain or agents behind your chatbot, you can inject the current message ID into the context as a parent:
const msgId = thread.trackMessage({ role: "user", content: "Hello!" });
// In your backend, inject the message id into the context
const agent = lunary.wrapAgent(function ChatbotAgent(query) {
// your custom code...
});
await agent("Hello!").setParent(msgId);
msg_id = thread.track_message({ "role": "user", "content": "Hello!" })
# In your backend, inject the message id into the context
with lunary.parent(msg_id):
# your custom code...
pass
Note that if you're tracking chats directly on the frontend, it's safe to pass the message ID from your frontend to your backend.
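For example, here's a sketch of forwarding the ID across that boundary. The /api/chat route, the parentMessageId field, and the fetch payload are illustrative assumptions, not part of the SDK; agent is the wrapped ChatbotAgent from the previous example.
// Frontend: track the user message, then send its ID along with the request
const msgId = thread.trackMessage({ role: "user", content: userInput })

await fetch("/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ message: userInput, parentMessageId: msgId }),
})

// Backend (illustrative Express-style route): use the forwarded ID as the run's parent
app.post("/api/chat", async (req, res) => {
  const { message, parentMessageId } = req.body
  const answer = await agent(message).setParent(parentMessageId)
  res.json({ answer })
})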