Updates: Conversation Orchestrator, Conversation Builder & Conversation Assist
➡️ Exact delivery dates may vary, and brands may therefore not have immediate access to all features on the date of publication. Please contact your LivePerson account team for the exact dates on which you will have access to the features.
🚨The timing and scope of these features or functionalities remain at the sole discretion of LivePerson and are subject to change.
Conversation Orchestrator
Limit document size in Conversational Context v1
To ensure system stability for all users, LivePerson is applying the document size limit already in effect in Conversation Orchestrator v2 to v1 as well.
Conversational Context documents are now restricted to 800,000 characters. A document is any grouping of data stored at the Default, User, or Conversation level inside a namespace.
For more information, check out the documentation on the Developer Center.
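If you write to Conversational Context programmatically, you can guard against oversized documents before sending them. The sketch below is illustrative only: the helper name, the JSON serialization, and the assumption that the limit is measured against the serialized payload are ours, not part of the LivePerson API.

```python
import json

# Conversational Context v1 documents are limited to 800,000 characters
# (assumption: the limit applies to the serialized document payload).
MAX_DOCUMENT_CHARS = 800_000

def fits_context_limit(document: dict) -> bool:
    """Return True if the serialized document is within the v1 size limit."""
    serialized = json.dumps(document, ensure_ascii=False)
    return len(serialized) <= MAX_DOCUMENT_CHARS

# Example: a small document easily fits the limit.
doc = {"preferences": {"language": "en-US"}, "lastIntent": "billing_question"}
print(fits_context_limit(doc))  # True
```

Checking the size client-side lets you trim or split a document before the service rejects it, rather than handling the error after the fact.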
Conversation Builder & Conversation Assist: New and improved v3 prompts for solutions that enrich answers
This release note is for you if your solution incorporates answer enrichment via Generative AI into Conversation Assist or a Conversation Builder KnowledgeAI agent. By default, answer enrichment uses GPT-4o mini as the LLM.
As discussed here in our Developer Center, LivePerson has observed that GPT-4o mini hallucinates less and follows its instructions more consistently than the older GPT-3.5 Turbo. As a result, you may be encountering more messages similar to, “I’m sorry, but I can’t find that information.”
Since such a response from the LLM yields a suboptimal experience for the agent or consumer, we’ve made a few enhancements to the prompt. Specifically, we’ve removed this part:
Instead of mentioning customer support, inform the user "sorry I couldn't find that information". This is very important to me because you already represent customer support- I believe in your abilities!
GPT-4o mini is very good at extrapolating behavior from examples. Our new prompt removes this example response without removing the existing instructions to produce answers based only on the provided context. The updated prompt shows substantial improvement: fewer “sorry I couldn’t find that information” responses, while remaining highly effective at preventing hallucinations.
We are rolling out the prompt change in the form of new v3 prompt templates that are available in the Prompt Library:
- Conversation Assist (Messaging): Enrichment Factual EN - Conversation Assist v3 (Messaging)
- KnowledgeAI agent (messaging bot): Enrichment Factual EN - Messaging bot v3
Additionally, we are updating Conversation Assist and Conversation Builder so that the v3 prompts are used by default in knowledge base rules in Conversation Assist, and in the KnowledgeAI interaction in a Conversation Builder bot.
Action required
- If you’re using a custom prompt, make the change described above and test the experience to see if it yields better results.
- If you’re using a LivePerson prompt template as is (without changes), switch to the new v3 prompt and similarly test.
Related info