# Conversation Simulator

## Quick Summary
While the `Synthesizer` generates regular goldens representing single, atomic LLM interactions, `deepeval`'s `ConversationSimulator` mimics a fake user interacting with your chatbot to generate conversational goldens instead. This automates the otherwise manual process of prompting and testing LLM chatbots.
```python
from deepeval.conversation_simulator import ConversationSimulator

convo_simulator = ConversationSimulator(...)
convo_simulator.simulate(...)
print(convo_simulator.simulated_conversations)
```
The `ConversationSimulator` uses an LLM to generate a fake user profile and scenario, then simulates a back-and-forth with your chatbot. The resulting dialogue is used to create `ConversationalTestCase`s for evaluation using `deepeval`'s conversational metrics.
## Create Your First Simulator
```python
from deepeval.conversation_simulator import ConversationSimulator

user_profile_items = ["first name", "last name", "address", "social security number"]
user_intentions = ["opening a bank account", "disputing a payment", "enquiring about a recent transaction"]

convo_simulator = ConversationSimulator(
    user_profile_items=user_profile_items,
    user_intentions=user_intentions,
)
```
There are TWO mandatory and FOUR optional parameters when creating a `ConversationSimulator`:

- `user_profile_items`: a list of strings representing the fake user properties that should be generated for each user profile.
- `user_intentions`: a list of strings representing the possible intentions of a fake user profile. `deepeval` will randomly sample from the list of `user_intentions` to determine which user intention will be mimicked to simulate a particular conversation.
- [Optional] `opening_message`: a string that specifies your LLM chatbot's opening message. You should only provide this IF your chatbot is designed to speak before the user does. Defaulted to `None`.
- [Optional] `simulator_model`: a string specifying which of OpenAI's GPT models to use for generation, OR any custom LLM model of type `DeepEvalBaseLLM`. Defaulted to `gpt-4o`.
- [Optional] `async_mode`: a boolean which, when set to `True`, enables concurrent generation of goldens. Defaulted to `True`.
- [Optional] `max_concurrent`: an integer that determines the maximum number of goldens that can be generated in parallel at any point in time. You can decrease this value if you're running into rate limit errors. Defaulted to `100`.
The example shown above will simulate fake user profiles for a financial LLM chatbot use case.
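For instance, a simulator that overrides all of the optional defaults might look like the sketch below (the parameter names come from the list above; the values are purely illustrative):

```python
convo_simulator = ConversationSimulator(
    user_profile_items=user_profile_items,
    user_intentions=user_intentions,
    # Only set an opening message IF your chatbot speaks before the user does
    opening_message="Welcome to ABC Bank! How may I assist you today?",
    simulator_model="gpt-4o",  # or any custom model of type DeepEvalBaseLLM
    async_mode=True,  # generate goldens concurrently
    max_concurrent=20,  # lower this if you run into rate limit errors
)
```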
## Simulate Your First Conversation
To simulate your first conversation, simply define a callback that wraps around your LLM chatbot and call the `simulate()` method:
```python
...

def model_callback(input: str) -> str:
    # Replace this with a call to your LLM application
    return f"I don't know how to answer this: {input}"

convo_simulator.simulate(model_callback=model_callback)
```
There are ONE mandatory and THREE optional parameters when calling the `simulate()` method:

- `model_callback`: a callback of type `Callable[[str], str]` that wraps around the target LLM application you wish to generate output from.
- [Optional] `min_turns`: an integer that specifies the minimum number of turns to simulate per conversation.
- [Optional] `max_turns`: an integer that specifies the maximum number of turns to simulate per conversation.
- [Optional] `num_conversations`: an integer that specifies the total number of `ConversationalGolden`s to simulate.
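A call that sets all of the optional parameters might look like this (the values are purely illustrative):

```python
convo_simulator.simulate(
    model_callback=model_callback,
    min_turns=4,  # each simulated conversation has at least 4 turns...
    max_turns=10,  # ...and at most 10
    num_conversations=5,  # simulate 5 conversational goldens in total
)
```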
Your `model_callback` MUST accept ONE AND ONLY ONE parameter of type string, and MUST return a string and nothing else.
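For example, a `model_callback` that wraps an OpenAI chat model could look like the sketch below. This assumes the `openai` SDK is installed and `OPENAI_API_KEY` is set in your environment; any LLM application works, as long as the callback stays string-in, string-out:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

def model_callback(input: str) -> str:
    # ONE string parameter in, one string out -- the signature simulate() expects
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": input}],
    )
    return response.choices[0].message.content
```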
## Using Simulated Conversations
```python
from deepeval import evaluate
from deepeval.metrics import ConversationRelevancyMetric
...

# Define a conversational metric
metric = ConversationRelevancyMetric()

# Evaluate simulated conversations
evaluate(test_cases=convo_simulator.simulated_conversations, metrics=[metric])
```
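Since each simulated conversation is a `ConversationalTestCase`, you can supply any of `deepeval`'s conversational metrics, and as many of them as you like, in the same `evaluate()` call.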