# MCP Task Completion
The MCP task completion metric is a conversational metric that uses LLM-as-a-judge to evaluate how effectively an MCP-based LLM agent accomplishes a task. Task Completion is a self-explaining LLM-Eval, meaning it outputs a reason for its metric score.
## Required Arguments

To use the `MCPTaskCompletionMetric`, you'll have to provide the following arguments when creating a `ConversationalTestCase`:

- `turns`

You will also need to provide `mcp_tools_called`, `mcp_resources_called`, and `mcp_prompts_called` inside the turns whenever there is an MCP interaction in your agent's workflow. You can learn more about creating MCP test cases here.

Read the How Is It Calculated section below to learn more.
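For illustration, here is a minimal sketch of a turn that records an MCP tool call. The `MCPToolCall` import path and its field names (`name`, `args`, `result`) are assumptions based on the pattern in the MCP test cases docs; verify them against the exact signatures there.

```python
from mcp.types import CallToolResult, TextContent
from deepeval.test_case import Turn, MCPToolCall  # MCPToolCall import path assumed

# Assumed field names — check the MCP test cases docs for exact signatures
weather_tool_call = MCPToolCall(
    name="get_weather",
    args={"city": "London"},
    result=CallToolResult(content=[TextContent(type="text", text="18°C, cloudy")]),
)

assistant_turn = Turn(
    role="assistant",
    content="It's currently 18°C and cloudy in London.",
    mcp_tools_called=[weather_tool_call],  # attach MCP interactions to the turn
)
```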
## Usage

The `MCPTaskCompletionMetric()` can be used for end-to-end multi-turn evaluations of MCP-based agents.
```python
from deepeval import evaluate
from deepeval.metrics import MCPTaskCompletionMetric
from deepeval.test_case import Turn, ConversationalTestCase, MCPServer

convo_test_case = ConversationalTestCase(
    turns=[Turn(role="...", content="..."), Turn(role="...", content="...")],
    mcp_servers=[MCPServer(...)]
)
metric = MCPTaskCompletionMetric(threshold=0.5)

# To run metric as a standalone
# metric.measure(convo_test_case)
# print(metric.score, metric.reason)

evaluate(test_cases=[convo_test_case], metrics=[metric])
```
There are SIX optional parameters when creating an `MCPTaskCompletionMetric`:

- [Optional] `threshold`: a float representing the minimum passing threshold, defaulted to 0.5.
- [Optional] `model`: a string specifying which of OpenAI's GPT models to use, OR any custom LLM model of type `DeepEvalBaseLLM`. Defaulted to `'gpt-4o'`.
- [Optional] `include_reason`: a boolean which when set to `True`, will include a reason for its evaluation score. Defaulted to `True`.
- [Optional] `strict_mode`: a boolean which when set to `True`, enforces a binary metric score: 1 for perfection, 0 otherwise. It also overrides the current threshold and sets it to 1. Defaulted to `False`.
- [Optional] `async_mode`: a boolean which when set to `True`, enables concurrent execution within the `measure()` method. Defaulted to `True`.
- [Optional] `verbose_mode`: a boolean which when set to `True`, prints the intermediate steps used to calculate said metric to the console, as outlined in the How Is It Calculated section. Defaulted to `False`.
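For instance, here is a metric with every optional parameter set explicitly; the values shown are simply the documented defaults from the list above, so this sketch behaves the same as `MCPTaskCompletionMetric()` with no arguments.

```python
from deepeval.metrics import MCPTaskCompletionMetric

# Every optional parameter set explicitly to its documented default
metric = MCPTaskCompletionMetric(
    threshold=0.5,        # minimum passing score
    model="gpt-4o",       # or any custom DeepEvalBaseLLM instance
    include_reason=True,  # attach a self-explaining reason to the score
    strict_mode=False,    # True forces a binary 1/0 score and a threshold of 1
    async_mode=True,      # run measure() concurrently
    verbose_mode=False,   # True prints intermediate calculation steps
)
```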
## As a standalone

You can also run the `MCPTaskCompletionMetric` on a single test case as a standalone, one-off execution.
```python
...

metric.measure(convo_test_case)
print(metric.score, metric.reason)
```
This is great for debugging or if you wish to build your own evaluation pipeline, but you will NOT get the benefits (testing reports, Confident AI platform) and all the optimizations (speed, caching, computation) that the `evaluate()` function or `deepeval test run` offers.
## How Is It Calculated

The `MCPTaskCompletionMetric` score is calculated according to the following equation:

$$
\text{MCP Task Completion} = \frac{\sum \text{Task Completion Scores of Unit Interactions}}{\text{Total Number of Unit Interactions}}
$$

The `MCPTaskCompletionMetric` converts turns into individual unit interactions and iterates over each interaction, using an LLM to evaluate whether the agent finished the task given by the user for that interaction. The final score is the average across all unit interactions.