
Json Correctness

Single-turn
Referenceless

The Json correctness metric measures whether your LLM application is able to generate actual_outputs that conform to the correct Json schema.

Required Arguments

To use the JsonCorrectnessMetric, you'll have to provide the following arguments when creating an LLMTestCase:

  • input
  • actual_output

Read the How Is It Calculated section below to learn how test case parameters are used for metric calculation.

Usage

First define your schema by creating a pydantic BaseModel:

from pydantic import BaseModel

class ExampleSchema(BaseModel):
    name: str

Then supply it as the expected_schema when creating a JsonCorrectnessMetric, which can be used for end-to-end evaluation:

from deepeval import evaluate
from deepeval.metrics import JsonCorrectnessMetric
from deepeval.test_case import LLMTestCase


metric = JsonCorrectnessMetric(
    expected_schema=ExampleSchema,
    model="gpt-4",
    include_reason=True
)
test_case = LLMTestCase(
    input="Output me a random Json with the 'name' key",
    # Replace this with the actual output from your LLM application
    actual_output="{'name': null}"
)

# To run metric as a standalone
# metric.measure(test_case)
# print(metric.score, metric.reason)

evaluate(test_cases=[test_case], metrics=[metric])

There are ONE mandatory and SIX optional parameters when creating a JsonCorrectnessMetric (an illustrative configuration is sketched after this list):

  • expected_schema: a pydantic BaseModel specifying the schema of the Json that is expected from your LLM.
  • [Optional] threshold: a float representing the minimum passing threshold, defaulted to 0.5.
  • [Optional] model: a string specifying which of OpenAI's GPT models to use to generate reasons, OR any custom LLM model of type DeepEvalBaseLLM. Defaulted to gpt-4.1.
  • [Optional] include_reason: a boolean which when set to True, will include a reason for its evaluation score. Defaulted to True.
  • [Optional] strict_mode: a boolean which when set to True, enforces a binary metric score: 1 for perfection, 0 otherwise. It also overrides the current threshold and sets it to 1. Defaulted to False.
  • [Optional] async_mode: a boolean which when set to True, enables concurrent execution within the measure() method. Defaulted to True.
  • [Optional] verbose_mode: a boolean which when set to True, prints the intermediate steps used to calculate said metric to the console, as outlined in the How Is It Calculated section. Defaulted to False.
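As a rough sketch, a stricter configuration might look like this (the values are illustrative and reuse the ExampleSchema defined above):

from deepeval.metrics import JsonCorrectnessMetric

# Illustrative configuration — adjust the values to your own needs
metric = JsonCorrectnessMetric(
    expected_schema=ExampleSchema,  # the pydantic BaseModel defined above
    threshold=1.0,                  # minimum passing score
    model="gpt-4",                  # model used only to generate reasons
    include_reason=True,
    strict_mode=True,               # binary scoring: 1 for perfection, 0 otherwise
    async_mode=True,
    verbose_mode=False,
)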

Within components

You can also run the JsonCorrectnessMetric within nested components for component-level evaluation.

from deepeval.dataset import Golden
from deepeval.tracing import observe, update_current_span
...

@observe(metrics=[metric])
def inner_component():
    # Set test case at runtime
    test_case = LLMTestCase(input="...", actual_output="...")
    update_current_span(test_case=test_case)
    return

@observe
def llm_app(input: str):
    # Component can be anything from an LLM call, retrieval, agent, tool use, etc.
    inner_component()
    return

evaluate(observed_callback=llm_app, goldens=[Golden(input="Hi!")])

As a standalone

You can also run the JsonCorrectnessMetric on a single test case as a standalone, one-off execution.

...

metric.measure(test_case)
print(metric.score, metric.reason)
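DeepEval metrics also expose an asynchronous a_measure() counterpart to measure(). A minimal sketch, reusing the metric and test case from the Usage section above:

import asyncio

async def run_metric():
    # a_measure is the asynchronous counterpart of measure
    await metric.a_measure(test_case)
    print(metric.score, metric.reason)

asyncio.run(run_metric())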

How Is It Calculated?

The JsonCorrectnessMetric score is calculated according to the following equation:

\text{Json Correctness} = \begin{cases} 1 & \text{if the actual output fits the expected schema} \\ 0 & \text{otherwise} \end{cases}

The JsonCorrectnessMetric does not use an LLM for evaluation and instead uses the provided expected_schema to determine whether the actual_output can be loaded into the schema.
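Conceptually, the check amounts to attempting to parse the actual_output with the supplied model. The following is a simplified illustration of that idea (not the library's internal code), assuming pydantic v2 and the ExampleSchema defined earlier:

from pydantic import ValidationError

def fits_schema(actual_output: str) -> int:
    try:
        # Raises ValidationError if the string is not valid JSON
        # or does not match the schema
        ExampleSchema.model_validate_json(actual_output)
        return 1
    except ValidationError:
        return 0

fits_schema('{"name": "Alice"}')  # 1 — valid JSON that matches the schema
fits_schema("{'name': null}")     # 0 — single quotes are not valid JSON, and name must be a str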
