
OpenRouter

deepeval's integration with OpenRouter lets you connect to the OpenRouter gateway, so any OpenRouter-supported model can power all of deepeval's metrics.

Command Line

To configure your OpenRouter model through the CLI, run the following command:

deepeval set-openrouter \
    --model "openai/gpt-4.1" \
    --base-url "https://openrouter.ai/api/v1" \
    --temperature=0 \
    --prompt-api-key
info

The CLI command above sets OpenRouter as the default provider for all metrics, unless overridden in Python code. To use a different default model provider, you must first unset OpenRouter:

deepeval unset-openrouter
Persisting settings

You can persist CLI settings with the optional --save flag. See Flags and Configs -> Persisting CLI settings.
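For example, appending the flag to the command above persists the configuration (a sketch; see the linked page for the exact --save syntax):

deepeval set-openrouter \
    --model "openai/gpt-4.1" \
    --temperature=0 \
    --prompt-api-key \
    --save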

Python

Alternatively, you can define OpenRouterModel directly in Python code:

from deepeval.models import OpenRouterModel
from deepeval.metrics import AnswerRelevancyMetric

model = OpenRouterModel(
    model="openai/gpt-4.1",
    api_key="your-openrouter-api-key",
    # Optional: override the default OpenRouter endpoint
    base_url="https://openrouter.ai/api/v1",
    # Optional: pass OpenRouter headers via **kwargs
    default_headers={
        "HTTP-Referer": "https://your-site.com",
        "X-Title": "My eval pipeline",
    },
)

answer_relevancy = AnswerRelevancyMetric(model=model)
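To verify the model is wired up end to end, you can run the metric against a single test case. A minimal sketch (the input and output below are illustrative only):

from deepeval.test_case import LLMTestCase

# Illustrative test case; replace with outputs from your own application.
test_case = LLMTestCase(
    input="What is the capital of France?",
    actual_output="Paris is the capital of France.",
)

answer_relevancy.measure(test_case)
print(answer_relevancy.score, answer_relevancy.reason)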

There are ZERO mandatory and SEVEN optional parameters when creating an OpenRouterModel:

  • [Optional] model: A string specifying the OpenRouter model to use. Defaults to OPENROUTER_MODEL_NAME if set; otherwise falls back to "openai/gpt-4.1".
  • [Optional] api_key: A string specifying your OpenRouter API key for authentication. Defaults to OPENROUTER_API_KEY if not passed; raises an error at runtime if unset.
  • [Optional] base_url: A string specifying the base URL for the OpenRouter API endpoint. Defaults to OPENROUTER_BASE_URL if set; otherwise falls back to "https://openrouter.ai/api/v1".
  • [Optional] temperature: A float specifying the model temperature. Defaults to TEMPERATURE if not passed; falls back to 0.0 if unset.
  • [Optional] cost_per_input_token: A float specifying the cost for each input token for the provided model. Defaults to OPENROUTER_COST_PER_INPUT_TOKEN if not passed; raises an error at runtime if unset.
  • [Optional] cost_per_output_token: A float specifying the cost for each output token for the provided model. Defaults to OPENROUTER_COST_PER_OUTPUT_TOKEN if not passed; raises an error at runtime if unset.
  • [Optional] generation_kwargs: A dictionary of additional generation parameters forwarded to OpenRouter's chat.completions.create(...) call (see the sketch after this list).
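As a sketch of the remaining options, the snippet below sets the temperature, per-token costs, and extra generation parameters. The per-token prices are placeholders, not real OpenRouter pricing:

from deepeval.models import OpenRouterModel

# Placeholder per-token prices; substitute the real pricing for your model.
model = OpenRouterModel(
    model="openai/gpt-4.1",
    temperature=0,
    cost_per_input_token=0.000002,   # USD per input token (placeholder)
    cost_per_output_token=0.000008,  # USD per output token (placeholder)
    generation_kwargs={"max_tokens": 1024},  # forwarded to chat.completions.create(...)
)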

Any additional **kwargs you would like to use for your OpenRouter client can be passed directly to OpenRouterModel(...). These are forwarded to the underlying OpenAI client constructor. We recommend double-checking the parameters and headers supported by your chosen model in the official OpenRouter docs.
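Because these kwargs reach the OpenAI client constructor, standard client options such as timeouts and retry counts can be set the same way. A sketch, assuming the kwargs are forwarded unchanged:

model = OpenRouterModel(
    model="openai/gpt-4.1",
    api_key="your-openrouter-api-key",
    timeout=60.0,    # request timeout in seconds (OpenAI client option)
    max_retries=2,   # automatic retry count (OpenAI client option)
)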

tip

Pass OpenRouter-specific headers via **kwargs:

model = OpenRouterModel(
    model="openai/gpt-4.1",
    api_key="your-openrouter-api-key",
    default_headers={
        "HTTP-Referer": "https://your-site.com",
        "X-Title": "My eval pipeline",
    },
)