OpenRouter
deepeval's integration with OpenRouter lets you route any OpenRouter-supported model through the OpenRouter gateway to power all of deepeval's metrics.
Command Line
To configure your OpenRouter model through the CLI, run the following command:
deepeval set-openrouter \
    --model "openai/gpt-4.1" \
    --base-url "https://openrouter.ai/api/v1" \
    --temperature=0 \
    --prompt-api-key
The CLI command above sets OpenRouter as the default provider for all metrics, unless overridden in Python code. To use a different default model provider, you must first unset OpenRouter:
deepeval unset-openrouter
You can persist CLI settings with the optional --save flag.
See Flags and Configs -> Persisting CLI settings.
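For example, the same command with --save appended (this just mirrors the command above; depending on your deepeval version, --save may also accept a target such as a dotenv path, so check the linked Flags and Configs page):

```shell
deepeval set-openrouter \
    --model "openai/gpt-4.1" \
    --base-url "https://openrouter.ai/api/v1" \
    --temperature=0 \
    --prompt-api-key \
    --save
```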
Python
Alternatively, you can define OpenRouterModel directly in Python code:
from deepeval.models import OpenRouterModel
from deepeval.metrics import AnswerRelevancyMetric
model = OpenRouterModel(
    model="openai/gpt-4.1",
    api_key="your-openrouter-api-key",
    # Optional: override the default OpenRouter endpoint
    base_url="https://openrouter.ai/api/v1",
    # Optional: pass OpenRouter headers via **kwargs
    default_headers={
        "HTTP-Referer": "https://your-site.com",
        "X-Title": "My eval pipeline",
    },
)
answer_relevancy = AnswerRelevancyMetric(model=model)
There are ZERO mandatory and SEVEN optional parameters when creating an OpenRouterModel:
- [Optional] model: A string specifying the OpenRouter model to use. Defaults to OPENROUTER_MODEL_NAME if set; otherwise falls back to "openai/gpt-4.1".
- [Optional] api_key: A string specifying your OpenRouter API key for authentication. Defaults to OPENROUTER_API_KEY if not passed; raises an error at runtime if unset.
- [Optional] base_url: A string specifying the base URL for the OpenRouter API endpoint. Defaults to OPENROUTER_BASE_URL if set; otherwise falls back to "https://openrouter.ai/api/v1".
- [Optional] temperature: A float specifying the model temperature. Defaults to TEMPERATURE if not passed; falls back to 0.0 if unset.
- [Optional] cost_per_input_token: A float specifying the cost of each input token for the provided model. Defaults to OPENROUTER_COST_PER_INPUT_TOKEN if not passed; raises an error at runtime if unset.
- [Optional] cost_per_output_token: A float specifying the cost of each output token for the provided model. Defaults to OPENROUTER_COST_PER_OUTPUT_TOKEN if not passed; raises an error at runtime if unset.
- [Optional] generation_kwargs: A dictionary of additional generation parameters forwarded to OpenRouter's chat.completions.create(...) call.
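The two cost parameters exist so that per-token pricing can be turned into an estimated spend for each model call; the arithmetic is simply token counts multiplied by the per-token rates. A minimal sketch of that calculation (the rates and token counts below are made-up illustrative numbers, not real OpenRouter pricing):

```python
# Illustrative per-token rates -- check OpenRouter's model pricing for real values
cost_per_input_token = 2e-06   # e.g. $2 per 1M input tokens
cost_per_output_token = 8e-06  # e.g. $8 per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated spend for one call: tokens multiplied by per-token rate."""
    return (input_tokens * cost_per_input_token
            + output_tokens * cost_per_output_token)

# A call that sent 1,200 prompt tokens and received 300 completion tokens
print(round(estimate_cost(1200, 300), 6))  # 0.0048
```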
Any additional **kwargs you would like to use for your OpenRouter client can be passed directly to OpenRouterModel(...); they are forwarded to the underlying OpenAI client constructor. We recommend double-checking the parameters and headers supported by your chosen model in the official OpenRouter docs.
For example, to pass OpenRouter-specific headers via **kwargs:
model = OpenRouterModel(
    model="openai/gpt-4.1",
    api_key="your-openrouter-api-key",
    default_headers={
        "HTTP-Referer": "https://your-site.com",
        "X-Title": "My eval pipeline",
    },
)