
DeepSeek

deepeval allows you to use deepseek-chat and deepseek-reasoner directly from DeepSeek to run all of deepeval's metrics. The model can be set through the CLI or in Python.

Command Line

To configure your DeepSeek model through the CLI, run the following command:

deepeval set-deepseek --model=deepseek-chat \
--temperature=0

The CLI command above sets deepseek-chat as the default model for all metrics, unless overridden in Python code. To use a different default model provider, you must first unset DeepSeek:

deepeval unset-deepseek
Persisting settings

You can persist CLI settings with the optional --save flag. See Flags and Configs -> Persisting CLI settings.

Python

You can also specify your model directly in code using DeepSeekModel.

from deepeval.models import DeepSeekModel
from deepeval.metrics import AnswerRelevancyMetric

model = DeepSeekModel(
model="deepseek-chat",
api_key="your-api-key",
temperature=0
)

answer_relevancy = AnswerRelevancyMetric(model=model)

There are ZERO mandatory and SIX optional parameters when creating a DeepSeekModel:

  • [Optional] model: A string specifying the name of the DeepSeek model to use. Defaults to DEEPSEEK_MODEL_NAME if not passed; raises an error at runtime if unset.
  • [Optional] api_key: A string specifying your DeepSeek API key for authentication. Defaults to DEEPSEEK_API_KEY if not passed; raises an error at runtime if unset.
  • [Optional] temperature: A float specifying the model temperature. Defaults to TEMPERATURE if not passed; falls back to 0.0 if unset.
  • [Optional] cost_per_input_token: A float specifying the cost for each input token for the provided model. Defaults to DEEPSEEK_COST_PER_INPUT_TOKEN if available in deepeval's model cost registry, else None.
  • [Optional] cost_per_output_token: A float specifying the cost for each output token for the provided model. Defaults to DEEPSEEK_COST_PER_OUTPUT_TOKEN if available in deepeval's model cost registry, else None.
  • [Optional] generation_kwargs: A dictionary of additional generation parameters forwarded to the OpenAI chat.completions.create(...) call.
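When both cost parameters are set, the cost of a run is simply a linear combination of token counts and the per-token rates. A minimal sketch of that arithmetic (the helper below and the token counts are illustrative, not part of deepeval's API):

```python
# Illustrative helper (not a deepeval function): total cost as a linear
# combination of token counts and the per-token rates described above.
def evaluation_cost(input_tokens: int, output_tokens: int,
                    cost_per_input_token: float,
                    cost_per_output_token: float) -> float:
    return (input_tokens * cost_per_input_token
            + output_tokens * cost_per_output_token)

# Hypothetical token counts and per-token rates, for illustration only.
total = evaluation_cost(1_000, 250, 0.00000027, 0.0000011)
```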

All parameters can be provided either explicitly as constructor arguments or via deepeval settings / environment variables, with constructor arguments taking precedence. See Environment variables and settings for the DeepSeek-related environment variables.

tip

Any **kwargs you would like to use for your model can be passed through the generation_kwargs parameter. However, double-check which parameters your model and model provider support in their official docs.

Available DeepSeek Models

Below is the comprehensive list of available DeepSeek models in deepeval:

  • deepseek-chat
  • deepseek-v3.2
  • deepseek-v3.2-exp
  • deepseek-v3.1
  • deepseek-v3
  • deepseek-reasoner
  • deepseek-r1
  • deepseek-r1-lite
  • deepseek-v2.5
  • deepseek-coder
  • deepseek-coder-6.7b
  • deepseek-coder-33b