Moonshot
DeepEval's integration with Moonshot AI allows you to use any Moonshot model to power all of DeepEval's metrics.
Command Line
To configure your Moonshot model through the CLI, run the following command:
```shell
deepeval set-moonshot \
    --model "kimi-k2-0711-preview" \
    --api-key "your-api-key" \
    --temperature=0
```
The CLI command above sets Moonshot as the default provider for all metrics, unless overridden in Python code. To use a different default model provider, you must first unset Moonshot:
```shell
deepeval unset-moonshot
```
You can persist CLI settings with the optional `--save` flag. See Flags and Configs -> Persisting CLI settings.
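For example, a minimal sketch of persisting the configuration in one step (the exact `--save` syntax is an assumption here; confirm it in Flags and Configs -> Persisting CLI settings):

```shell
# Set Moonshot as the default provider and persist the settings
# (--save syntax assumed; verify against the DeepEval CLI docs)
deepeval set-moonshot \
    --model "kimi-k2-0711-preview" \
    --api-key "your-api-key" \
    --temperature=0 \
    --save
```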
Python
Alternatively, you can define `KimiModel` directly in Python code:
```python
from deepeval.models import KimiModel
from deepeval.metrics import AnswerRelevancyMetric

model = KimiModel(
    model_name="kimi-k2-0711-preview",
    api_key="your-api-key",
    temperature=0
)

answer_relevancy = AnswerRelevancyMetric(model=model)
```
There are TWO mandatory and TWO optional parameters when creating a `KimiModel`:

- `model_name`: A string specifying the name of the Kimi model to use.
- `api_key`: A string specifying your Kimi API key for authentication.
- [Optional] `temperature`: A float specifying the model temperature. Defaulted to 0.
- [Optional] `generation_kwargs`: A dictionary of additional generation parameters supported by your model provider.
Any `**kwargs` you would like to use for your model can be passed through the `generation_kwargs` parameter. However, be sure to double-check which parameters your model and model provider support in their official docs.
Available Moonshot Models
Below is a comprehensive list of available Moonshot models:
- `kimi-k2-0711-preview`
- `kimi-thinking-preview`
- `moonshot-v1-8k`
- `moonshot-v1-32k`
- `moonshot-v1-128k`
- `moonshot-v1-8k-vision-preview`
- `moonshot-v1-32k-vision-preview`
- `moonshot-v1-128k-vision-preview`
- `kimi-latest-8k`
- `kimi-latest-32k`
- `kimi-latest-128k`