Ollama
DeepEval allows you to use any model served by Ollama to run evals, either through the CLI or directly in Python. Some capabilities, such as multimodal support, are detected from a known-model list.
Before getting started, make sure your Ollama model is installed and running. See the full list of available models here.
ollama run deepseek-r1:1.5b
Environment Setup
DeepEval can use a local Ollama server (default: http://localhost:11434).
Optionally set a custom host:
# .env.local
LOCAL_MODEL_BASE_URL=http://localhost:11434
Command Line
To configure your Ollama model through the CLI, run the following command. Replace deepseek-r1:1.5b with any Ollama-supported model of your choice.
deepeval set-ollama deepseek-r1:1.5b
You may also specify the base URL of your local Ollama model instance if you've defined a custom port. By default, the base URL is set to http://localhost:11434.
deepeval set-ollama deepseek-r1:1.5b \
--base-url="http://localhost:11434"
The CLI command above sets Ollama as the default provider for all metrics, unless overridden in Python code. To use a different default model provider, you must first unset Ollama:
deepeval unset-ollama
You can persist CLI settings with the optional --save flag.
See Flags and Configs -> Persisting CLI settings.
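For example, to persist the Ollama settings from the command above (the exact accepted values for --save are described in that section; the form shown here is an assumption):
deepeval set-ollama deepseek-r1:1.5b --save=dotenv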
Python
Alternatively, you can specify your model directly in code using OllamaModel from DeepEval's model collection.
from deepeval.models import OllamaModel
from deepeval.metrics import AnswerRelevancyMetric
model = OllamaModel(
model="deepseek-r1:1.5b",
base_url="http://localhost:11434",
temperature=0
)
answer_relevancy = AnswerRelevancyMetric(model=model)
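You can then run the metric on a test case as usual. A minimal sketch (the input and actual_output values below are placeholders):
from deepeval.test_case import LLMTestCase

test_case = LLMTestCase(
    input="Why is the sky blue?",
    actual_output="The sky is blue because of Rayleigh scattering."
)
answer_relevancy.measure(test_case)
print(answer_relevancy.score, answer_relevancy.reason)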
There are FOUR optional parameters when creating an OllamaModel:
- [Optional] model: A string specifying the name of the Ollama model to use. Defaults to OLLAMA_MODEL_NAME if not passed; raises an error at runtime if unset.
- [Optional] base_url: A string specifying the base URL of the Ollama server. Defaults to LOCAL_MODEL_BASE_URL if not passed; falls back to http://localhost:11434 if unset.
- [Optional] temperature: A float specifying the model temperature. Defaults to TEMPERATURE if not passed; falls back to 0.0 if unset.
- [Optional] generation_kwargs: A dictionary of additional generation parameters forwarded to Ollama's chat(..., options={...}) call.
Any additional keyword arguments for your model can be passed through the generation_kwargs parameter. However, double-check which parameters your model and model provider actually support in their official docs.
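For example, Ollama options such as num_ctx or top_p could be forwarded this way (a minimal sketch; check Ollama's documentation for the options your model supports):
model = OllamaModel(
    model="deepseek-r1:1.5b",
    generation_kwargs={"num_ctx": 4096, "top_p": 0.9}
)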
Available Ollama Models
This list only displays some of the available models. For a comprehensive list, refer to Ollama's official documentation.
Below is a list of commonly used Ollama models:
- deepseek-r1
- llama3.1
- gemma
- qwen
- mistral
- codellama
- phi3
- tinyllama
- starcoder2