Gemini
DeepEval allows you to directly integrate Gemini models into all available LLM-based metrics, either through the command line or within your Python code.
Command Line
Run the following command in your terminal to configure your deepeval environment to use Gemini models for all metrics.
# e.g. --model-name="gemini-2.0-flash-001"
deepeval set-gemini \
    --model-name=<model> \
    --google-api-key=<api_key>
The CLI command above sets Gemini as the default provider for all metrics, unless overridden in Python code. To use a different default model provider, you must first unset Gemini:
deepeval unset-gemini
You can persist CLI settings with the optional --save flag.
See Flags and Configs -> Persisting CLI settings.
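For example, to persist the configuration to a dotenv file (the `--save=dotenv` value is an assumption here; check Flags and Configs -> Persisting CLI settings for the exact syntax):

```shell
deepeval set-gemini \
    --model-name="gemini-2.0-flash-001" \
    --google-api-key=<api_key> \
    --save=dotenv
```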
Python
Alternatively, you can specify your model directly in code using GeminiModel from DeepEval's model collection. By default, model is set to gemini-1.5-pro.
from deepeval.models import GeminiModel
from deepeval.metrics import AnswerRelevancyMetric
model = GeminiModel(
model="gemini-1.5-pro",
api_key="Your Gemini API Key",
temperature=0
)
answer_relevancy = AnswerRelevancyMetric(model=model)
There are ZERO mandatory and FOUR optional parameters when creating a GeminiModel. All parameters are optional at initialization time, but model and api_key must still be resolvable at runtime: provide them either explicitly as constructor arguments, or via DeepEval settings / environment variables (constructor arguments take precedence). See Environment variables and settings for the Gemini-related environment variables:
- [Optional] model: A string specifying the name of the Gemini model to use. Defaults to GEMINI_MODEL_NAME if not passed; raises an error at runtime if unset.
- [Optional] api_key: A string specifying the Google API key for authentication. Defaults to GOOGLE_API_KEY if not passed; raises an error at runtime if unset.
- [Optional] temperature: A float specifying the model temperature. Defaults to TEMPERATURE if not passed; falls back to 0.0 if unset.
- [Optional] generation_kwargs: A dictionary of additional generation parameters forwarded to the Gemini API generate_content(...) call.
At runtime, you must provide an API key (via api_key or GOOGLE_API_KEY) unless you’re using Vertex AI. See Vertex AI.
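The resolution order described above (constructor argument, then environment variable, then default or error) can be sketched as follows. This is an illustrative helper, not DeepEval's actual implementation; the environment variable names come from the parameter list above:

```python
import os

def resolve_setting(explicit, env_var, default=None):
    """Return the explicit constructor argument if given, otherwise
    fall back to the environment variable, otherwise the default
    (or raise if there is none)."""
    if explicit is not None:
        return explicit
    value = os.environ.get(env_var)
    if value is not None:
        return value
    if default is not None:
        return default
    raise ValueError(f"{env_var} is not set and no explicit value was provided")

# A constructor argument wins over the environment:
os.environ["GEMINI_MODEL_NAME"] = "gemini-2.0-flash"
print(resolve_setting("gemini-1.5-pro", "GEMINI_MODEL_NAME"))  # gemini-1.5-pro

# With no explicit argument, the environment variable is used:
print(resolve_setting(None, "GEMINI_MODEL_NAME"))  # gemini-2.0-flash
```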
Any **kwargs you would like to use for your model can be passed through the generation_kwargs parameter. However, we recommend double-checking which parameters your model and model provider support in their official documentation.
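For example, generation parameters can be forwarded like this. The keys shown (top_p, max_output_tokens) are standard Gemini generation parameters, but verify them against the Gemini API documentation for your chosen model:

```python
from deepeval.models import GeminiModel

# generation_kwargs is forwarded to the generate_content(...) call;
# double-check each key against the Gemini API docs.
model = GeminiModel(
    model="gemini-2.0-flash",
    api_key="Your Gemini API Key",
    temperature=0,
    generation_kwargs={
        "top_p": 0.9,
        "max_output_tokens": 1024,
    },
)
```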
Available Gemini Models
This list only displays some of the available models. For a comprehensive list, refer to Gemini's official documentation.
Below is a list of commonly used Gemini models:
gemini-1.5-pro
gemini-1.5-pro-002
gemini-1.5-flash
gemini-1.5-flash-002
gemini-1.5-flash-8b
gemini-2.0-flash
gemini-2.0-flash-lite
gemini-2.5-pro
gemini-2.5-flash
gemini-2.5-flash-lite
gemini-3-pro
gemini-3-pro-preview
gemini-pro
gemini-pro-vision