deepeval automatically loads environment variables from dotenv files in this order: .env → .env.{APP_ENV} → .env.local (highest precedence). Existing process environment variables are never overwritten—process env always wins.
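The layering described above can be sketched in a few lines. This is an illustration of the documented precedence only, not deepeval's actual loader; `resolve_env` and the sample dicts are hypothetical:

```python
import os

def resolve_env(process_env: dict, dotenv_files: list[dict]) -> dict:
    """Illustrates the documented layering: later dotenv files override
    earlier ones (.env -> .env.{APP_ENV} -> .env.local), but existing
    process environment variables always win."""
    merged: dict = {}
    for f in dotenv_files:       # lowest precedence first
        merged.update(f)
    merged.update(process_env)   # process env is never overwritten
    return merged

dot_env = {"MODEL": "gpt-4o", "REGION": "us-east-1"}   # .env
dot_env_local = {"MODEL": "local-llama"}               # .env.local
process = {"REGION": "eu-west-1"}                      # already in os.environ
# MODEL comes from .env.local; REGION comes from the process environment
print(resolve_env(process, [dot_env, dot_env_local]))
```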
General Settings
These are the core settings for controlling deepeval's behavior, file paths, and run identifiers.
| Variable | Values | Effect |
|---|---|---|
| CONFIDENT_API_KEY | string / unset | Logs in to Confident AI, enabling tracing/observability and automatic upload of test results to the cloud when an evaluation completes. |
| DEEPEVAL_DISABLE_DOTENV | 1 / unset | Disable dotenv autoload at import. |
| ENV_DIR_PATH | path / unset | Directory containing .env files (defaults to CWD when unset). |
| APP_ENV | string / unset | When set, loads .env.{APP_ENV} between .env and .env.local. |
| DEEPEVAL_DISABLE_LEGACY_KEYFILE | 1 / unset | Disable reading the legacy .deepeval/.deepeval JSON keystore into the environment. |
| DEEPEVAL_DEFAULT_SAVE | dotenv[:path] / unset | Default persistence target for deepeval set-* --save when --save is omitted. |
| DEEPEVAL_FILE_SYSTEM | READ_ONLY / unset | Restrict file writes in constrained environments. |
| DEEPEVAL_RESULTS_FOLDER | path / unset | Export a timestamped JSON of the latest test run into this directory (created if needed). |
| DEEPEVAL_IDENTIFIER | string / unset | Default identifier for runs (same idea as deepeval test run -id ...). |
Display / Truncation
These settings control output verbosity and text truncation in logs and displays.
| Variable | Values | Effect |
|---|---|---|
| DEEPEVAL_MAXLEN_TINY | int | Max length used for "tiny" shorteners (default: 40). |
| DEEPEVAL_MAXLEN_SHORT | int | Max length used for "short" shorteners (default: 60). |
| DEEPEVAL_MAXLEN_MEDIUM | int | Max length used for "medium" shorteners (default: 120). |
| DEEPEVAL_MAXLEN_LONG | int | Max length used for "long" shorteners (default: 240). |
| DEEPEVAL_SHORTEN_DEFAULT_MAXLEN | int / unset | Overrides the default max length used by shorten(...) (falls back to DEEPEVAL_MAXLEN_LONG when unset). |
| DEEPEVAL_SHORTEN_SUFFIX | string | Suffix used by shorten(...) (default: ...). |
| DEEPEVAL_VERBOSE_MODE | 1 / unset | Enable verbose mode globally (where supported). |
| DEEPEVAL_LOG_STACK_TRACES | 1 / unset | Log stack traces for errors (where supported). |
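A minimal sketch of the truncation behavior these settings control, assuming the common cut-and-append-suffix pattern (the real shorten(...) implementation's slicing details may differ):

```python
def shorten(text: str, maxlen: int = 240, suffix: str = "...") -> str:
    """Truncate text to at most maxlen characters, appending suffix
    when a cut is made. Defaults mirror DEEPEVAL_MAXLEN_LONG (240)
    and the documented default suffix ("...")."""
    if len(text) <= maxlen:
        return text  # short enough: returned unchanged
    return text[: maxlen - len(suffix)] + suffix
```

Raising DEEPEVAL_MAXLEN_LONG (or DEEPEVAL_SHORTEN_DEFAULT_MAXLEN) is therefore the lever to pull when long prompts or responses are being cut off in logs.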
Retry / Backoff Tuning
These settings control retry and backoff behavior for API calls.
| Variable | Type | Default | Notes |
|---|---|---|---|
| DEEPEVAL_RETRY_MAX_ATTEMPTS | int | 2 | Total attempts (1 retry) |
| DEEPEVAL_RETRY_INITIAL_SECONDS | float | 1.0 | Initial backoff |
| DEEPEVAL_RETRY_EXP_BASE | float | 2.0 | Exponential base (≥ 1) |
| DEEPEVAL_RETRY_JITTER | float | 2.0 | Random jitter added per retry |
| DEEPEVAL_RETRY_CAP_SECONDS | float | 5.0 | Max sleep between retries |
| DEEPEVAL_SDK_RETRY_PROVIDERS | list | unset | Provider slugs for which retries are delegated to provider SDKs (supports ["*"]). |
| DEEPEVAL_RETRY_BEFORE_LOG_LEVEL | int | unset | Log level for "before retry" logs (defaults to LOG_LEVEL if set, else INFO). |
| DEEPEVAL_RETRY_AFTER_LOG_LEVEL | int | unset | Log level for "after retry" logs (defaults to ERROR). |
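How these knobs typically combine can be sketched as capped exponential backoff with additive jitter. The exact formula deepeval uses is an implementation detail; this is the standard pattern, shown with the documented defaults:

```python
import random

def backoff_delays(max_attempts=2, initial=1.0, exp_base=2.0,
                   jitter=2.0, cap=5.0, rng=random.random):
    """Sketch of capped exponential backoff: for retry n, sleep
    min(cap, initial * exp_base**n) plus up to `jitter` seconds
    of random noise. Parameter defaults mirror the table above."""
    delays = []
    for n in range(max_attempts - 1):  # retries = attempts - 1
        delay = min(cap, initial * exp_base ** n) + rng() * jitter
        delays.append(delay)
    return delays

# With the defaults (2 attempts), there is a single retry after
# roughly 1-3 seconds depending on jitter.
```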
Timeouts / Concurrency
These options let you tune timeout limits and concurrency for parallel execution and provider calls.
| Variable | Values | Effect |
|---|---|---|
| DEEPEVAL_MAX_CONCURRENT_DOC_PROCESSING | int | Max concurrent document processing tasks (default: 2). |
| DEEPEVAL_TIMEOUT_THREAD_LIMIT | int | Max threads used by timeout machinery (default: 128). |
| DEEPEVAL_TIMEOUT_SEMAPHORE_WARN_AFTER_SECONDS | float | Warn if acquiring the timeout semaphore takes too long (default: 5.0). |
| DEEPEVAL_PER_ATTEMPT_TIMEOUT_SECONDS_OVERRIDE | float / unset | Per-attempt timeout override for provider calls (preferred override key). |
| DEEPEVAL_PER_TASK_TIMEOUT_SECONDS_OVERRIDE | float / unset | Outer timeout budget override for a metric/test case (preferred override key). |
| DEEPEVAL_TASK_GATHER_BUFFER_SECONDS_OVERRIDE | float / unset | Overrides the extra buffer time added to gather/drain after tasks complete. |
| DEEPEVAL_PER_ATTEMPT_TIMEOUT_SECONDS | float (computed) | Read-only computed value. To override, set DEEPEVAL_PER_ATTEMPT_TIMEOUT_SECONDS_OVERRIDE. |
| DEEPEVAL_PER_TASK_TIMEOUT_SECONDS | float (computed) | Read-only computed value. To override, set DEEPEVAL_PER_TASK_TIMEOUT_SECONDS_OVERRIDE. |
| DEEPEVAL_TASK_GATHER_BUFFER_SECONDS | float (computed) | Read-only computed value. To override, set DEEPEVAL_TASK_GATHER_BUFFER_SECONDS_OVERRIDE. |
Telemetry / Debug
These flags let you enable debug mode, opt out of telemetry, and control diagnostic logging.
| Variable | Values | Effect |
|---|---|---|
| DEEPEVAL_DEBUG_ASYNC | 1 / unset | Enable extra async debugging (where supported). |
| DEEPEVAL_TELEMETRY_OPT_OUT | 1 / unset | Opt out of telemetry (telemetry is enabled when unset). |
| DEEPEVAL_UPDATE_WARNING_OPT_IN | 1 / unset | Opt in to update warnings (where supported). |
| DEEPEVAL_GRPC_LOGGING | 1 / unset | Enable extra gRPC logging. |
Model Settings
You can configure model providers by setting a combination of environment variables (API keys, model names, provider flags, etc.). However, we recommend using the CLI commands instead, which will set these variables for you.
For example, running:

```bash
deepeval set-openai --api-key=<key> --model=gpt-4o
```

automatically sets OPENAI_API_KEY, OPENAI_MODEL_NAME, and USE_OPENAI_MODEL=1.
Explicit constructor arguments (e.g. OpenAIModel(api_key=...)) always take precedence over environment variables. You can also set TEMPERATURE to provide a default temperature for all model instances.
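The precedence rule amounts to a simple lookup chain. The helper below is hypothetical (not a deepeval API) and only illustrates the documented order: explicit argument, then environment variable, then default:

```python
import os

def resolve_setting(explicit, env_var, default=None):
    """Sketch of the documented precedence: an explicit constructor
    argument beats the environment variable, which beats the default."""
    if explicit is not None:
        return explicit
    return os.environ.get(env_var, default)

# e.g. OpenAIModel(api_key="sk-...") would win over OPENAI_API_KEY,
# and a model passed to a metric would win over OPENAI_MODEL_NAME.
```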
Variable Options
When set to 1, USE_{PROVIDER}_MODEL (e.g. USE_OPENAI_MODEL) tells deepeval which provider to use for LLM-as-a-judge metrics when no model is explicitly passed.
Each provider also has its own set of variables for API keys, model names, and other provider-specific options. Expand the sections below to see the full list for each provider.
Remember, avoid editing these variables by hand except when debugging. Use the CLI instead, as deepeval takes care of managing these variables for you.
AWS / Amazon Bedrock
If AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are not set, the AWS SDK default credentials chain is used.
| Variable | Values | Effect |
|---|---|---|
| AWS_ACCESS_KEY_ID | string / unset | Optional AWS access key ID for authentication. |
| AWS_SECRET_ACCESS_KEY | string / unset | Optional AWS secret access key for authentication. |
| USE_AWS_BEDROCK_MODEL | 1 / unset | Prefer Bedrock as the default LLM provider (where applicable). |
| AWS_BEDROCK_MODEL_NAME | string / unset | Bedrock model ID (e.g. anthropic.claude-3-opus-20240229-v1:0). |
| AWS_BEDROCK_REGION | string / unset | AWS region (e.g. us-east-1). |
| AWS_BEDROCK_COST_PER_INPUT_TOKEN | float / unset | Optional input-token cost used for cost reporting. |
| AWS_BEDROCK_COST_PER_OUTPUT_TOKEN | float / unset | Optional output-token cost used for cost reporting. |
Anthropic
| Variable | Values | Effect |
|---|---|---|
| ANTHROPIC_API_KEY | string / unset | Anthropic API key. |
| ANTHROPIC_MODEL_NAME | string / unset | Optional default Anthropic model name. |
| ANTHROPIC_COST_PER_INPUT_TOKEN | float / unset | Optional input-token cost used for cost reporting. |
| ANTHROPIC_COST_PER_OUTPUT_TOKEN | float / unset | Optional output-token cost used for cost reporting. |
Azure OpenAI
| Variable | Values | Effect |
|---|---|---|
| USE_AZURE_OPENAI | 1 / unset | Prefer Azure OpenAI as the default LLM provider (where applicable). |
| AZURE_OPENAI_API_KEY | string / unset | Azure OpenAI API key. |
| AZURE_OPENAI_ENDPOINT | string / unset | Azure OpenAI endpoint URL. |
| OPENAI_API_VERSION | string / unset | Azure OpenAI API version. |
| AZURE_DEPLOYMENT_NAME | string / unset | Azure deployment name. |
| AZURE_MODEL_NAME | string / unset | Optional Azure model name (for metadata / reporting). |
| AZURE_MODEL_VERSION | string / unset | Optional Azure model version (for metadata / reporting). |
OpenAI
| Variable | Values | Effect |
|---|---|---|
| USE_OPENAI_MODEL | 1 / unset | Prefer OpenAI as the default LLM provider (where applicable). |
| OPENAI_API_KEY | string / unset | OpenAI API key. |
| OPENAI_MODEL_NAME | string / unset | Optional default OpenAI model name. |
| OPENAI_COST_PER_INPUT_TOKEN | float / unset | Optional input-token cost used for cost reporting. |
| OPENAI_COST_PER_OUTPUT_TOKEN | float / unset | Optional output-token cost used for cost reporting. |
DeepSeek
| Variable | Values | Effect |
|---|---|---|
| USE_DEEPSEEK_MODEL | 1 / unset | Prefer DeepSeek as the default LLM provider (where applicable). |
| DEEPSEEK_API_KEY | string / unset | DeepSeek API key. |
| DEEPSEEK_MODEL_NAME | string / unset | Optional default DeepSeek model name. |
| DEEPSEEK_COST_PER_INPUT_TOKEN | float / unset | Optional input-token cost used for cost reporting. |
| DEEPSEEK_COST_PER_OUTPUT_TOKEN | float / unset | Optional output-token cost used for cost reporting. |
Gemini
| Variable | Values | Effect |
|---|---|---|
| USE_GEMINI_MODEL | 1 / unset | Prefer Gemini as the default LLM provider (where applicable). |
| GOOGLE_API_KEY | string / unset | Google API key. |
| GEMINI_MODEL_NAME | string / unset | Optional default Gemini model name. |
| GOOGLE_GENAI_USE_VERTEXAI | 1 / 0 / unset | If set, use Vertex AI via google-genai (where supported). |
| GOOGLE_CLOUD_PROJECT | string / unset | Optional GCP project (Vertex AI). |
| GOOGLE_CLOUD_LOCATION | string / unset | Optional GCP location/region (Vertex AI). |
| GOOGLE_SERVICE_ACCOUNT_KEY | string / unset | Optional service account key (Vertex AI). |
| VERTEX_AI_MODEL_NAME | string / unset | Optional Vertex AI model name. |
Grok
| Variable | Values | Effect |
|---|---|---|
| USE_GROK_MODEL | 1 / unset | Prefer Grok as the default LLM provider (where applicable). |
| GROK_API_KEY | string / unset | Grok API key. |
| GROK_MODEL_NAME | string / unset | Optional default Grok model name. |
| GROK_COST_PER_INPUT_TOKEN | float / unset | Optional input-token cost used for cost reporting. |
| GROK_COST_PER_OUTPUT_TOKEN | float / unset | Optional output-token cost used for cost reporting. |
LiteLLM
| Variable | Values | Effect |
|---|---|---|
| USE_LITELLM | 1 / unset | Prefer LiteLLM as the default LLM provider (where applicable). |
| LITELLM_API_KEY | string / unset | Optional API key passed to LiteLLM. |
| LITELLM_MODEL_NAME | string / unset | Default LiteLLM model name. |
| LITELLM_API_BASE | string / unset | Optional base URL for the LiteLLM endpoint. |
| LITELLM_PROXY_API_BASE | string / unset | Optional proxy base URL (if using a proxy). |
| LITELLM_PROXY_API_KEY | string / unset | Optional proxy API key (if using a proxy). |
Local Model
| Variable | Values | Effect |
|---|---|---|
| USE_LOCAL_MODEL | 1 / unset | Prefer the local model adapter as the default LLM provider (where applicable). |
| LOCAL_MODEL_API_KEY | string / unset | Optional API key for the local model endpoint (if required). |
| LOCAL_MODEL_NAME | string / unset | Optional default local model name. |
| LOCAL_MODEL_BASE_URL | string / unset | Base URL for the local model endpoint. |
| LOCAL_MODEL_FORMAT | string / unset | Optional format hint for the local model integration. |
Kimi (Moonshot)
| Variable | Values | Effect |
|---|---|---|
| USE_MOONSHOT_MODEL | 1 / unset | Prefer Moonshot as the default LLM provider (where applicable). |
| MOONSHOT_API_KEY | string / unset | Moonshot API key. |
| MOONSHOT_MODEL_NAME | string / unset | Optional default Moonshot model name. |
| MOONSHOT_COST_PER_INPUT_TOKEN | float / unset | Optional input-token cost used for cost reporting. |
| MOONSHOT_COST_PER_OUTPUT_TOKEN | float / unset | Optional output-token cost used for cost reporting. |
Ollama
| Variable | Values | Effect |
|---|---|---|
| OLLAMA_MODEL_NAME | string / unset | Optional default Ollama model name. |
Portkey
| Variable | Values | Effect |
|---|---|---|
| USE_PORTKEY_MODEL | 1 / unset | Prefer Portkey as the default LLM provider (where applicable). |
| PORTKEY_API_KEY | string / unset | Portkey API key. |
| PORTKEY_MODEL_NAME | string / unset | Optional default model name passed to Portkey. |
| PORTKEY_BASE_URL | string / unset | Optional Portkey base URL. |
| PORTKEY_PROVIDER_NAME | string / unset | Optional provider name (Portkey routing). |
Embeddings
| Variable | Values | Effect |
|---|---|---|
| USE_AZURE_OPENAI_EMBEDDING | 1 / unset | Prefer Azure OpenAI embeddings as the default embeddings provider (where applicable). |
| AZURE_EMBEDDING_DEPLOYMENT_NAME | string / unset | Azure embedding deployment name. |
| USE_LOCAL_EMBEDDINGS | 1 / unset | Prefer local embeddings as the default embeddings provider (where applicable). |
| LOCAL_EMBEDDING_API_KEY | string / unset | Optional API key for the local embeddings endpoint (if required). |
| LOCAL_EMBEDDING_MODEL_NAME | string / unset | Optional default local embedding model name. |
| LOCAL_EMBEDDING_BASE_URL | string / unset | Base URL for the local embeddings endpoint. |