Environment Variables

deepeval automatically loads environment variables from dotenv files in this order: .env, then .env.{APP_ENV}, then .env.local (highest precedence). Existing process environment variables are never overwritten; process env always wins.
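
For example, with APP_ENV=staging, values resolve in the order shown below (later files override earlier ones, and anything already set in the process environment is left untouched). File contents are placeholders:

```
# .env                      (loaded first)
OPENAI_MODEL_NAME=gpt-4o-mini

# .env.staging              (loaded because APP_ENV=staging)
OPENAI_MODEL_NAME=gpt-4o    # overrides the value from .env

# .env.local                (loaded last, highest dotenv precedence)
OPENAI_API_KEY=<your-openai-api-key>
```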

General Settings

These are the core settings for controlling deepeval's behavior, file paths, and run identifiers.

| Variable | Values | Effect |
| --- | --- | --- |
| CONFIDENT_API_KEY | string / unset | Logs in to Confident AI. Enables tracing and observability, and automatically uploads test results to the cloud when an evaluation completes. |
| DEEPEVAL_DISABLE_DOTENV | 1 / unset | Disables dotenv autoload at import. |
| ENV_DIR_PATH | path / unset | Directory containing .env files (defaults to CWD when unset). |
| APP_ENV | string / unset | When set, loads .env.{APP_ENV} between .env and .env.local. |
| DEEPEVAL_DISABLE_LEGACY_KEYFILE | 1 / unset | Disables reading the legacy .deepeval/.deepeval JSON keystore into the environment. |
| DEEPEVAL_DEFAULT_SAVE | dotenv[:path] / unset | Default persistence target for deepeval set-* --save when --save is omitted. |
| DEEPEVAL_FILE_SYSTEM | READ_ONLY / unset | Restricts file writes in constrained environments. |
| DEEPEVAL_RESULTS_FOLDER | path / unset | Exports a timestamped JSON of the latest test run into this directory (created if needed). |
| DEEPEVAL_IDENTIFIER | string / unset | Default identifier for runs (same idea as deepeval test run -id ...). |
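
For instance, a CI run might combine a few of these (values are placeholders):

```bash
export CONFIDENT_API_KEY="<your-confident-api-key>"   # log in to Confident AI
export DEEPEVAL_RESULTS_FOLDER="./eval-results"       # export a timestamped JSON of the latest run here
export DEEPEVAL_IDENTIFIER="nightly-regression"       # identifier for this run
```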

Display / Truncation

These settings control output verbosity and text truncation in logs and displays.

| Variable | Values | Effect |
| --- | --- | --- |
| DEEPEVAL_MAXLEN_TINY | int | Max length used for "tiny" shorteners (default: 40). |
| DEEPEVAL_MAXLEN_SHORT | int | Max length used for "short" shorteners (default: 60). |
| DEEPEVAL_MAXLEN_MEDIUM | int | Max length used for "medium" shorteners (default: 120). |
| DEEPEVAL_MAXLEN_LONG | int | Max length used for "long" shorteners (default: 240). |
| DEEPEVAL_SHORTEN_DEFAULT_MAXLEN | int / unset | Overrides the default max length used by shorten(...) (falls back to DEEPEVAL_MAXLEN_LONG when unset). |
| DEEPEVAL_SHORTEN_SUFFIX | string | Suffix used by shorten(...) (default: "..."). |
| DEEPEVAL_VERBOSE_MODE | 1 / unset | Enables verbose mode globally (where supported). |
| DEEPEVAL_LOG_STACK_TRACES | 1 / unset | Logs stack traces for errors (where supported). |
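
When debugging, you might loosen truncation and turn on verbose output (values are illustrative):

```bash
export DEEPEVAL_MAXLEN_LONG=1000      # show more text in "long" shorteners
export DEEPEVAL_VERBOSE_MODE=1        # verbose mode globally (where supported)
export DEEPEVAL_LOG_STACK_TRACES=1    # include stack traces for errors
```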

Retry / Backoff Tuning

These settings control retry and backoff behavior for API calls.

| Variable | Type | Default | Notes |
| --- | --- | --- | --- |
| DEEPEVAL_RETRY_MAX_ATTEMPTS | int | 2 | Total attempts (1 retry). |
| DEEPEVAL_RETRY_INITIAL_SECONDS | float | 1.0 | Initial backoff. |
| DEEPEVAL_RETRY_EXP_BASE | float | 2.0 | Exponential base (≥ 1). |
| DEEPEVAL_RETRY_JITTER | float | 2.0 | Random jitter added per retry. |
| DEEPEVAL_RETRY_CAP_SECONDS | float | 5.0 | Max sleep between retries. |
| DEEPEVAL_SDK_RETRY_PROVIDERS | list | unset | Provider slugs for which retries are delegated to provider SDKs (supports ["*"]). |
| DEEPEVAL_RETRY_BEFORE_LOG_LEVEL | int | unset | Log level for "before retry" logs (defaults to LOG_LEVEL if set, else INFO). |
| DEEPEVAL_RETRY_AFTER_LOG_LEVEL | int | unset | Log level for "after retry" logs (defaults to ERROR). |
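
As a rough sketch of how these knobs typically interact (exponential backoff with jitter, capped; the exact formula deepeval uses may differ):

```python
import random

def backoff_seconds(
    retry_number: int,
    initial: float = 1.0,   # DEEPEVAL_RETRY_INITIAL_SECONDS
    base: float = 2.0,      # DEEPEVAL_RETRY_EXP_BASE
    jitter: float = 2.0,    # DEEPEVAL_RETRY_JITTER
    cap: float = 5.0,       # DEEPEVAL_RETRY_CAP_SECONDS
) -> float:
    # Illustrative only: grow the delay exponentially, add random jitter,
    # then cap the sleep between retries.
    delay = initial * (base ** retry_number) + random.uniform(0, jitter)
    return min(delay, cap)
```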

Timeouts / Concurrency

These options let you tune timeout limits and concurrency for parallel execution and provider calls.

| Variable | Values | Effect |
| --- | --- | --- |
| DEEPEVAL_MAX_CONCURRENT_DOC_PROCESSING | int | Max concurrent document processing tasks (default: 2). |
| DEEPEVAL_TIMEOUT_THREAD_LIMIT | int | Max threads used by the timeout machinery (default: 128). |
| DEEPEVAL_TIMEOUT_SEMAPHORE_WARN_AFTER_SECONDS | float | Warns if acquiring the timeout semaphore takes too long (default: 5.0). |
| DEEPEVAL_PER_ATTEMPT_TIMEOUT_SECONDS_OVERRIDE | float / unset | Per-attempt timeout override for provider calls (preferred override key). |
| DEEPEVAL_PER_TASK_TIMEOUT_SECONDS_OVERRIDE | float / unset | Outer timeout budget override for a metric/test case (preferred override key). |
| DEEPEVAL_TASK_GATHER_BUFFER_SECONDS_OVERRIDE | float / unset | Overrides the extra buffer time added to gather/drain after tasks complete. |
| DEEPEVAL_PER_ATTEMPT_TIMEOUT_SECONDS | float (computed) | Read-only computed value. To override, set DEEPEVAL_PER_ATTEMPT_TIMEOUT_SECONDS_OVERRIDE. |
| DEEPEVAL_PER_TASK_TIMEOUT_SECONDS | float (computed) | Read-only computed value. To override, set DEEPEVAL_PER_TASK_TIMEOUT_SECONDS_OVERRIDE. |
| DEEPEVAL_TASK_GATHER_BUFFER_SECONDS | float (computed) | Read-only computed value. To override, set DEEPEVAL_TASK_GATHER_BUFFER_SECONDS_OVERRIDE. |
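
For example, to give slow self-hosted models more headroom, set the override keys rather than the computed values (numbers are illustrative):

```bash
export DEEPEVAL_PER_ATTEMPT_TIMEOUT_SECONDS_OVERRIDE=120   # per provider call
export DEEPEVAL_PER_TASK_TIMEOUT_SECONDS_OVERRIDE=600      # per metric/test case
```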

Telemetry / Debug

These flags let you enable debug mode, opt out of telemetry, and control diagnostic logging.

| Variable | Values | Effect |
| --- | --- | --- |
| DEEPEVAL_DEBUG_ASYNC | 1 / unset | Enables extra async debugging (where supported). |
| DEEPEVAL_TELEMETRY_OPT_OUT | 1 / unset | Opts out of telemetry (telemetry is enabled when unset). |
| DEEPEVAL_UPDATE_WARNING_OPT_IN | 1 / unset | Opts in to update warnings (where supported). |
| DEEPEVAL_GRPC_LOGGING | 1 / unset | Enables extra gRPC logging. |
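
A typical CI setup might look like this:

```bash
export DEEPEVAL_TELEMETRY_OPT_OUT=1   # disable telemetry
export DEEPEVAL_GRPC_LOGGING=1        # extra gRPC logging for diagnostics
```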

Model Settings

You can configure model providers by setting a combination of environment variables (API keys, model names, provider flags, etc.). However, we recommend using the CLI commands instead, which will set these variables for you.

info

For example, running:

```bash
deepeval set-openai --api-key=<key> --model=gpt-4o
```

automatically sets OPENAI_API_KEY, OPENAI_MODEL_NAME, and USE_OPENAI_MODEL=1.

Explicit constructor arguments (e.g. OpenAIModel(api_key=...)) always take precedence over environment variables. You can also set TEMPERATURE to provide a default temperature for all model instances.
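
A minimal sketch of that precedence, assuming the deepeval.models import path (values are placeholders):

```python
import os
from deepeval.models import OpenAIModel  # import path assumed

os.environ["OPENAI_API_KEY"] = "<key-from-env>"  # normally set via the CLI or a dotenv file
os.environ["TEMPERATURE"] = "0"                  # default temperature for model instances

# The explicit api_key argument takes precedence over OPENAI_API_KEY.
model = OpenAIModel(api_key="<explicit-key>")
```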

Variable Options

When set to 1, USE_{PROVIDER}_MODEL (e.g. USE_OPENAI_MODEL) tells deepeval which provider to use for LLM-as-a-judge metrics when no model is explicitly passed.
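
For example, to make Gemini the default judge (variable names are from the Gemini table below; the model name is a placeholder, and in practice the CLI sets these for you):

```bash
export USE_GEMINI_MODEL=1
export GOOGLE_API_KEY="<your-google-api-key>"
export GEMINI_MODEL_NAME="<a-gemini-model-name>"
```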

Each provider also has its own set of variables for API keys, model names, and other provider-specific options. The sections below list the full set for each provider.

caution

Remember, do not modify these variables manually except for debugging purposes. Use the CLI instead; deepeval takes care of managing these variables for you.

AWS / Amazon Bedrock

If AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are not set, the AWS SDK default credentials chain is used.

| Variable | Values | Effect |
| --- | --- | --- |
| AWS_ACCESS_KEY_ID | string / unset | Optional AWS access key ID for authentication. |
| AWS_SECRET_ACCESS_KEY | string / unset | Optional AWS secret access key for authentication. |
| USE_AWS_BEDROCK_MODEL | 1 / unset | Prefer Bedrock as the default LLM provider (where applicable). |
| AWS_BEDROCK_MODEL_NAME | string / unset | Bedrock model ID (e.g. anthropic.claude-3-opus-20240229-v1:0). |
| AWS_BEDROCK_REGION | string / unset | AWS region (e.g. us-east-1). |
| AWS_BEDROCK_COST_PER_INPUT_TOKEN | float / unset | Optional input-token cost used for cost reporting. |
| AWS_BEDROCK_COST_PER_OUTPUT_TOKEN | float / unset | Optional output-token cost used for cost reporting. |
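
For illustration only (in practice the CLI manages these), a Bedrock setup relying on the AWS default credentials chain would amount to:

```bash
# No AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY set: the default credentials chain is used.
export USE_AWS_BEDROCK_MODEL=1
export AWS_BEDROCK_MODEL_NAME="anthropic.claude-3-opus-20240229-v1:0"
export AWS_BEDROCK_REGION="us-east-1"
```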

Anthropic

| Variable | Values | Effect |
| --- | --- | --- |
| ANTHROPIC_API_KEY | string / unset | Anthropic API key. |
| ANTHROPIC_MODEL_NAME | string / unset | Optional default Anthropic model name. |
| ANTHROPIC_COST_PER_INPUT_TOKEN | float / unset | Optional input-token cost used for cost reporting. |
| ANTHROPIC_COST_PER_OUTPUT_TOKEN | float / unset | Optional output-token cost used for cost reporting. |

Azure OpenAI

| Variable | Values | Effect |
| --- | --- | --- |
| USE_AZURE_OPENAI | 1 / unset | Prefer Azure OpenAI as the default LLM provider (where applicable). |
| AZURE_OPENAI_API_KEY | string / unset | Azure OpenAI API key. |
| AZURE_OPENAI_ENDPOINT | string / unset | Azure OpenAI endpoint URL. |
| OPENAI_API_VERSION | string / unset | Azure OpenAI API version. |
| AZURE_DEPLOYMENT_NAME | string / unset | Azure deployment name. |
| AZURE_MODEL_NAME | string / unset | Optional Azure model name (for metadata / reporting). |
| AZURE_MODEL_VERSION | string / unset | Optional Azure model version (for metadata / reporting). |

OpenAI

| Variable | Values | Effect |
| --- | --- | --- |
| USE_OPENAI_MODEL | 1 / unset | Prefer OpenAI as the default LLM provider (where applicable). |
| OPENAI_API_KEY | string / unset | OpenAI API key. |
| OPENAI_MODEL_NAME | string / unset | Optional default OpenAI model name. |
| OPENAI_COST_PER_INPUT_TOKEN | float / unset | Optional input-token cost used for cost reporting. |
| OPENAI_COST_PER_OUTPUT_TOKEN | float / unset | Optional output-token cost used for cost reporting. |

DeepSeek

| Variable | Values | Effect |
| --- | --- | --- |
| USE_DEEPSEEK_MODEL | 1 / unset | Prefer DeepSeek as the default LLM provider (where applicable). |
| DEEPSEEK_API_KEY | string / unset | DeepSeek API key. |
| DEEPSEEK_MODEL_NAME | string / unset | Optional default DeepSeek model name. |
| DEEPSEEK_COST_PER_INPUT_TOKEN | float / unset | Optional input-token cost used for cost reporting. |
| DEEPSEEK_COST_PER_OUTPUT_TOKEN | float / unset | Optional output-token cost used for cost reporting. |

Gemini

| Variable | Values | Effect |
| --- | --- | --- |
| USE_GEMINI_MODEL | 1 / unset | Prefer Gemini as the default LLM provider (where applicable). |
| GOOGLE_API_KEY | string / unset | Google API key. |
| GEMINI_MODEL_NAME | string / unset | Optional default Gemini model name. |
| GOOGLE_GENAI_USE_VERTEXAI | 1 / 0 / unset | If set, use Vertex AI via google-genai (where supported). |
| GOOGLE_CLOUD_PROJECT | string / unset | Optional GCP project (Vertex AI). |
| GOOGLE_CLOUD_LOCATION | string / unset | Optional GCP location/region (Vertex AI). |
| GOOGLE_SERVICE_ACCOUNT_KEY | string / unset | Optional service account key (Vertex AI). |
| VERTEX_AI_MODEL_NAME | string / unset | Optional Vertex AI model name. |

Grok

| Variable | Values | Effect |
| --- | --- | --- |
| USE_GROK_MODEL | 1 / unset | Prefer Grok as the default LLM provider (where applicable). |
| GROK_API_KEY | string / unset | Grok API key. |
| GROK_MODEL_NAME | string / unset | Optional default Grok model name. |
| GROK_COST_PER_INPUT_TOKEN | float / unset | Optional input-token cost used for cost reporting. |
| GROK_COST_PER_OUTPUT_TOKEN | float / unset | Optional output-token cost used for cost reporting. |

LiteLLM

| Variable | Values | Effect |
| --- | --- | --- |
| USE_LITELLM | 1 / unset | Prefer LiteLLM as the default LLM provider (where applicable). |
| LITELLM_API_KEY | string / unset | Optional API key passed to LiteLLM. |
| LITELLM_MODEL_NAME | string / unset | Default LiteLLM model name. |
| LITELLM_API_BASE | string / unset | Optional base URL for the LiteLLM endpoint. |
| LITELLM_PROXY_API_BASE | string / unset | Optional proxy base URL (if using a proxy). |
| LITELLM_PROXY_API_KEY | string / unset | Optional proxy API key (if using a proxy). |

Local Model

| Variable | Values | Effect |
| --- | --- | --- |
| USE_LOCAL_MODEL | 1 / unset | Prefer the local model adapter as the default LLM provider (where applicable). |
| LOCAL_MODEL_API_KEY | string / unset | Optional API key for the local model endpoint (if required). |
| LOCAL_MODEL_NAME | string / unset | Optional default local model name. |
| LOCAL_MODEL_BASE_URL | string / unset | Base URL for the local model endpoint. |
| LOCAL_MODEL_FORMAT | string / unset | Optional format hint for the local model integration. |
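
For illustration only (in practice the CLI manages these), pointing deepeval at an OpenAI-compatible local endpoint would amount to something like:

```bash
export USE_LOCAL_MODEL=1
export LOCAL_MODEL_BASE_URL="http://localhost:8000/v1"   # placeholder endpoint
export LOCAL_MODEL_NAME="<your-local-model>"
```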

Kimi (Moonshot)

| Variable | Values | Effect |
| --- | --- | --- |
| USE_MOONSHOT_MODEL | 1 / unset | Prefer Moonshot as the default LLM provider (where applicable). |
| MOONSHOT_API_KEY | string / unset | Moonshot API key. |
| MOONSHOT_MODEL_NAME | string / unset | Optional default Moonshot model name. |
| MOONSHOT_COST_PER_INPUT_TOKEN | float / unset | Optional input-token cost used for cost reporting. |
| MOONSHOT_COST_PER_OUTPUT_TOKEN | float / unset | Optional output-token cost used for cost reporting. |

Ollama

| Variable | Values | Effect |
| --- | --- | --- |
| OLLAMA_MODEL_NAME | string / unset | Optional default Ollama model name. |

Portkey

| Variable | Values | Effect |
| --- | --- | --- |
| USE_PORTKEY_MODEL | 1 / unset | Prefer Portkey as the default LLM provider (where applicable). |
| PORTKEY_API_KEY | string / unset | Portkey API key. |
| PORTKEY_MODEL_NAME | string / unset | Optional default model name passed to Portkey. |
| PORTKEY_BASE_URL | string / unset | Optional Portkey base URL. |
| PORTKEY_PROVIDER_NAME | string / unset | Optional provider name (Portkey routing). |

Embeddings

| Variable | Values | Effect |
| --- | --- | --- |
| USE_AZURE_OPENAI_EMBEDDING | 1 / unset | Prefer Azure OpenAI embeddings as the default embeddings provider (where applicable). |
| AZURE_EMBEDDING_DEPLOYMENT_NAME | string / unset | Azure embedding deployment name. |
| USE_LOCAL_EMBEDDINGS | 1 / unset | Prefer local embeddings as the default embeddings provider (where applicable). |
| LOCAL_EMBEDDING_API_KEY | string / unset | Optional API key for the local embeddings endpoint (if required). |
| LOCAL_EMBEDDING_MODEL_NAME | string / unset | Optional default local embedding model name. |
| LOCAL_EMBEDDING_BASE_URL | string / unset | Base URL for the local embeddings endpoint. |