
LM Studio

deepeval supports running evaluations using local LLMs that expose OpenAI-compatible APIs. One such provider is LM Studio, a user-friendly desktop app for running models locally.
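Here, "OpenAI-compatible" means the local server accepts the same chat completions request shape as OpenAI's hosted API, so deepeval can talk to it through a standard OpenAI client. A minimal sketch of that request shape, built with only the standard library (the model name and message content are illustrative placeholders, not values from these docs):

```python
import json

# LM Studio's default local server address
BASE_URL = "http://localhost:1234/v1"

# The chat completions payload an OpenAI-compatible server expects.
# "my-local-model" stands in for whatever model is loaded in LM Studio.
payload = {
    "model": "my-local-model",
    "messages": [
        {"role": "user", "content": "Summarize retrieval-augmented generation."}
    ],
    "temperature": 0.0,
}

# deepeval (like any OpenAI client) POSTs this JSON to the chat completions route:
endpoint = f"{BASE_URL}/chat/completions"
body = json.dumps(payload)
print(endpoint)
```

Any model served at this endpoint can therefore be swapped in for OpenAI's hosted models without changing evaluation code.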

Command Line

To start using LM Studio with deepeval, follow these steps:

  1. Make sure LM Studio is running. The typical base URL for LM Studio is: http://localhost:1234/v1/.
  2. Run the following command in your terminal to connect deepeval to LM Studio:
     deepeval set-local-model --model-name=<model_name> \
         --base-url="http://localhost:1234/v1/" \
         --api-key=<api-key>
Tip: Use any placeholder string for --api-key if your local endpoint doesn't require authentication.

Persisting settings

You can persist CLI settings with the optional --save flag. See Flags and Configs -> Persisting CLI settings.
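For example, the flag can be appended to the command from the previous section (a sketch; `<model_name>` is a placeholder, the API key is an arbitrary string, and the exact values `--save` accepts are described under Flags and Configs):

```shell
deepeval set-local-model --model-name=<model_name> \
    --base-url="http://localhost:1234/v1/" \
    --api-key="lm-studio" \
    --save
```

Without `--save`, the setting applies only until it is changed or unset.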

Reverting to OpenAI

To switch back to using OpenAI’s hosted models, run:

deepeval unset-local-model
Info: For more help on enabling LM Studio's server or configuring models, check out the LM Studio docs.
