LM Studio
deepeval supports running evaluations using local LLMs that expose OpenAI-compatible APIs. One such provider is LM Studio, a user-friendly desktop app for running models locally.
Command Line
To start using LM Studio with deepeval, follow these steps:
- Make sure LM Studio's local server is running. The typical base URL for LM Studio is: http://localhost:1234/v1/.
- Run the following command in your terminal to connect deepeval to LM Studio:
deepeval set-local-model \
--model=<model_name> \
--base-url="http://localhost:1234/v1/"
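Once the local model is set, LM Studio serves an OpenAI-compatible chat completions endpoint at the base URL above. As a quick sanity check that your endpoint is reachable and responding, you can send a standard chat completion request yourself. The snippet below is a minimal sketch using only the Python standard library; the model name is a placeholder you should replace with the model loaded in LM Studio.

```python
import json
import urllib.request

BASE_URL = "http://localhost:1234/v1"  # same base URL passed to deepeval

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible /chat/completions request for LM Studio."""
    payload = {
        "model": model,  # placeholder: use the model name loaded in LM Studio
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # local endpoint needs no real key; a placeholder keeps clients happy
            "Authorization": "Bearer placeholder",
        },
        method="POST",
    )

req = build_chat_request("<model_name>", "Say hello in one word.")
# To actually send it (requires LM Studio's server to be running):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

If the request succeeds, deepeval should be able to reach the same endpoint with the settings you configured above.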
tip
If your local endpoint doesn't require authentication, enter any placeholder string when prompted for an API key.
Persisting settings
You can persist CLI settings with the optional --save flag.
See Flags and Configs -> Persisting CLI settings.
Reverting to OpenAI
To switch back to OpenAI's hosted models, run:
deepeval unset-local-model
info
For more help on enabling LM Studio’s server or configuring models, check out the LM Studio docs.