feat(opencode): add LiteLLM provider with auto model discovery #14468
balcsida wants to merge 17 commits into anomalyco:dev
Conversation
The following comment was made by an LLM, it may be inaccurate: Based on my search, I found two related PRs that are NOT duplicates but are closely related:
These PRs appear to be related work in the same area but are separate features. PR #14468 (the current PR) is a distinct, more comprehensive implementation of the LiteLLM provider with auto-discovery, and it references these earlier related PRs rather than duplicating them. No duplicate PRs found.
Force-pushed from d4a5701 to 1e2b913
Force-pushed from e074b91 to 54cedd2
Force-pushed from 54cedd2 to d3e6960
lgtm, but a test fails. Check e2e; this test fails: Did you try testing locally? `bun test:e2e:local`
Issue for this PR
Closes #13891
Type of change
What does this PR do?
Adds a native `litellm` provider that auto-discovers models from a LiteLLM proxy at startup. Previously users had to manually define every model in `opencode.json`; now setting `LITELLM_API_KEY` and `LITELLM_HOST` is enough.

**Discovery:** Fetches `/model/info` for rich metadata (pricing, limits, capabilities). Falls back to the standard `/models` endpoint for older LiteLLM versions or non-LiteLLM OpenAI-compatible proxies.
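A rough sketch of that fallback chain, for reviewers skimming the diff. The function and type names below are illustrative, not the actual `litellm.ts` API, and the response shapes are my assumption of what the proxy returns:

```ts
// Hypothetical discovery sketch: prefer /model/info, fall back to /models.
interface DiscoveredModel {
  id: string
  // Extra metadata is only available from /model/info.
  inputCostPerToken?: number
  outputCostPerToken?: number
  maxInputTokens?: number
  supportedOpenAIParams?: string[]
}

async function discoverModels(baseURL: string, apiKey: string): Promise<DiscoveredModel[]> {
  const headers = { Authorization: `Bearer ${apiKey}` }

  // 1. Try the LiteLLM-specific endpoint with rich metadata.
  const info = await fetch(new URL("/model/info", baseURL), { headers }).catch(() => undefined)
  if (info?.ok) {
    const body = (await info.json()) as {
      data: { model_name: string; model_info?: Record<string, any> }[]
    }
    return body.data.map((entry) => ({
      id: entry.model_name,
      inputCostPerToken: entry.model_info?.input_cost_per_token,
      outputCostPerToken: entry.model_info?.output_cost_per_token,
      maxInputTokens: entry.model_info?.max_input_tokens,
      supportedOpenAIParams: entry.model_info?.supported_openai_params,
    }))
  }

  // 2. Fall back to the standard OpenAI-compatible /models endpoint
  //    (older LiteLLM or non-LiteLLM proxies): ids only, defaults elsewhere.
  const models = await fetch(new URL("/models", baseURL), { headers })
  if (!models.ok) throw new Error(`model discovery failed: ${models.status}`)
  const body = (await models.json()) as { data: { id: string }[] }
  return body.data.map((entry) => ({ id: entry.id }))
}
```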
**What gets mapped from `/model/info`:** `supported_openai_params`

**Reasoning transforms:** Claude models behind LiteLLM get `thinking` budget variants; other models get `reasoningEffort`. This prevents false-positive reasoning-param injection for aliased models (e.g. `o3-custom` → Mistral).
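A minimal sketch of that variant logic, assuming the model ID and `supported_openai_params` list are available from discovery (names and budget values are illustrative; the real `transform.ts` may differ):

```ts
// Hypothetical sketch: only attach reasoning params to models we can positively identify.
type ReasoningVariant =
  | { type: "thinking"; budgetTokens: number } // Anthropic-style thinking budget
  | { type: "reasoningEffort"; effort: "low" | "medium" | "high" }

function reasoningVariants(modelID: string, supportedParams: string[] = []): ReasoningVariant[] {
  // Claude models behind LiteLLM that advertise `thinking` get budget variants.
  if (/claude/i.test(modelID) && supportedParams.includes("thinking")) {
    return [
      { type: "thinking", budgetTokens: 4_000 },
      { type: "thinking", budgetTokens: 16_000 },
    ]
  }
  // Other models only get reasoningEffort when the proxy says they support it.
  // An aliased name like "o3-custom" routed to Mistral won't match, so nothing is injected.
  if (supportedParams.includes("reasoning_effort")) {
    return [
      { type: "reasoningEffort", effort: "low" },
      { type: "reasoningEffort", effort: "high" },
    ]
  }
  return []
}
```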
**Files changed:**

- `litellm.ts` (new): Discovery module that fetches, parses, and maps model metadata
- `provider.ts`: Seeds the `litellm` provider from env vars; a custom loader calls discovery and injects models (user config takes precedence)
- `transform.ts`: LiteLLM-specific reasoning variant logic
- `llm.ts`: Adds an explicit `providerID === "litellm"` check to proxy detection

**Environment variables:**

| Variable | Default |
| --- | --- |
| `LITELLM_API_KEY` | (none) |
| `LITELLM_HOST` | `http://localhost:4000` |
| `LITELLM_BASE_URL` | value of `LITELLM_HOST` |
| `LITELLM_CUSTOM_HEADERS` | `{}` |
| `LITELLM_TIMEOUT` | `5000` |

Builds on ideas from #13896 (fallback endpoint, over-200k pricing, temperature detection) and #14277 (clean fallback chain). Supersedes my earlier #14202, which was auto-closed for not following the PR template.
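For illustration only, seeding the provider from those variables could look roughly like this; the defaults come from the table above, while the function name and the precedence of `LITELLM_BASE_URL` over `LITELLM_HOST` are assumptions, not necessarily what `provider.ts` does:

```ts
// Hypothetical sketch of reading the LiteLLM env vars with their documented defaults.
interface LiteLLMConfig {
  apiKey: string
  baseURL: string
  customHeaders: Record<string, string>
  timeoutMs: number
}

function litellmConfigFromEnv(
  env: Record<string, string | undefined> = process.env,
): LiteLLMConfig | undefined {
  const apiKey = env.LITELLM_API_KEY
  if (!apiKey) return undefined // provider only activates when the key is set

  return {
    apiKey,
    // Assumed precedence: LITELLM_BASE_URL, then LITELLM_HOST, then the documented default.
    baseURL: env.LITELLM_BASE_URL ?? env.LITELLM_HOST ?? "http://localhost:4000",
    customHeaders: JSON.parse(env.LITELLM_CUSTOM_HEADERS ?? "{}"),
    timeoutMs: Number(env.LITELLM_TIMEOUT ?? 5000),
  }
}
```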
How did you verify your code works?
- Set `LITELLM_API_KEY` + `LITELLM_HOST` pointing to a running LiteLLM proxy
- Ran `opencode`: LiteLLM models appear in the model picker
- Aliased model (`o3-custom` → Mistral): reasoning params NOT injected
- Claude models: `thinking` variants work
- Proxy without `/model/info`: falls back to `/models` with sensible defaults
Screenshots / recordings
N/A — no UI changes
Checklist