feat: add LiteLLM provider with auto model discovery #14202
Closed
balcsida wants to merge 4 commits into anomalyco:dev from
Conversation
Contributor
The following comment was made by an LLM, it may be inaccurate:

Duplicate PR Found

PR #13896: "feat(opencode): add auto loading models for litellm providers"

This appears to be a duplicate or closely related PR addressing the same feature: automatic model loading for LiteLLM providers. Both PRs implement first-class LiteLLM support with auto-discovery. You should check whether #13896 is still open or already merged and coordinate with that work to avoid duplication.
Contributor
This PR doesn't fully meet our contributing guidelines and PR template. What needs to be fixed:
Please edit this PR description to address the above within 2 hours, or it will be automatically closed. If you believe this was flagged incorrectly, please let a maintainer know.
This was referenced Feb 18, 2026
Closes #13891
What
Adds native LiteLLM provider support with automatic model discovery from the proxy's /model/info endpoint. Previously, users had to manually define every model in opencode.json when using a LiteLLM proxy. Now, setting LITELLM_API_KEY and LITELLM_HOST is enough — models are discovered automatically at startup.

Changes
- litellm.ts: Discovery module that fetches /model/info, maps pricing from per-token to per-million-token, extracts capabilities (reasoning, vision, PDF, audio, video, tool calling), filters wildcards, and infers interleaved thinking for Claude models (a hedged sketch of this flow follows the list)
- provider.ts: Seeds the litellm provider when the env vars are detected; adds a custom loader that resolves host/key, calls discovery, and injects models (user config takes precedence)
- transform.ts: LiteLLM-specific reasoning variants — Claude/Anthropic models get thinking budget variants, others get reasoningEffort. Prevents false-positive reasoning param injection for aliased models
- llm.ts: Adds an explicit providerID === "litellm" check to isLiteLLMProxy
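For reviewers, a minimal sketch of the discovery flow described above. The /model/info field names (model_name, input_cost_per_token, supports_* flags), the discoverModels function, and the output shape are assumptions for illustration, not the PR's actual code:

```ts
// Hypothetical sketch of LiteLLM model discovery; /model/info field names are
// assumed from LiteLLM's proxy docs and may differ from the PR's litellm.ts.
interface LiteLLMModelEntry {
  model_name: string
  model_info?: {
    input_cost_per_token?: number
    output_cost_per_token?: number
    supports_vision?: boolean
    supports_function_calling?: boolean
    supports_reasoning?: boolean
  }
}

interface DiscoveredModel {
  id: string
  cost: { input: number; output: number } // USD per million tokens
  capabilities: { vision: boolean; toolCall: boolean; reasoning: boolean }
}

export async function discoverModels(host: string, apiKey: string): Promise<DiscoveredModel[]> {
  const res = await fetch(new URL("/model/info", host), {
    headers: { Authorization: `Bearer ${apiKey}` },
  })
  if (!res.ok) throw new Error(`LiteLLM /model/info failed: ${res.status}`)
  const body = (await res.json()) as { data: LiteLLMModelEntry[] }

  return body.data
    // Wildcard entries (e.g. "openai/*") cannot be listed as concrete models.
    .filter((m) => !m.model_name.includes("*"))
    .map((m) => ({
      id: m.model_name,
      cost: {
        // LiteLLM reports cost per token; convert to cost per million tokens.
        input: (m.model_info?.input_cost_per_token ?? 0) * 1_000_000,
        output: (m.model_info?.output_cost_per_token ?? 0) * 1_000_000,
      },
      capabilities: {
        vision: m.model_info?.supports_vision ?? false,
        toolCall: m.model_info?.supports_function_calling ?? false,
        reasoning: m.model_info?.supports_reasoning ?? false,
      },
    }))
}
```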
Why not models.dev?

LiteLLM is a self-hosted proxy where available models are user-configured and vary per deployment. Unlike static providers (OpenRouter, Helicone), the model list can only be known at runtime. A models.dev entry can't represent this — runtime discovery is required.
Env vars
| Variable | Default |
| --- | --- |
| LITELLM_API_KEY | (none) |
| LITELLM_HOST | http://localhost:4000 |
| LITELLM_BASE_URL | LITELLM_HOST |
| LITELLM_CUSTOM_HEADERS | {} |
| LITELLM_TIMEOUT | 5000 |
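As a rough sketch of how these variables could resolve, assuming LITELLM_BASE_URL falls back to LITELLM_HOST as the table suggests; the function name and exact precedence are illustrative, and the PR's provider.ts may do this differently:

```ts
// Hypothetical resolution of the LiteLLM env vars; mirrors the defaults table
// above but is not a copy of the PR's loader.
export interface LiteLLMEnv {
  apiKey: string
  baseUrl: string
  customHeaders: Record<string, string>
  timeoutMs: number
}

export function resolveLiteLLMEnv(env: NodeJS.ProcessEnv = process.env): LiteLLMEnv | undefined {
  const apiKey = env.LITELLM_API_KEY
  if (!apiKey) return undefined // the provider is only seeded when the key is present

  const host = env.LITELLM_HOST ?? "http://localhost:4000"
  return {
    apiKey,
    baseUrl: env.LITELLM_BASE_URL ?? host,
    customHeaders: JSON.parse(env.LITELLM_CUSTOM_HEADERS ?? "{}"),
    timeoutMs: Number(env.LITELLM_TIMEOUT ?? 5000),
  }
}
```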
Verification

- LITELLM_API_KEY + LITELLM_HOST pointing to a running proxy
- opencode — LiteLLM models appear in the model picker
- Aliased model (o3-custom → Mistral) — reasoning params NOT injected
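To reproduce the first two checks without launching opencode, a throwaway script along these lines lists the models the proxy exposes; it is not part of the PR, and the response shape is assumed:

```ts
// Quick manual check (not part of the PR): print the model names the LiteLLM
// proxy exposes, using the same env vars the provider reads.
async function main() {
  const host = process.env.LITELLM_HOST ?? "http://localhost:4000"
  const res = await fetch(new URL("/model/info", host), {
    headers: { Authorization: `Bearer ${process.env.LITELLM_API_KEY ?? ""}` },
  })
  if (!res.ok) throw new Error(`HTTP ${res.status}`)
  const { data } = (await res.json()) as { data: { model_name: string }[] }
  console.log(data.map((m) => m.model_name).join("\n"))
}

main().catch((err) => {
  console.error(err)
  process.exit(1)
})
```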