
feat: add LiteLLM provider with auto model discovery #14202

Closed

balcsida wants to merge 4 commits into anomalyco:dev from balcsida:feat/litellm-support

Conversation


balcsida commented on Feb 18, 2026

Closes #13891

What

Adds native LiteLLM provider support with automatic model discovery from the proxy's /model/info endpoint. Previously, users had to manually define every model in opencode.json when using a LiteLLM proxy. Now, setting LITELLM_API_KEY and LITELLM_HOST is enough — models are discovered automatically at startup.

Changes

  • New litellm.ts: Discovery module that fetches /model/info, maps pricing from per-token to per-million-token, extracts capabilities (reasoning, vision, PDF, audio, video, tool calling), filters wildcard entries, and infers interleaved thinking for Claude models (see the sketch after this list)
  • provider.ts: Seeds litellm provider when env vars are detected; adds custom loader that resolves host/key, calls discovery, and injects models (user config takes precedence)
  • transform.ts: LiteLLM-specific reasoning variants — Claude/Anthropic models get thinking budget variants, others get reasoningEffort. Prevents false-positive reasoning param injection for aliased models
  • llm.ts: Adds explicit providerID === "litellm" to isLiteLLMProxy check
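
For readers unfamiliar with the proxy's /model/info endpoint, here is a minimal sketch of the kind of mapping the discovery module performs. The names (`discoverModels`, `DiscoveredModel`) and the exact response fields are assumptions based on LiteLLM's documented proxy API, not the PR's actual code:

```ts
// litellm-discovery-sketch.ts
// Hypothetical sketch of the /model/info -> model-list mapping described above.
// Field names follow LiteLLM's documented response shape; treat them as assumptions.

interface ModelInfoEntry {
  model_name: string
  model_info?: {
    input_cost_per_token?: number
    output_cost_per_token?: number
    supports_vision?: boolean
    supports_function_calling?: boolean
    supports_reasoning?: boolean
  }
}

interface DiscoveredModel {
  id: string
  cost: { input: number; output: number } // USD per million tokens
  capabilities: { vision: boolean; toolCall: boolean; reasoning: boolean }
  interleavedThinking: boolean
}

export async function discoverModels(host: string, apiKey: string): Promise<DiscoveredModel[]> {
  const res = await fetch(new URL("/model/info", host), {
    headers: { Authorization: `Bearer ${apiKey}` },
  })
  if (!res.ok) throw new Error(`LiteLLM model discovery failed: ${res.status}`)
  const body = (await res.json()) as { data: ModelInfoEntry[] }

  return (
    body.data
      // skip wildcard entries such as "anthropic/*"
      .filter((entry) => !entry.model_name.includes("*"))
      .map((entry) => {
        const info = entry.model_info
        return {
          id: entry.model_name,
          // LiteLLM reports cost per token; convert to per million tokens
          cost: {
            input: (info?.input_cost_per_token ?? 0) * 1_000_000,
            output: (info?.output_cost_per_token ?? 0) * 1_000_000,
          },
          capabilities: {
            vision: info?.supports_vision ?? false,
            toolCall: info?.supports_function_calling ?? false,
            reasoning: info?.supports_reasoning ?? false,
          },
          // heuristic from the PR description: Claude models get interleaved thinking
          interleavedThinking: /claude/i.test(entry.model_name),
        }
      })
  )
}
```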

Why not models.dev?

LiteLLM is a self-hosted proxy where available models are user-configured and vary per deployment. Unlike static providers (OpenRouter, Helicone), the model list can only be known at runtime. A models.dev entry can't represent this — runtime discovery is required.

Env vars

| Variable | Description | Default |
|---|---|---|
| LITELLM_API_KEY | API key for the proxy | |
| LITELLM_HOST | Base URL of the proxy | http://localhost:4000 |
| LITELLM_BASE_URL | Alias for LITELLM_HOST | |
| LITELLM_CUSTOM_HEADERS | JSON string of custom headers | {} |
| LITELLM_TIMEOUT | Discovery timeout in ms | 5000 |
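
As a rough illustration (not the PR's implementation), resolving the variables in the table above might look like the following; `resolveLiteLLMOptions` and the option names are hypothetical:

```ts
// litellm-env-sketch.ts
// Hypothetical resolution of the environment variables in the table above.

export interface LiteLLMOptions {
  host: string
  apiKey: string
  headers: Record<string, string>
  timeoutMs: number
}

export function resolveLiteLLMOptions(env = process.env): LiteLLMOptions | undefined {
  const apiKey = env.LITELLM_API_KEY
  if (!apiKey) return undefined // provider is only seeded when the key is present

  // LITELLM_BASE_URL is an alias for LITELLM_HOST; default to the local proxy
  const host = env.LITELLM_HOST ?? env.LITELLM_BASE_URL ?? "http://localhost:4000"

  // LITELLM_CUSTOM_HEADERS is a JSON object, e.g. '{"x-team":"platform"}'
  let headers: Record<string, string> = {}
  if (env.LITELLM_CUSTOM_HEADERS) {
    try {
      headers = JSON.parse(env.LITELLM_CUSTOM_HEADERS)
    } catch {
      // ignore malformed JSON rather than failing startup
    }
  }

  const timeoutMs = Number(env.LITELLM_TIMEOUT ?? 5000)

  return { apiKey, host, headers, timeoutMs }
}
```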

Verification

  1. Set LITELLM_API_KEY + LITELLM_HOST pointing to a running proxy
  2. Run opencode — LiteLLM models appear in model picker
  3. Select a model and send a message — streaming works
  4. Test aliased reasoning model (e.g. o3-custom → Mistral) — reasoning params NOT injected
  5. Test actual reasoning model — reasoning variants work
  6. Test without proxy running — graceful fallback, provider hidden
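
Step 6 hinges on discovery failing quietly when the proxy is down or slow. A minimal sketch of such a timeout/fallback guard, with illustrative names (the real loader may differ):

```ts
// graceful-fallback-sketch.ts
// Hypothetical guard around discovery (step 6 above): if the proxy is
// unreachable or slow, report zero models so the provider stays hidden
// instead of breaking startup.

export async function tryDiscoverModelIds(
  host: string,
  apiKey: string,
  timeoutMs: number,
): Promise<string[]> {
  try {
    const res = await fetch(new URL("/model/info", host), {
      headers: { Authorization: `Bearer ${apiKey}` },
      signal: AbortSignal.timeout(timeoutMs), // LITELLM_TIMEOUT, default 5000 ms
    })
    if (!res.ok) return []
    const body = (await res.json()) as { data: { model_name: string }[] }
    return body.data.map((entry) => entry.model_name)
  } catch {
    // network error or timeout: fall back to an empty model list
    return []
  }
}
```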

@github-actions
Contributor

The following comment was made by an LLM; it may be inaccurate:

Duplicate PR Found

PR #13896: "feat(opencode): add auto loading models for litellm providers"
#13896

This appears to be a duplicate or closely related PR addressing the same feature — automatic model loading for LiteLLM providers. Both PRs are implementing first-class LiteLLM support with auto-discovery. You should check if #13896 is still open/merged and coordinate with that work to avoid duplication.

balcsida changed the title from "feat: first-class LiteLLM provider with auto model discovery" to "feat: add LiteLLM provider with auto model discovery" on Feb 18, 2026
github-actions bot added the needs:compliance label (this means the issue will auto-close after 2 hours) on Feb 18, 2026
@github-actions
Contributor

This PR doesn't fully meet our contributing guidelines and PR template.

What needs to be fixed:

  • PR description is missing required template sections. Please use the PR template.

Please edit this PR description to address the above within 2 hours, or it will be automatically closed.

If you believe this was flagged incorrectly, please let a maintainer know.


Labels

needs:compliance This means the issue will auto-close after 2 hours.


Development

Successfully merging this pull request may close these issues.

[FEATURE]: Auto-load available models from LiteLLM proxy with autoload option
