feat(opencode): add LiteLLM provider with auto model discovery #14468

Open
balcsida wants to merge 17 commits into anomalyco:dev from balcsida:feat/litellm-support

Conversation

@balcsida

Issue for this PR

Closes #13891

Type of change

  • Bug fix
  • New feature
  • Refactor / code improvement
  • Documentation

What does this PR do?

Adds a native litellm provider that auto-discovers models from a LiteLLM proxy at startup. Previously users had to manually define every model in opencode.json — now setting LITELLM_API_KEY and LITELLM_HOST is enough.

Discovery: Fetches /model/info for rich metadata (pricing, limits, capabilities). Falls back to the standard /models endpoint for older LiteLLM versions or non-LiteLLM OpenAI-compatible proxies.
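
A minimal sketch of that fallback chain, assuming a fetch-based client; the function names, types, and response shapes below are illustrative approximations, not the actual litellm.ts exports:

```ts
// Hypothetical sketch of the discovery fallback chain; names and response
// shapes are approximations, not the actual litellm.ts implementation.
interface DiscoveredModel {
  id: string;
  info?: Record<string, unknown>; // rich metadata, only present via /model/info
}

async function discoverModels(
  host: string,
  apiKey: string,
  timeoutMs = 5000,
): Promise<DiscoveredModel[]> {
  const headers = { Authorization: `Bearer ${apiKey}` };
  const signal = AbortSignal.timeout(timeoutMs);

  // Prefer /model/info for pricing, limits, and capability flags.
  const rich = await fetch(`${host}/model/info`, { headers, signal }).catch(() => null);
  if (rich?.ok) {
    const body = (await rich.json()) as {
      data: Array<{ model_name: string; model_info?: Record<string, unknown> }>;
    };
    return body.data.map((m) => ({ id: m.model_name, info: m.model_info }));
  }

  // Older LiteLLM versions or plain OpenAI-compatible proxies: fall back to /models.
  const plain = await fetch(`${host}/models`, { headers, signal }).catch(() => null);
  if (!plain?.ok) return []; // proxy unreachable: provider stays hidden
  const body = (await plain.json()) as { data: Array<{ id: string }> };
  return body.data.map((m) => ({ id: m.id }));
}
```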

What gets mapped from /model/info (a rough mapping sketch follows this list):

  • Per-token pricing → per-million-token (including over-200k tier)
  • Context/output limits
  • Capabilities: reasoning, vision, PDF, audio, video, tool calling
  • Temperature support via supported_openai_params
  • Interleaved thinking inference for Claude/Anthropic models
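
A rough sketch of that mapping; the LiteLLM field names are my best approximation of the model_info payload, and the returned shape is assumed rather than taken from the actual code:

```ts
// Sketch only: field names approximate LiteLLM's model_info payload and the
// output shape is assumed, not the real opencode model config type.
interface LiteLLMModelInfo {
  input_cost_per_token?: number;
  output_cost_per_token?: number;
  input_cost_per_token_above_200k_tokens?: number; // over-200k pricing tier
  max_input_tokens?: number;
  max_output_tokens?: number;
  supports_reasoning?: boolean;
  supports_vision?: boolean; // PDF/audio/video flags would map the same way
  supports_function_calling?: boolean;
  supported_openai_params?: string[];
}

// Per-token prices are multiplied up to per-million-token prices.
const perMillion = (perToken?: number) =>
  perToken === undefined ? undefined : perToken * 1_000_000;

function mapModelInfo(info: LiteLLMModelInfo) {
  return {
    cost: {
      input: perMillion(info.input_cost_per_token),
      output: perMillion(info.output_cost_per_token),
      inputOver200k: perMillion(info.input_cost_per_token_above_200k_tokens),
    },
    limit: {
      context: info.max_input_tokens,
      output: info.max_output_tokens,
    },
    reasoning: info.supports_reasoning ?? false,
    vision: info.supports_vision ?? false,
    toolCall: info.supports_function_calling ?? false,
    // Only advertise temperature if the proxy lists it; the default is assumed here.
    temperature: info.supported_openai_params?.includes("temperature") ?? true,
  };
}
```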

Reasoning transforms: Claude models behind LiteLLM get thinking budget variants, other models get reasoningEffort. This prevents false-positive reasoning param injection for aliased models (e.g. o3-custom → Mistral).
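
To illustrate the intent (not the actual transform.ts logic), a sketch of how variant selection can branch on the metadata reported by LiteLLM rather than on the alias name; the helper name and the budget/effort values are placeholders:

```ts
// Placeholder sketch of reasoning-variant selection; values and names are
// illustrative, not the actual transform.ts exports.
type ReasoningVariants =
  | { kind: "thinking-budget"; budgetTokens: number[] }
  | { kind: "reasoning-effort"; efforts: Array<"low" | "medium" | "high"> }
  | { kind: "none" };

function reasoningVariantsFor(
  aliasId: string,
  underlyingModel: string | undefined, // e.g. the routed model reported by the proxy
  supportsReasoning: boolean,
): ReasoningVariants {
  // Trust the proxy's metadata, not the alias name, so an alias like
  // "o3-custom" that actually routes to Mistral never gets reasoning params.
  if (!supportsReasoning) return { kind: "none" };

  const target = (underlyingModel ?? aliasId).toLowerCase();
  if (target.includes("claude") || target.includes("anthropic")) {
    return { kind: "thinking-budget", budgetTokens: [4_000, 16_000, 32_000] };
  }
  return { kind: "reasoning-effort", efforts: ["low", "medium", "high"] };
}
```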

Files changed:

  • litellm.ts (new): Discovery module — fetches, parses, maps model metadata
  • provider.ts: Seeds litellm provider from env vars; custom loader calls discovery and injects models (user config takes precedence)
  • transform.ts: LiteLLM-specific reasoning variant logic
  • llm.ts: Adds explicit providerID === "litellm" to proxy detection

Environment variables:

| Env var | Description | Default |
| --- | --- | --- |
| LITELLM_API_KEY | API key for the proxy | (none) |
| LITELLM_HOST | Base URL of the proxy | http://localhost:4000 |
| LITELLM_BASE_URL | Alias for LITELLM_HOST | (none) |
| LITELLM_CUSTOM_HEADERS | JSON string of extra headers | {} |
| LITELLM_TIMEOUT | Discovery timeout in ms | 5000 |
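
For concreteness, a minimal sketch of how these variables might be read when seeding the provider; the defaults mirror the table, and the precedence of LITELLM_HOST over LITELLM_BASE_URL is an assumption:

```ts
// Minimal sketch of seeding the provider from env vars; defaults follow the
// table above, and HOST-over-BASE_URL precedence is an assumption.
function litellmEnvConfig() {
  const apiKey = process.env["LITELLM_API_KEY"];
  if (!apiKey) return undefined; // no key set: provider is not seeded

  const baseURL =
    process.env["LITELLM_HOST"] ??
    process.env["LITELLM_BASE_URL"] ??
    "http://localhost:4000";

  let headers: Record<string, string> = {};
  try {
    headers = JSON.parse(process.env["LITELLM_CUSTOM_HEADERS"] ?? "{}");
  } catch {
    // malformed JSON: ignore custom headers instead of failing startup
  }

  const timeoutMs = Number(process.env["LITELLM_TIMEOUT"] ?? "5000");

  return { apiKey, baseURL, headers, timeoutMs };
}
```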

Builds on ideas from #13896 (fallback endpoint, over-200k pricing, temperature detection) and #14277 (clean fallback chain). Supersedes my earlier #14202 which was auto-closed for not following the PR template.

How did you verify your code works?

  1. Set LITELLM_API_KEY + LITELLM_HOST pointing to a running LiteLLM proxy
  2. Run opencode — LiteLLM models appear in model picker
  3. Select a model and send a message — streaming works
  4. Test aliased reasoning model (e.g. o3-custom → Mistral) — reasoning params NOT injected
  5. Test actual Claude reasoning model — thinking variants work
  6. Test without proxy running — graceful fallback, provider hidden
  7. Test with older proxy (no /model/info) — falls back to /models with sensible defaults

Screenshots / recordings

N/A — no UI changes

Checklist

  • I have tested my changes locally
  • I have not included unrelated changes in this PR

@github-actions
Contributor

The following comment was generated by an LLM and may be inaccurate:

Based on my search, I found two PRs that are NOT duplicates but are closely related:

  1. PR #14277: feat(opencode): add dynamic model fetching for OpenAI-compatible provider

    • Related because the current PR builds on ideas from it (the clean fallback chain mentioned in the description)

  2. PR #13896: feat(opencode): add auto loading models for litellm providers

    • Related because the current PR explicitly builds on ideas from it (fallback endpoint, over-200k pricing, and temperature detection mentioned in the description)

These PRs appear to be related work in the same area but are separate features. PR #14468 (the current PR) is a distinct, more comprehensive implementation of the LiteLLM provider with auto-discovery, and it references these earlier related PRs rather than duplicating them.

No duplicate PRs found.

@balcsida force-pushed the feat/litellm-support branch from d4a5701 to 1e2b913 (February 20, 2026, 16:52)
@alexyaroshuk
Contributor

LGTM, but a test fails. Check e2e; this test fails:
e2e/projects/projects-switch.spec.ts:49:1 › switching back to a project opens the latest workspace session

Did you try testing locally? bun test:e2e:local

Development

Successfully merging this pull request may close this issue:

#13891: [FEATURE]: Auto-load available models from LiteLLM proxy with autoload option

2 participants