

feat(opencode): add dynamic model fetching for OpenAI-compatible provider #14277

Open

shivamashtikar wants to merge 1 commit into anomalyco:dev from shivamashtikar:feat/dynamic-model-fetch

Conversation

@shivamashtikar

Issue for this PR

Closes #12814

Type of change

  • Bug fix
  • New feature
  • Refactor / code improvement
  • Documentation

What does this PR do?

Adds a fetchModels config option for providers that dynamically fetches available models from the provider's API at startup, eliminating the need to manually list every model in opencode.json.

How it works:

  1. When fetchModels: true is set on a provider config, the provider initialization calls the API to discover models.
  2. It first tries the LiteLLM-specific /model/info endpoint, which returns rich metadata (context/output limits, costs, vision/reasoning/toolcall capabilities).
  3. If /model/info is unavailable (non-LiteLLM provider), it falls back to the standard OpenAI-compatible /models endpoint with sensible defaults.
  4. Manually configured models in opencode.json always override fetched ones, so users can still fine-tune specific models (a rough sketch of this flow follows the list).
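A rough sketch of the fetch-and-merge flow, not the exact provider.ts code: fetchModels matches the function name listed under Changes below, but mergeModels, the /model/info field names (model_name, max_input_tokens, supports_function_calling, ...), and the default limits are assumptions for illustration only.

type ModelInfo = {
  id: string
  contextLimit: number
  outputLimit: number
  toolcall: boolean
}

async function fetchModels(baseURL: string, apiKey?: string): Promise<ModelInfo[]> {
  const headers = apiKey ? { Authorization: `Bearer ${apiKey}` } : {}

  // 1. Try the LiteLLM-specific endpoint, which returns rich metadata.
  const info = await fetch(`${baseURL}/model/info`, { headers })
  if (info.ok) {
    const body = await info.json()
    return body.data.map((entry: any) => ({
      id: entry.model_name,
      contextLimit: entry.model_info?.max_input_tokens ?? 128_000,
      outputLimit: entry.model_info?.max_output_tokens ?? 4_096,
      toolcall: entry.model_info?.supports_function_calling ?? true,
    }))
  }

  // 2. Fall back to the standard OpenAI-compatible /models endpoint with defaults.
  const list = await fetch(`${baseURL}/models`, { headers })
  const body = await list.json()
  return body.data.map((entry: { id: string }) => ({
    id: entry.id,
    contextLimit: 128_000,
    outputLimit: 4_096,
    toolcall: true,
  }))
}

// 3. Models listed manually in opencode.json always win over fetched ones.
function mergeModels(fetched: ModelInfo[], manual: Record<string, Partial<ModelInfo>>): ModelInfo[] {
  const byId = new Map(fetched.map((m) => [m.id, m]))
  for (const [id, overrides] of Object.entries(manual)) {
    const base = byId.get(id) ?? { id, contextLimit: 128_000, outputLimit: 4_096, toolcall: true }
    byId.set(id, { ...base, ...overrides })
  }
  return [...byId.values()]
}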

Changes:

  • config.ts: Added a fetchModels boolean option to the provider config schema (a minimal schema sketch follows this list).
  • provider.ts: Added fetchModelInfo, fetchModelList, and fetchModels functions. Integrated into state() after config processing so manual models take precedence.
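For the config.ts change, a minimal sketch of what the schema addition might look like, assuming a Zod-based provider schema; the real schema has more fields than shown here.

import { z } from "zod"

const ProviderConfig = z.object({
  npm: z.string().optional(),
  options: z.record(z.string(), z.any()).optional(),
  models: z.record(z.string(), z.any()).optional(),
  // New flag: when true, the model list is fetched from the provider's API at startup.
  fetchModels: z.boolean().optional(),
})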

Example config:

{
  "provider": {
    "litellm": {
      "npm": "@ai-sdk/openai-compatible",
      "fetchModels": true,
      "options": {
        "baseURL": "https://my-litellm-proxy.com",
        "apiKey": "{env:API_KEY}"
      }
    }
  }
}

Works for any OpenAI-compatible provider (LiteLLM, Ollama, vLLM, etc.).
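For illustration, a hypothetical config for a non-LiteLLM provider such as Ollama, where /model/info is unavailable and the /models fallback applies; the baseURL below is Ollama's default OpenAI-compatible endpoint, and no API key is needed.

{
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "fetchModels": true,
      "options": {
        "baseURL": "http://localhost:11434/v1"
      }
    }
  }
}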

How did you verify your code works?

Ran bun run --conditions=browser ./src/index.ts models litellm against a LiteLLM proxy. Verified:

  • All 16 models auto-populated from /model/info endpoint
  • Context limits, output limits, costs, and capabilities (vision, reasoning, PDF, toolcall) correctly mapped
  • Models with /model/info data get real limits (e.g., claude-opus-4-5: 1M context, 128K output) instead of defaults
  • Fallback to /models works when /model/info is unavailable
  • Manual model overrides in config take precedence over fetched data
  • Built binary with --single flag and confirmed it works

Screenshots / recordings

N/A - no UI changes

Checklist

  • I have tested my changes locally
  • I have not included unrelated changes in this PR

feat(opencode): add dynamic model fetching for OpenAI-compatible providers

Add fetchModels config option that fetches available models from a
provider's API at startup. Tries LiteLLM /model/info first for rich
metadata (limits, costs, capabilities), falls back to standard /models
endpoint. Manually configured models override fetched ones.
@github-actions
Contributor

The following comment was made by an LLM; it may be inaccurate:

Found a potential duplicate:

PR #13896 - feat(opencode): add auto loading models for litellm providers

Why it's related: This PR appears to implement very similar functionality - automatic model loading for LiteLLM providers. Given that PR #14277 also adds dynamic model fetching for OpenAI-compatible providers (including LiteLLM), these PRs may be addressing the same or overlapping feature requests. You should verify whether PR #13896 was already merged or closed and check if #14277 extends/improves upon it.

@shivamashtikar
Author

This PR provides compatibility with any OpenAI-compatible endpoint, including LiteLLM, as opposed to #13896, which only handles LiteLLM.

@PratikNarola1

LGTM

@8x22b

8x22b commented Feb 19, 2026

PLEASE MERGE IT

@espetro

espetro commented Feb 20, 2026

up! we're looking forward to it!



Development

Successfully merging this pull request may close these issues.

[FEATURE]: Dynamic model switching
