feat(opencode): add dynamic model fetching for OpenAI-compatible provider#14277
shivamashtikar wants to merge 1 commit into anomalyco:dev
Conversation
…iders

Add a `fetchModels` config option that fetches available models from a provider's API at startup. It tries the LiteLLM `/model/info` endpoint first for rich metadata (limits, costs, capabilities), then falls back to the standard `/models` endpoint. Manually configured models override fetched ones.
The following comment was made by an LLM, it may be inaccurate: Found a potential duplicate: PR #13896 - feat(opencode): add auto loading models for litellm providers. Why it's related: This PR appears to implement very similar functionality - automatic model loading for LiteLLM providers. Given that PR #14277 also adds dynamic model fetching for OpenAI-compatible providers (including LiteLLM), these PRs may be addressing the same or overlapping feature requests. You should verify whether PR #13896 was already merged or closed and check if #14277 extends/improves upon it.
This PR provides compatibility with any OpenAI-compatible endpoint (including LiteLLM), as opposed to #13896, which only handles LiteLLM.
LGTM |
PLEASE MERGE IT |
up! we're looking forward to it! |
Issue for this PR
Closes #12814
Type of change
What does this PR do?
Adds a `fetchModels` config option for providers that dynamically fetches available models from the provider's API at startup, eliminating the need to manually list every model in `opencode.json`.

How it works:

- When `fetchModels: true` is set on a provider config, the provider initialization calls the API to discover models.
- It first tries the LiteLLM `/model/info` endpoint, which returns rich metadata (context/output limits, costs, vision/reasoning/toolcall capabilities).
- If `/model/info` is unavailable (non-LiteLLM provider), it falls back to the standard OpenAI-compatible `/models` endpoint with sensible defaults (see the sketch below).
- Models configured manually in `opencode.json` always override fetched ones, so users can still fine-tune specific models.
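For reference, here is a minimal TypeScript sketch of that fetch-with-fallback flow. The function name mirrors the PR's `fetchModels`, but the signature, the `FetchedModel` shape, and the exact field mapping are illustrative assumptions, not the PR's actual code; the response shapes follow LiteLLM's `/model/info` and the OpenAI-compatible `/models` list.

```ts
// Sketch only: assumed shapes and names, not the PR's implementation.
interface FetchedModel {
  id: string
  contextLimit?: number
  outputLimit?: number
}

async function fetchModels(baseURL: string, apiKey: string): Promise<FetchedModel[]> {
  const headers = { Authorization: `Bearer ${apiKey}` }

  // Try LiteLLM's /model/info first: it returns rich per-model metadata.
  const info = await fetch(`${baseURL}/model/info`, { headers }).catch(() => undefined)
  if (info?.ok) {
    const body = (await info.json()) as {
      data: {
        model_name: string
        model_info?: { max_input_tokens?: number; max_output_tokens?: number }
      }[]
    }
    return body.data.map((m) => ({
      id: m.model_name,
      contextLimit: m.model_info?.max_input_tokens,
      outputLimit: m.model_info?.max_output_tokens,
    }))
  }

  // Fall back to the standard OpenAI-compatible /models list, which only
  // provides model ids, so callers apply sensible default limits.
  const list = await fetch(`${baseURL}/models`, { headers })
  if (!list.ok) return []
  const body = (await list.json()) as { data: { id: string }[] }
  return body.data.map((m) => ({ id: m.id }))
}
```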
Changes:

- `config.ts`: Added `fetchModels` boolean option to the provider config schema.
- `provider.ts`: Added `fetchModelInfo`, `fetchModelList`, and `fetchModels` functions. Integrated into `state()` after config processing so manual models take precedence.

Example config:
{ "provider": { "litellm": { "npm": "@ai-sdk/openai-compatible", "fetchModels": true, "options": { "baseURL": "https://my-litellm-proxy.com", "apiKey": "{env:API_KEY}" } } } }Works for any OpenAI-compatible provider (LiteLLM, Ollama, vLLM, etc).
How did you verify your code works?
Ran `bun run --conditions=browser ./src/index.ts models litellm` against a LiteLLM proxy. Verified:

- Models are fetched from the `/model/info` endpoint
- Models with `/model/info` data get real limits (e.g., claude-opus-4-5: 1M context, 128K output) instead of defaults
- Fallback to `/models` works when `/model/info` is unavailable
- Ran with the `--single` flag and confirmed it works
N/A - no UI changes
Checklist