feat: auto-detect local Ollama models #16653
Conversation
- Add Ollama detection utility that queries `localhost:11434/api/tags`
- Auto-register Ollama provider when running locally with detected models
- Add `--local` flag to the `/models` command to show only local models
- Enable tool calling for Ollama models via a custom loader
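The detection step above can be sketched as follows. This is illustrative, not the PR's actual code; the function names are invented, and only the response shape of Ollama's `GET /api/tags` endpoint (`{ models: [{ name: ... }] }`) is taken as given:

```typescript
// Shape of the relevant part of Ollama's GET /api/tags response.
interface OllamaTagsResponse {
  models: { name: string }[];
}

// Extract installed model names from the tags response.
function parseOllamaModels(body: OllamaTagsResponse): string[] {
  return body.models.map((m) => m.name);
}

// Probe the local Ollama daemon; an empty list means "not running" or "no models".
// The default base URL and the 1s timeout are assumptions for this sketch.
async function detectOllamaModels(
  baseUrl = "http://localhost:11434",
): Promise<string[]> {
  try {
    const res = await fetch(`${baseUrl}/api/tags`, {
      signal: AbortSignal.timeout(1000),
    });
    if (!res.ok) return [];
    return parseOllamaModels((await res.json()) as OllamaTagsResponse);
  } catch {
    return []; // Ollama not reachable locally
  }
}
```

Returning an empty list on any failure lets the caller skip provider registration silently instead of surfacing an error when Ollama simply isn't installed.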
This PR doesn't fully meet our contributing guidelines and PR template. What needs to be fixed:
Please edit this PR description to address the above within 2 hours, or it will be automatically closed. If you believe this was flagged incorrectly, please let a maintainer know.
The following comment was made by an LLM; it may be inaccurate: Based on my search, I found several potentially related PRs:
I recommend checking PR #11951 first as it specifically mentions probing Ollama models, which is most closely aligned with the current PR's goal of auto-detecting local Ollama models.
- Auto-detect reasoning models (qwen3, phi4, gemma3, llama3, deepseek, qwq, gpt-oss)
- Add config support to force reasoning ON/OFF via `capabilities.reasoning`
- Enable interleaved reasoning with the `reasoning_content` field for reasoning models
- Increase token limits for reasoning models (context: 200k, output: 32k)
- Add default `reasoningEffort: medium` for reasoning models
- Add `think` parameter support for the Ollama API (`true`/`false`, or `low`/`medium`/`high` for GPT-OSS)
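The first two bullets can be sketched as a name-based check with a config override. The model-family list comes from the list above; the function names and precedence logic are illustrative assumptions, not the PR's actual implementation:

```typescript
// Model families treated as reasoning-capable (from the PR description).
const REASONING_FAMILIES = [
  "qwen3", "phi4", "gemma3", "llama3", "deepseek", "qwq", "gpt-oss",
];

// Heuristic: a model is a reasoning model if its ID contains a known family name.
function isReasoningModel(modelId: string): boolean {
  const id = modelId.toLowerCase();
  return REASONING_FAMILIES.some((family) => id.includes(family));
}

// An explicit capabilities.reasoning value in config forces the result ON/OFF;
// otherwise fall back to name-based detection.
function resolveReasoning(modelId: string, configOverride?: boolean): boolean {
  return configOverride ?? isReasoningModel(modelId);
}
```

A substring match like this is deliberately loose (it matches tags such as `qwen3:8b` or `deepseek-r1`), which is why the config override matters for false positives.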
- `configModel.reasoning` instead of `configModel.capabilities.reasoning`
- `configModel.interleaved` instead of `configModel.capabilities.interleaved`
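A hedged illustration of the requested `opencode.json` shape, with the two keys at the model level rather than under a `capabilities` object. Only the key names come from the comment above; the surrounding provider/model structure and the model ID are assumptions:

```json
{
  "provider": {
    "ollama": {
      "models": {
        "qwen3:8b": {
          "reasoning": true,
          "interleaved": true
        }
      }
    }
  }
}
```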
Allow variants (low/medium/high) to be generated for Ollama models, which were previously blocked by the deepseek/minimax/glm/mistral/kimi check in `ProviderTransform.variants()`.
Reasoning detection previously ran only for auto-detected Ollama models and was skipped when Ollama was configured in `opencode.json`. It now applies to all Ollama models regardless of how they are loaded.
This pull request has been automatically closed because it was not updated to meet our contributing guidelines within the 2-hour window. Feel free to open a new pull request that follows our guidelines.
Issue for this PR
Closes #
Type of change
What does this PR do?
Please provide a description of the issue, the changes you made to fix it, and why they work. It is expected that you understand why your changes work and if you do not understand why at least say as much so a maintainer knows how much to value the PR.
Added Ollama support for broader local model coverage.
If you paste a large, clearly AI-generated description here, your PR may be IGNORED or CLOSED!
How did you verify your code works?
Have used and tested without issues.
Screenshots / recordings
N/A
If this is a UI change, please include a screenshot or recording.
Checklist
If you do not follow this template your PR will be automatically rejected.