
feat: auto-detect local Ollama models#16653

Closed
koryboyd wants to merge 7 commits into anomalyco:dev from koryboyd:dev

Conversation

@koryboyd koryboyd commented Mar 9, 2026

  • Add Ollama detection utility that queries localhost:11434/api/tags
  • Auto-register Ollama provider when running locally with detected models
  • Add --local flag to /models command to show only local models
  • Enable tool calling for Ollama models via custom loader
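The detection utility itself is not shown in this excerpt. A minimal sketch of how such a probe might work, assuming Ollama's standard `/api/tags` response shape (`{ models: [{ name: ... }] }`); the function names `parseOllamaModels` and `detectOllamaModels` are illustrative, not the PR's actual identifiers:

```typescript
// Shape of the relevant part of Ollama's GET /api/tags response.
interface OllamaTagsResponse {
  models: { name: string }[];
}

// Pure helper: extract model names from an /api/tags payload.
function parseOllamaModels(payload: OllamaTagsResponse): string[] {
  return payload.models.map((m) => m.name);
}

// Probe the local Ollama daemon; resolve to [] if it is not running,
// so the caller can silently skip provider registration.
async function detectOllamaModels(
  base = "http://localhost:11434",
): Promise<string[]> {
  try {
    const res = await fetch(`${base}/api/tags`, {
      signal: AbortSignal.timeout(1000),
    });
    if (!res.ok) return [];
    return parseOllamaModels((await res.json()) as OllamaTagsResponse);
  } catch {
    return [];
  }
}
```

Returning an empty list when the daemon is unreachable lets auto-registration degrade quietly instead of surfacing a connection error on machines without Ollama.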

Issue for this PR

Closes #

Type of change

  • [ ] Bug fix
  • [x] New feature
  • [x] Refactor / code improvement
  • [ ] Documentation

What does this PR do?

Please provide a description of the issue, the changes you made to fix it, and why they work. You are expected to understand why your changes work; if you do not, say so, so a maintainer knows how much weight to give the PR.
Added Ollama auto-detection for broader local model support.
If you paste a large, clearly AI-generated description here, your PR may be IGNORED or CLOSED!

How did you verify your code works?

I have used and tested it locally without issues.

Screenshots / recordings

NA
If this is a UI change, please include a screenshot or recording.

Checklist

  • [x] I have tested my changes locally
  • [x] I have not included unrelated changes in this PR

If you do not follow this template your PR will be automatically rejected.

- Add Ollama detection utility that queries localhost:11434/api/tags
- Auto-register Ollama provider when running locally with detected models
- Add --local flag to /models command to show only local models
- Enable tool calling for Ollama models via custom loader

github-actions bot commented Mar 9, 2026

This PR doesn't fully meet our contributing guidelines and PR template.

What needs to be fixed:

  • No "Type of change" checkbox is checked. Please select at least one.
  • Not all checklist items are checked. Please confirm you have tested locally and have not included unrelated changes.

Please edit this PR description to address the above within 2 hours, or it will be automatically closed.

If you believe this was flagged incorrectly, please let a maintainer know.

@github-actions github-actions bot added the needs:compliance This means the issue will auto-close after 2 hours. label Mar 9, 2026

github-actions bot commented Mar 9, 2026

The following comment was made by an LLM; it may be inaccurate:

Based on my search, I found several potentially related PRs:

  1. PR #10758 - feat(provider): auto-detect Ollama context limits

  2. PR #11951 - opencode: added logic to probe loaded models from lmstudio, ollama and...

  3. PR #10558 - fix: improve tool name repair for local/Ollama models

  4. PR #14468 - feat(opencode): add LiteLLM provider with auto model discovery

I recommend checking PR #11951 first as it specifically mentions probing Ollama models, which is most closely aligned with the current PR's goal of auto-detecting local Ollama models.

@koryboyd koryboyd marked this pull request as draft March 9, 2026 01:38
@koryboyd koryboyd marked this pull request as ready for review March 9, 2026 01:38
Kory Boyd and others added 3 commits March 9, 2026 12:24
- Auto-detect reasoning models (qwen3, phi4, gemma3, llama3, deepseek, qwq, gpt-oss)
- Add config support to force reasoning ON/OFF via capabilities.reasoning
- Enable interleaved with reasoning_content field for reasoning models
- Increase token limits (context: 200k, output: 32k) for reasoning models
- Add default reasoningEffort: medium for reasoning models
- Add think parameter support for Ollama API (true/false or low/medium/high for GPT-OSS)
- configModel.reasoning instead of configModel.capabilities.reasoning
- configModel.interleaved instead of configModel.capabilities.interleaved
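The reasoning detection described in the commits above could be sketched as a simple name-based check. The family list comes straight from the commit message; the function name and the boolean config override (mirroring `configModel.reasoning`) are assumptions about the implementation, not the actual source:

```typescript
// Model families treated as reasoning-capable (from the commit message).
const REASONING_FAMILIES = [
  "qwen3", "phi4", "gemma3", "llama3", "deepseek", "qwq", "gpt-oss",
];

// Hypothetical helper: an explicit config value forces reasoning ON/OFF;
// otherwise fall back to substring matching on the model name.
function isReasoningModel(model: string, configReasoning?: boolean): boolean {
  if (configReasoning !== undefined) return configReasoning;
  const name = model.toLowerCase();
  return REASONING_FAMILIES.some((family) => name.includes(family));
}
```

Checking the config override first matches the commit's intent that `configModel.reasoning` can force the flag regardless of what the name-based heuristic would decide.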
Kory Boyd added 2 commits March 9, 2026 12:46
Allow variants (low/medium/high) to be generated for Ollama models
that were previously blocked by the deepseek/minimax/glm/mistral/kimi
check in ProviderTransform.variants()
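A hedged sketch of the gate change this commit describes, assuming `ProviderTransform.variants()` previously keyed off model-family substrings; the function and parameter names here are illustrative, not the actual source:

```typescript
// Families that were already allowed to generate low/medium/high variants.
const VARIANT_FAMILIES = ["deepseek", "minimax", "glm", "mistral", "kimi"];

// After the change, Ollama-provided models bypass the family check
// entirely, so any detected local model can get effort variants.
function shouldGenerateVariants(providerID: string, modelID: string): boolean {
  if (providerID === "ollama") return true;
  return VARIANT_FAMILIES.some((f) => modelID.toLowerCase().includes(f));
}
```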
Reasoning detection was only running for auto-detected Ollama models,
but was skipped when Ollama was configured in opencode.json. Now applies
reasoning detection to all Ollama models regardless of how they're loaded.

github-actions bot commented Mar 9, 2026

This pull request has been automatically closed because it was not updated to meet our contributing guidelines within the 2-hour window.

Feel free to open a new pull request that follows our guidelines.

@github-actions github-actions bot removed the needs:compliance This means the issue will auto-close after 2 hours. label Mar 9, 2026
@github-actions github-actions bot closed this Mar 9, 2026
