1ru.7: Make LM Studio and Ollama first-class local endpoint integrations#13

Closed
penso wants to merge 1 commit into main from beads-polyphony-1ru.7

Conversation

penso commented Mar 16, 2026

Automated handoff for 1ru.7.

Issue:
Base branch: main
Head branch: beads-polyphony-1ru.7
Commit: dba0627


penso commented Mar 16, 2026

Summary

The branch makes ollama and lmstudio usable as first-class openai_chat providers, with sensible default local base URLs and no required API key. The main review concern is edge-case behavior rather than the happy path.

Risks

  • crates/agent-openai/src/lib.rs:118, crates/agent-openai/src/lib.rs:288, crates/agent-openai/src/lib.rs:362: the new "skip unreachable local provider" path depends on reqwest timeouts or connection-refused errors, but OpenAiRuntime still builds a default client with no request timeout. If a local endpoint accepts the connection and then stalls, model discovery can still hang instead of degrading cleanly.
  • crates/workflow/src/render.rs:347: LM Studio auth fallback only checks LMSTUDIO_API_KEY. LM Studio documents LM_API_TOKEN for bearer auth, so authenticated LM Studio setups will not work out of the box unless users duplicate the token into repo-specific config or set api_key explicitly.
  • crates/workflow/src/render.rs:472, crates/agent-openai/src/lib.rs:103: OLLAMA_BASE_URL, LMSTUDIO_BASE_URL, and explicit base_url overrides are accepted verbatim. There is no validation that the override still points at the expected OpenAI-compatible /v1 base, so a typo like http://host:1234 will fail later with a fairly opaque runtime error.

Recommended human checks

  • Smoke-test both providers against real local servers, including lmstudio with fetch_models = true.
  • Try a dead-but-listening local endpoint once, not just a refused port, and confirm discovery behavior is acceptable.
  • Verify an authenticated LM Studio setup using only LM_API_TOKEN, or document the required env var alias before merge.

penso closed this Mar 24, 2026