
feat: support OpenClaw provider ecosystem (minimax-portal, ollama, custom) in memos-local-openclaw #1306

@Ink-kai


Summary

memos-local-openclaw fails with multiple provider combinations common in OpenClaw deployments:

  1. minimax-portal: the OpenClaw default provider; it speaks the Anthropic API, but the provider switch cases only match the literal "anthropic"
  2. Ollama /v1/embeddings: the OpenAI-compatibility endpoint is broken, and the fallback path silently drops vector embeddings
  3. Ollama /v1/chat/completions: the OpenAI-compatibility endpoint is broken, causing summarizer timeouts
  4. Timeout too short: the judgeDedup default of 15s is insufficient for local model loading

Root Causes

  1. Provider switch cases match only generic names ("anthropic") and miss custom provider keys such as "minimax-portal"
  2. Ollama's OpenAI compatibility layer is broken for both /v1/embeddings and /v1/chat/completions, while the native APIs work
  3. The default 15s timeout is too short: Ollama takes 2-6s to load a model on each cold request

Proposed Solutions

1. Add explicit cases for common OpenClaw providers

Add minimax-portal (and other known providers) to all six switch functions:

case "anthropic":
case "minimax-portal":
  return summarizeAnthropic(...);
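An alternative to repeating the new cases across all six switch functions would be to normalize provider aliases in one place before dispatch. A minimal sketch; the alias table and normalizeProvider helper are hypothetical, not existing plugin code:

```typescript
// Hypothetical alias table: maps OpenClaw provider keys to the generic
// provider family the existing switch cases already handle.
const PROVIDER_ALIASES: Record<string, string> = {
  "minimax-portal": "anthropic", // minimax-portal speaks the Anthropic API
};

// Resolve a configured provider name to its generic family before dispatch,
// so every switch function only sees "anthropic", never "minimax-portal".
function normalizeProvider(provider: string): string {
  const key = provider.toLowerCase();
  return PROVIDER_ALIASES[key] ?? key;
}
```

With this, each switch keeps a single `case "anthropic":` and new aliases are added in one map instead of six functions.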

2. Support the Ollama native API

In the embedding switch:

case "ollama":
  return embedOllamaNative(texts, cfg, log);  // uses /api/embeddings

And in the summarizer switch:

case "ollama":
  return summarizeOllamaNative(text, cfg, log);  // uses /api/generate
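For reference, Ollama's native embeddings endpoint accepts a model and a prompt and returns a single embedding per request. A minimal sketch of what embedOllamaNative could look like; the OllamaConfig shape is assumed and the log parameter is omitted here:

```typescript
interface OllamaConfig {
  baseUrl: string;    // e.g. "http://localhost:11434" (assumed field name)
  embedModel: string; // e.g. "nomic-embed-text" (assumed field name)
}

// Sketch: call Ollama's native /api/embeddings endpoint once per text.
// The native endpoint takes { model, prompt } and returns { embedding }.
async function embedOllamaNative(
  texts: string[],
  cfg: OllamaConfig,
): Promise<number[][]> {
  const out: number[][] = [];
  for (const text of texts) {
    const res = await fetch(`${cfg.baseUrl}/api/embeddings`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model: cfg.embedModel, prompt: text }),
    });
    if (!res.ok) throw new Error(`ollama embeddings failed: ${res.status}`);
    const data = (await res.json()) as { embedding: number[] };
    out.push(data.embedding);
  }
  return out;
}
```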

3. Increase default timeouts for local models

  • judgeDedup: 15s → 120s
  • summarize: 60s → 120s
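The new defaults could live in one place and remain user-overridable. A sketch assuming millisecond fields; the names are hypothetical:

```typescript
// Proposed defaults (in ms), overridable via user config. Cold-start model
// loading on Ollama takes several seconds, so both budgets allow 120s.
const DEFAULT_TIMEOUTS = {
  judgeDedupMs: 120_000, // was 15_000
  summarizeMs: 120_000,  // was 60_000
};

// Merge user-supplied overrides over the defaults.
function resolveTimeouts(user: Partial<typeof DEFAULT_TIMEOUTS> = {}) {
  return { ...DEFAULT_TIMEOUTS, ...user };
}
```

Users on fast cloud providers could then dial the budgets back down without a code change.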

4. Add startup health-check

Fail fast if configured providers are unreachable.
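One possible shape for such a probe: request each configured provider's base URL with a short timeout and abort startup if any fails. A sketch only; the function name and endpoint map are hypothetical:

```typescript
// Sketch of a fail-fast startup probe: fetch each configured provider's
// base URL with a short timeout and throw if any is unreachable.
async function healthCheck(
  endpoints: Record<string, string>,
  timeoutMs = 5_000,
): Promise<void> {
  const failures: string[] = [];
  for (const [name, url] of Object.entries(endpoints)) {
    try {
      const res = await fetch(url, { signal: AbortSignal.timeout(timeoutMs) });
      if (!res.ok) failures.push(`${name} (${res.status})`);
    } catch {
      failures.push(name); // network error or timeout
    }
  }
  if (failures.length > 0) {
    throw new Error(`unreachable providers: ${failures.join(", ")}`);
  }
}
```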
