feat: add MiniMax as a first-class LLM provider#1250

Open
octo-patch wants to merge 5 commits into MemTensor:dev-20260319-v2.0.10 from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

Add MiniMax as a first-class LLM provider in MemOS, following the same clean pattern as the existing Qwen and DeepSeek integrations.

What is MiniMax?

MiniMax is an AI company that provides OpenAI-compatible LLM APIs. Their latest models include:

  • MiniMax-M2.5 — general-purpose model with 204K context window
  • MiniMax-M2.5-highspeed — faster variant for latency-sensitive workloads

API Base URL: https://api.minimax.io/v1 (fully OpenAI-compatible)
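
Because the endpoint is OpenAI-compatible, it can be smoke-tested directly with the official openai Python SDK before wiring it into MemOS. A minimal sketch (model name and endpoint taken from above; the API key is a placeholder):

# Minimal sanity check of the MiniMax endpoint via the official OpenAI SDK.
# Assumes `pip install openai`; the API key below is a placeholder.
from openai import OpenAI

client = OpenAI(
    api_key="your-api-key",
    base_url="https://api.minimax.io/v1",  # OpenAI-compatible endpoint
)
resp = client.chat.completions.create(
    model="MiniMax-M2.5",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)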

Changes

  • New file: src/memos/llms/minimax.py with a MinimaxLLM class inheriting from OpenAILLM (a rough sketch follows this list)
  • Config: MinimaxLLMConfig in src/memos/configs/llm.py with api_key, api_base, extra_body fields
  • Factory: Registered minimax backend in both LLMFactory and LLMConfigFactory
  • API Config: Added minimax_config() to APIConfig with environment variable support (MINIMAX_API_KEY, MINIMAX_API_BASE)
  • Backend mapping: Added minimax to backend_model dicts in get_product_default_config() and create_user_config()
  • Examples: Added MiniMax usage scenario (Scenario 7) in examples/basic_modules/llm.py
  • Tests: Unit tests for config validation, generate, and streaming in tests/llms/test_minimax.py and tests/configs/test_llm.py
  • Docs: Updated .env.example and README.md to list MiniMax as a supported provider
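
The provider class itself stays thin, since the OpenAI-compatible plumbing lives in the base class. A rough sketch of the shape the list above describes (the OpenAILLM import path and constructor contract are assumptions, not verified against the codebase):

# Sketch of src/memos/llms/minimax.py as described in the change list.
# The import paths and the OpenAILLM constructor contract are assumptions.
from memos.llms.openai import OpenAILLM
from memos.configs.llm import MinimaxLLMConfig


class MinimaxLLM(OpenAILLM):
    """MiniMax chat LLM; reuses the OpenAI-compatible client from OpenAILLM."""

    def __init__(self, config: MinimaxLLMConfig):
        # Request handling (generate, streaming) is inherited from OpenAILLM;
        # only the config type and backend registration differ.
        super().__init__(config)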

Usage

from memos.configs.llm import LLMConfigFactory
from memos.llms.factory import LLMFactory

config = LLMConfigFactory.model_validate({
    "backend": "minimax",
    "config": {
        "model_name_or_path": "MiniMax-M2.5",
        "api_key": "your-api-key",
        "api_base": "https://api.minimax.io/v1",
        "temperature": 0.7,
        "max_tokens": 1024,
    },
})
llm = LLMFactory.from_config(config)
response = llm.generate([{"role": "user", "content": "Hello!"}])
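
The tests also cover streaming. Assuming the streaming entry point mirrors the other providers, usage would look roughly like this (the method name generate_stream is a guess from that pattern, not confirmed by this PR):

# Hypothetical streaming call; the method name mirrors what the test list
# implies but is not verified here -- check the Qwen/DeepSeek providers.
for chunk in llm.generate_stream([{"role": "user", "content": "Hello!"}]):
    print(chunk, end="", flush=True)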

Or via environment variables:

MOS_CHAT_MODEL_PROVIDER=minimax
MOS_CHAT_MODEL=MiniMax-M2.5
MINIMAX_API_KEY=your-api-key
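
For illustration, the environment-variable path could reduce to something like the following inside APIConfig (the env var names come from the change list; the field names and default values are assumptions):

# Hypothetical shape of APIConfig.minimax_config(); the env var names come
# from the change list above, the default values are assumptions.
import os

def minimax_config() -> dict:
    return {
        "model_name_or_path": os.getenv("MOS_CHAT_MODEL", "MiniMax-M2.5"),
        "api_key": os.getenv("MINIMAX_API_KEY", ""),
        "api_base": os.getenv("MINIMAX_API_BASE", "https://api.minimax.io/v1"),
    }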

Test Plan

  • Unit tests pass: pytest tests/llms/test_minimax.py (4 tests)
  • Config tests pass: pytest tests/configs/test_llm.py (6 tests)
  • Existing tests unaffected: pytest tests/llms/test_deepseek.py tests/llms/test_qwen.py (4 tests)
  • Integration tested with real MiniMax API (both streaming and non-streaming)
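
A config-validation test in the spirit of the list above might look like this (a sketch; it assumes the factory registration behaves as described):

# Sketch of a registration test in the style of tests/configs/test_llm.py.
from memos.configs.llm import LLMConfigFactory


def test_minimax_backend_is_registered():
    config = LLMConfigFactory.model_validate({
        "backend": "minimax",
        "config": {
            "model_name_or_path": "MiniMax-M2.5",
            "api_key": "test-key",
            "api_base": "https://api.minimax.io/v1",
        },
    })
    assert config.backend == "minimax"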

hijzy and others added 4 commits March 16, 2026 15:25
Add MiniMax LLM support via the OpenAI-compatible API, following the
same pattern as the existing Qwen and DeepSeek providers.

Changes:
- Add MinimaxLLMConfig with api_key, api_base, extra_body fields
- Add MinimaxLLM class inheriting from OpenAILLM
- Register minimax backend in LLMFactory and LLMConfigFactory
- Add minimax_config() to APIConfig with env var support
  (MINIMAX_API_KEY, MINIMAX_API_BASE)
- Add minimax to backend_model dicts in product/user config
- Add MiniMax example scenario in examples/basic_modules/llm.py
- Add unit tests for config and LLM (generate, stream, think prefix)
- Update .env.example and README with MiniMax provider info

MiniMax API: https://api.minimax.io/v1 (OpenAI-compatible)
Models: MiniMax-M2.5, MiniMax-M2.5-highspeed (204K context)
@CaralHsi CaralHsi requested a review from endxxxx March 17, 2026 06:11
@CaralHsi CaralHsi changed the base branch from main to dev-20260319-v2.0.10 March 17, 2026 06:12