
Conversation


@roomote roomote bot commented Dec 13, 2025

Related GitHub Issue

Closes: #10071

Description

This PR attempts to address a request from Issue #10071 to add DeepSeek V3.2 to the Fireworks AI provider.

Changes made:

  • Added accounts/fireworks/models/deepseek-v3p2 to the FireworksModelId type
  • Added DeepSeek V3.2 model configuration with the following specs:
    • maxTokens: 16384
    • contextWindow: 163840
    • supportsNativeTools: true
    • Pricing similar to V3.1 (input: $0.56/M, output: $1.68/M)
  • Added corresponding test case for the new model
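For orientation, the new entry can be mirrored as a plain dictionary. This is a sketch only: the actual definition is a TypeScript object in fireworks.ts, and the field names here simply restate the bullets above.

```python
# Illustrative sketch only: the real definition is TypeScript in fireworks.ts;
# this Python dict mirrors the spec bullets listed above.
DEEPSEEK_V3P2 = {
    "id": "accounts/fireworks/models/deepseek-v3p2",
    "maxTokens": 16_384,
    "contextWindow": 163_840,
    "supportsNativeTools": True,
    "inputPrice": 0.56,   # USD per million input tokens
    "outputPrice": 1.68,  # USD per million output tokens
}
```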

Feedback and guidance are welcome.

Test Procedure

  • Ran the Fireworks provider test suite: cd src && npx vitest run api/providers/__tests__/fireworks.spec.ts
  • All 26 tests passed, including the new test for DeepSeek V3.2

Pre-Submission Checklist

  • Issue Linked: This PR is linked to an approved GitHub Issue (see "Related GitHub Issue" above).
  • Scope: My changes are focused on the linked issue (one major feature/fix per PR).
  • Self-Review: I have performed a thorough self-review of my code.
  • Testing: New and/or updated tests have been added to cover my changes (if applicable).
  • Documentation Impact: I have considered if my changes require documentation updates (see "Documentation Updates" section below).
  • Contribution Guidelines: I have read and agree to the Contributor Guidelines.

Documentation Updates

  • No documentation updates are required.

Additional Notes

The model configuration is based on the existing DeepSeek V3.1 model specs. The actual Fireworks API model ID follows the naming convention used for other DeepSeek models (v3, v3p1, v3p2).


Important

Add DeepSeek V3.2 model to Fireworks AI provider with configuration and tests.

  • Model Addition:
    • Added deepseek-v3p2 to FireworksModelId in fireworks.ts.
    • Configured DeepSeek V3.2 with maxTokens: 16384, contextWindow: 163840, supportsNativeTools: true, and pricing similar to V3.1.
  • Handler Updates:
    • Updated FireworksHandler in fireworks.ts to handle the new model.
    • Ensured reasoning_effort parameter is included when enabled.
  • Testing:
    • Added tests in fireworks.spec.ts for DeepSeek V3.2 configuration and reasoning effort handling.
    • Verified all tests pass, including new tests for DeepSeek V3.2.

This description was created by Ellipsis for d8684b0.


roomote bot commented Dec 13, 2025


Review complete. No issues found.

This commit adds the Kimi K2 Thinking model (accounts/fireworks/models/kimi-k2-thinking) to the Fireworks AI provider. The model configuration correctly includes supportsReasoningEffort: ["low", "medium", "high"] with defaultToolProtocol: "native", following the established pattern for reasoning-enabled models like DeepSeek V3.1/V3.2. Test coverage has been added to verify the new model configuration.


Mention @roomote in a comment to request specific changes to this pull request or fix all unresolved issues.

@kavehsfv

@roomote please make sure you add the thinking flag to this model as well


roomote bot commented Dec 13, 2025


Added supportsReasoningBinary: true to DeepSeek V3.2 model to enable thinking mode. All local checks passed.


@hannesrudolph hannesrudolph added the Issue/PR - Triage New issue. Needs quick review to confirm validity and assign labels. label Dec 13, 2025
@kavehsfv

@roomote

The way you added reasoning is not working. Could you check the following Fireworks AI documentation for this?

https://docs.fireworks.ai/guides/reasoning#reasoning

Reasoning

How to use reasoning with Fireworks models

For thinking/reasoning models, Fireworks provides access to the model's
reasoning process through the reasoning_content field. This field contains the
model's internal reasoning, which would otherwise appear in <think></think>
tags within the content field. For some models, the reasoning content may
instead be included directly in the content field itself.
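Since some models emit their reasoning inline in `<think></think>` tags rather than in a separate reasoning_content field, a client may need to split the two apart itself. A minimal sketch of such a splitter follows; the helper name is made up and is not part of the Fireworks SDK.

```python
import re

def split_reasoning(content: str) -> tuple[str, str]:
    """Split inline <think>...</think> reasoning out of a content string.

    Returns (reasoning, answer). Hypothetical helper, not an SDK function.
    """
    match = re.search(r"<think>(.*?)</think>", content, flags=re.DOTALL)
    if not match:
        return "", content
    reasoning = match.group(1).strip()
    answer = (content[:match.start()] + content[match.end():]).strip()
    return reasoning, answer

reasoning, answer = split_reasoning("<think>25 * 37 = 925</think>The answer is 925.")
```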

Prerequisites

We recommend using the Fireworks Python SDK to work with
reasoning, as it supports Fireworks-specific parameters and response fields.

The SDK is currently in alpha. Use the `--pre` flag when installing to get the latest version:

```bash
pip install --pre fireworks-ai
# or:
poetry add --pre fireworks-ai
uv add --pre fireworks-ai
```

Basic usage

Select a reasoning model from our serverless model library.

from fireworks import Fireworks

client = Fireworks()

completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "What is 25 * 37?",
        }
    ],
    model="accounts/fireworks/models/<reasoning-model>",
)

for choice in completion.choices:
    # Access the reasoning content (thinking process)
    if choice.message.reasoning_content:
        print("Reasoning:", choice.message.reasoning_content)
    print("Answer:", choice.message.content)

Controlling reasoning effort

You can control the reasoning token length using the reasoning_effort parameter:

completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Solve this step by step: If a train travels at 60 mph for 2.5 hours, how far does it go?",
        }
    ],
    model="accounts/fireworks/models/<reasoning-model>",
    reasoning_effort="medium",
)

See the reasoning_effort parameter for more details.

Streaming with reasoning content

When streaming, the reasoning content is available in each chunk's delta:

from fireworks import Fireworks

client = Fireworks()

stream = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "What is the square root of 144?",
        }
    ],
    model="accounts/fireworks/models/<reasoning-model>",
    reasoning_effort="medium",
    stream=True,
)

reasoning_parts = []
content_parts = []

for chunk in stream:
    delta = chunk.choices[0].delta
    if delta.reasoning_content:
        reasoning_parts.append(delta.reasoning_content)
    if delta.content:
        content_parts.append(delta.content)

print("Reasoning:", "".join(reasoning_parts))
print("Answer:", "".join(content_parts))

Interleaved thinking

When building multi-turn tool-calling agents with models that support
interleaved thinking, you must include the reasoning_content from previous
assistant turns in subsequent requests. You can do this in two ways:

  1. Pass the Message object directly (recommended) - The SDK message object
    already contains the reasoning_content field alongside content and tool_calls
  2. Manually include reasoning_content - When constructing messages as
    dictionaries, explicitly add the reasoning_content field
Interleaved thinking is triggered when the last message in your API request has `"role": "tool"`, enabling the model to use its previous reasoning process when responding to the tool result. If a model does not support interleaved thinking, it simply ignores the extra reasoning context, so this pattern is safe to use broadly.

Here's how to preserve reasoning context using both approaches:

```python
# First turn: Get a response with reasoning_content
first_response = client.chat.completions.create(
    messages=[{"role": "user", "content": "What is 15 + 27?"}],
    model="accounts/fireworks/models/<reasoning-model>",
    tools=tools,
)
# The assistant message contains reasoning_content, content, and tool_calls
assistant_message = first_response.choices[0].message
# assistant_message.reasoning_content -> "The user is asking for addition..."
# assistant_message.tool_calls -> [ToolCall(id="...", function=...)]

# Second turn: Pass the Message object directly
# This automatically includes reasoning_content alongside the message
second_response = client.chat.completions.create(
    messages=[
        {"role": "user", "content": "What is 15 + 27?"},
        assistant_message,  # Pass the complete Message object
        {"role": "tool", "content": "42", "tool_call_id": assistant_message.tool_calls[0].id},
    ],
    model="accounts/fireworks/models/<reasoning-model>",
    tools=tools,
)
```
```python
# First turn: Get a response with reasoning_content
first_response = client.chat.completions.create(
    messages=[{"role": "user", "content": "What is 15 + 27?"}],
    model="accounts/fireworks/models/<reasoning-model>",
    tools=tools,
)
assistant_message = first_response.choices[0].message

# Second turn: Manually construct the assistant message dict
# Include reasoning_content explicitly alongside role, content, and tool_calls
second_response = client.chat.completions.create(
    messages=[
        {"role": "user", "content": "What is 15 + 27?"},
        {
            "role": "assistant",
            "content": assistant_message.content,
            "reasoning_content": assistant_message.reasoning_content,  # Include reasoning
            "tool_calls": assistant_message.tool_calls,
        },
        {"role": "tool", "content": "42", "tool_call_id": assistant_message.tool_calls[0].id},
    ],
    model="accounts/fireworks/models/<reasoning-model>",
    tools=tools,
)
```
If you construct the assistant message manually as a dictionary but omit the `reasoning_content` field, the model will not have access to its previous reasoning process.
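One way to avoid dropping the field is a small helper that always carries reasoning_content along when flattening an assistant message into a dictionary. This is a sketch using a stand-in message object; the attribute names follow the SDK fields shown above, but the helper itself is hypothetical.

```python
from types import SimpleNamespace

def assistant_to_dict(msg) -> dict:
    """Flatten an assistant message to a dict, preserving reasoning_content.

    `msg` is any object exposing content / reasoning_content / tool_calls,
    as the SDK Message does. Hypothetical helper, not an SDK function.
    """
    out = {"role": "assistant", "content": msg.content}
    if getattr(msg, "reasoning_content", None):
        out["reasoning_content"] = msg.reasoning_content
    if getattr(msg, "tool_calls", None):
        out["tool_calls"] = msg.tool_calls
    return out

msg = SimpleNamespace(
    content=None,
    reasoning_content="The user is asking for addition...",
    tool_calls=[{"id": "call_0"}],
)
flat = assistant_to_dict(msg)
```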

The following script demonstrates this behavior and validates that the
reasoning_content from the first turn is included in subsequent requests:

"""Test that reasoning_content is passed in multi-turn conversations.

This test proves that reasoning_content from previous turns is included
in subsequent requests by examining the raw prompt sent to the model.
"""

from fireworks import Fireworks
from dotenv import load_dotenv

load_dotenv()

client = Fireworks()

MODEL = "accounts/fireworks/models/kimi-k2-thinking"
# MODEL = "accounts/fireworks/models/minimax-m2"

# Define tools to enable interleaved thinking
tools = [
    {
        "type": "function",
        "function": {
            "name": "calculator",
            "description": "Perform basic arithmetic operations",
            "parameters": {
                "type": "object",
                "properties": {
                    "operation": {
                        "type": "string",
                        "enum": ["add", "subtract", "multiply", "divide"],
                    },
                    "a": {"type": "number"},
                    "b": {"type": "number"},
                },
                "required": ["operation", "a", "b"],
            },
        },
    }
]


def print_header(title: str, char: str = "═", width: int = 60):
    """Print a formatted section header."""
    print(f"\n{char * width}")
    print(f"  {title}")
    print(f"{char * width}")


def print_field(label: str, value: str, indent: int = 2):
    """Print a labeled field with optional indentation."""
    prefix = " " * indent
    print(f"{prefix}{label}: {value}")


def print_multiline(label: str, content: str, max_preview: int = 200, indent: int = 2):
    """Print multiline content with a label and optional truncation."""
    prefix = " " * indent
    print(f"{prefix}{label}:")
    preview = content[:max_preview] + "..." if len(content) > max_preview else content
    for line in preview.split("\n"):
        print(f"{prefix}{line}")


# First turn - get a response with reasoning_content
print_header("FIRST TURN", "═")
first_response = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "What is 15 + 27?",
        }
    ],
    model=MODEL,
    tools=tools,
)

print_field("📝 Content", first_response.choices[0].message.content or "(none)")

reasoning = first_response.choices[0].message.reasoning_content
print_multiline("💭 Reasoning", reasoning)

# Print tool call (verified) from the first response
tool_calls = first_response.choices[0].message.tool_calls
assert tool_calls, "No tool calls in first response!"
print(f"\n  🔧 Tool Calls ({len(tool_calls)}):")
for i, tc in enumerate(tool_calls, 1):
    print(f"    [{i}] id={tc.id}")
    print(f"        function={tc.function.name}")
    print(f"        arguments={tc.function.arguments}")
tool_call_id = first_response.choices[0].message.tool_calls[0].id

# Verify we got reasoning_content
assert reasoning and len(reasoning) > 0, "No reasoning_content in first response!"
print("\n  ✓ First response has reasoning_content")

# Second turn - include the first assistant message
print_header("SECOND TURN", "═")
second_response = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "What is 15 + 27?",
        },
        first_response.choices[0].message,  # Includes reasoning_content
        {"role": "tool", "content": "42", "tool_call_id": tool_call_id},
    ],
    model=MODEL,
    tools=tools,
    raw_output=True,
)

print_field("📝 Answer", second_response.choices[0].message.content or "(none)")

# Extract and display the raw prompt that was sent to the model
raw_prompt = second_response.choices[0].raw_output.prompt_fragments[0]
print_header("RAW PROMPT SENT TO MODEL", "─")
print(raw_prompt)

# Check if reasoning_content from first turn is in the raw prompt
has_reasoning_content = reasoning[:50] in raw_prompt

print_header("RESULT", "═")
if has_reasoning_content:
    print("  ✅ SUCCESS: reasoning_content IS included in subsequent requests!")
else:
    print("  ❌ FAILED: reasoning_content not found in raw prompt")
print()

Below is the expected output:

```txt Kimi-K2-Thinking (output)
════════════════════════════════════════════════════════════
  FIRST TURN
════════════════════════════════════════════════════════════
  📝 Content: (none)
  💭 Reasoning:
    │ The user is asking for a simple addition calculation: 15 + 27.
    │
    │ I should use the calculator function with:
    │ - operation: "add"
    │ - a: 15
    │ - b: 27
🔧 Tool Calls (1):
  [1] id=functions.calculator:0
      function=calculator
      arguments={"operation": "add", "a": 15, "b": 27}

✓ First response has reasoning_content

════════════════════════════════════════════════════════════
SECOND TURN
════════════════════════════════════════════════════════════
📝 Answer: 15 + 27 = 42

────────────────────────────────────────────────────────────
RAW PROMPT SENT TO MODEL
────────────────────────────────────────────────────────────
<|im_system|>tool_declare<|im_middle|>[{"function":{"description":"Perform basic arithmetic operations","name":"calculator","parameters":{"properties":{"a":{"type":"number"},"b":{"type":"number"},"operation":{"enum":["add","subtract","multiply","divide"],"type":"string"}},"required":["operation","a","b"],"type":"object"}},"type":"function"}]<|im_end|><|im_user|>user<|im_middle|>What is 15 + 27?<|im_end|><|im_assistant|>assistant<|im_middle|>The user is asking for a simple addition calculation: 15 + 27.

I should use the calculator function with:

  • operation: "add"
  • a: 15
  • b: 27<|tool_calls_section_begin|><|tool_call_begin|>functions.calculator:0<|tool_call_argument_begin|>{"operation": "add", "a": 15, "b": 27}<|tool_call_end|><|tool_calls_section_end|><|im_end|><|im_system|>tool<|im_middle|>## Return of None
    42<|im_end|><|im_assistant|>assistant<|im_middle|>

════════════════════════════════════════════════════════════
RESULT
════════════════════════════════════════════════════════════
✅ SUCCESS: reasoning_content IS included in subsequent requests!
```

```txt Minimax-M2 (output)
════════════════════════════════════════════════════════════
  FIRST TURN
════════════════════════════════════════════════════════════
  📝 Content: (none)
  💭 Reasoning:
    │ 
    │ Okay, the user is asking a simple arithmetic question: "What is 15 + 27?". This is a basic addition operation. I have a calculator tool available that can perform arithmetic operations. This is the p...

  🔧 Tool Calls (1):
    [1] id=chatcmpl-tool-249757d9ac8f4ca9afbdb580ced40ae6
        function=calculator
        arguments={"operation": "add", "a": 15, "b": 27}

  ✓ First response has reasoning_content

════════════════════════════════════════════════════════════
  SECOND TURN
════════════════════════════════════════════════════════════
  📝 Answer: 42

────────────────────────────────────────────────────────────
  RAW PROMPT SENT TO MODEL
────────────────────────────────────────────────────────────
]~b]system
You are a helpful assistant.

# Tools
You may call one or more tools to assist with the user query.
Here are the tools available in JSONSchema format:

<tools>
<tool>{"name": "calculator", "description": "Perform basic arithmetic operations", "parameters": {"type": "object", "properties": {"operation": {"type": "string", "enum": ["add", "subtract", "multiply", "divide"]}, "a": {"type": "number"}, "b": {"type": "number"}}, "required": ["operation", "a", "b"]}}</tool>
</tools>

When making tool calls, use XML format to invoke tools and pass parameters:

<minimax:tool_call>
<invoke name="tool-name-1">
<parameter name="param-key-1">param-value-1</parameter>
<parameter name="param-key-2">param-value-2</parameter>
...
</invoke>
</minimax:tool_call>[e~[
]~b]user
What is 15 + 27?[e~[
]~b]ai
<think>

Okay, the user is asking a simple arithmetic question: "What is 15 + 27?". This is a basic addition operation. I have a calculator tool available that can perform arithmetic operations. This is the perfect tool for this task.

Looking at the calculator tool parameters, I need to provide:
1. The operation: In this case, it would be "add" since we're adding two numbers
2. The first number: 15
3. The second number: 27

The calculator tool will handle the actual computation. Addition is one of the most basic arithmetic operations, so I'm confident the calculator tool can handle this correctly. I don't need to do the calculation manually since we have a dedicated tool for this.

The user might be expecting me to simply state the answer, but using the calculator tool will ensure accuracy and follows the guidelines of using available tools when appropriate. The tool will take the two numbers (15 and 27) and add them together.

So I'll make a tool call to the calculator with the operation "add", a=15, and b=27. The calculator will perform the addition and return the result, which I can then provide to the user as the final answer.

This is a straightforward request that aligns perfectly with the calculator tool's functionality. No additional clarification is needed from the user, and the operation doesn't require any special handling.

</think>


<minimax:tool_call>
<invoke name="calculator">
<parameter name="operation">add</parameter>
<parameter name="a">15</parameter>
<parameter name="b">27</parameter>
</invoke>
</minimax:tool_call>[e~[
]~b]tool
<response>42</response>[e~[
]~b]ai


════════════════════════════════════════════════════════════
  RESULT
════════════════════════════════════════════════════════════
  ✅ SUCCESS: reasoning_content IS included in subsequent requests!
```



roomote bot commented Dec 14, 2025


Fixed reasoning support for DeepSeek V3.1/V3.2 on Fireworks to use reasoning_effort parameter (Fireworks API style) instead of thinking parameter (Anthropic style). All tests passed.


- Changed from supportsReasoningBinary to supportsReasoningEffort for
  DeepSeek V3.1 and V3.2 models on Fireworks
- Added default reasoningEffort: "medium" for both models
- Override createStream in FireworksHandler to use getModelParams
  which passes reasoning_effort parameter (Fireworks API style) instead
  of thinking parameter (Anthropic style)
- Added tests to verify reasoning_effort is passed correctly
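The distinction the commit describes can be pictured as a tiny dispatch: Fireworks-style models get a reasoning_effort field in the request body, while Anthropic-style models would get a thinking object instead. All names below are illustrative sketches, not the actual handler code.

```python
def reasoning_params(style: str, effort: str = "medium", budget: int = 8192) -> dict:
    """Illustrative sketch of the two reasoning parameter styles."""
    if style == "fireworks":
        # OpenAI-compatible field used by the Fireworks API
        return {"reasoning_effort": effort}
    if style == "anthropic":
        # Anthropic-style extended thinking block
        return {"thinking": {"type": "enabled", "budget_tokens": budget}}
    return {}

params = reasoning_params("fireworks")
```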
@kavehsfv

@roomote

It works now with thinking mode thanks.
But now I see some errors that seem related to tool calling, and I got the error "Unknown API error. Please contact Roo Code support." Here are some prompts that triggered the errors:

We need to examine DA3's cross-view consistency mechanisms more closely. Let's read the relevant parts of the paper again.
<read_file>

Paper/sections/3-approach.tex

</read_file>

API Error
Docs

Unknown API error. Please contact Roo Code support.

Let's update the todo list to mark step1 as completed and move to step2.

<update_todo_list>

[x] Analyze theoretical consistency of 3D points across frames
[-] Examine DA3's cross-view consistency mechanisms
[ ] Identify practical error sources
[ ] Propose validation approach
[ ] Create implementation plan
</update_todo_list>

API Error
Docs

Unknown API error. Please contact Roo Code support.

We should also note that DA3 is designed for multi-view consistency. The paper mentions that it uses cross-view self-attention to ensure spatially consistent geometry across frames.

Now, let's update the todo list to mark step1 as completed and move to step2.

<update_todo_list>

[x] Analyze theoretical consistency of 3D points across frames
[-] Examine DA3's cross-view consistency mechanisms
[ ] Identify practical error sources
[ ] Propose validation approach
[ ] Create implementation plan
</update_todo_list>

API Error
Docs

Unknown API error. Please contact Roo Code support.


roomote bot commented Dec 14, 2025


Added defaultToolProtocol: "native" to DeepSeek V3.1 and V3.2 models to use OpenAI-style function calling by default, which should improve tool calling reliability when reasoning mode is enabled. All local checks passed.


… tool calling reliability

When reasoning mode is enabled, DeepSeek models may not follow XML tool format correctly.
Setting defaultToolProtocol to native uses OpenAI-style function calling instead,
which is more reliable than XML parsing when reasoning_effort is enabled.
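The failure mode is easy to picture: with the XML protocol the client must parse tool calls out of free-form text, which breaks when reasoning output surrounds or mangles the tags, whereas native calls arrive as structured tool_calls objects. A rough sketch of the fragile XML path follows; the tag names follow the Roo Code examples quoted earlier, and the parser itself is hypothetical.

```python
import re

def parse_xml_tool_call(text: str):
    """Naively extract a <tool_name>...</tool_name>-style call from model text.

    Returns (tool_name, body) or None. Deliberately brittle, to illustrate
    why structured native tool calls are more reliable than XML parsing.
    """
    match = re.search(r"<(\w+)>\s*(.*?)\s*</\1>", text, flags=re.DOTALL)
    return (match.group(1), match.group(2)) if match else None

ok = parse_xml_tool_call("<read_file>Paper/sections/3-approach.tex</read_file>")
# Reasoning text that leaves a tag unclosed defeats the parser entirely:
broken = parse_xml_tool_call("Let me <think about <read_file> first")
```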
@kavehsfv

@roomote
Great, DeepSeek V3.1/V3.2 both work ideally with thinking and tool calling. Nice job!

Could you please add Kimi K2 Thinking to Fireworks AI as well? It should be compatible with the thinking and native tool calling you just implemented.
The model ID on Fireworks AI is:
accounts/fireworks/models/kimi-k2-thinking


roomote bot commented Dec 14, 2025


Added Kimi K2 Thinking model (accounts/fireworks/models/kimi-k2-thinking) to Fireworks AI provider with thinking mode and native tool calling support. All tests pass.


@kavehsfv

@roomote
All commits related to DeepSeek V3.1/V3.2 and Kimi K2 Thinking work great.
Could you please merge them into the main repo?


roomote bot commented Dec 14, 2025


Marked the PR as ready for review. The reviewer found no issues, so maintainers can now review and merge this PR.

@roomote roomote bot marked this pull request as ready for review December 14, 2025 02:18
@roomote roomote bot requested review from cte, jr and mrubens as code owners December 14, 2025 02:18
@dosubot dosubot bot added size:L This PR changes 100-499 lines, ignoring generated files. Enhancement New feature or request labels Dec 14, 2025

roomote bot commented Dec 14, 2025


Review complete. No issues found.

This PR adds DeepSeek V3.2 and Kimi K2 Thinking models to the Fireworks AI provider with proper reasoning support via the reasoning_effort parameter. The implementation correctly integrates with the existing reasoning infrastructure and follows established patterns.


@kavehsfv

Hi @hannesrudolph , @mrubens, @jr , @cte
Could you please merge this commit into the main repository? It adds Thinking Mode support for DeepSeek V3.1 and V3.2 on the Fireworks AI provider.

I’ve compiled and tested the changes locally, and everything is working as expected.
Thank you for your review!

@kavehsfv

@roomote

Could you add an option to the DeepSeek AI provider, or change it from the OpenAI SDK to the Anthropic SDK, as DeepSeek has added this capability. The documentation can be found here:
https://api-docs.deepseek.com/guides/anthropic_api

# Anthropic API

To meet the demand for using the Anthropic API ecosystem, our API has added support for the Anthropic API format. With simple configuration, you can integrate the capabilities of DeepSeek into the Anthropic API ecosystem.

## Use DeepSeek in Claude Code

### Install Claude Code

```bash
npm install -g @anthropic-ai/claude-code
```

Config Environment Variables

export ANTHROPIC_BASE_URL=https://api.deepseek.com/anthropic
export ANTHROPIC_AUTH_TOKEN=${YOUR_API_KEY}
export API_TIMEOUT_MS=600000
export ANTHROPIC_MODEL=deepseek-chat
export ANTHROPIC_SMALL_FAST_MODEL=deepseek-chat
export CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1

Note: The API_TIMEOUT_MS parameter is configured to prevent excessively long outputs that could cause the Claude Code client to time out. Here, we set the timeout duration to 10 minutes.

Enter the Project Directory, and Execute Claude Code

cd my-project
claude

Invoke DeepSeek Model via Anthropic API

Install Anthropic SDK

pip install anthropic

Config Environment Variables

export ANTHROPIC_BASE_URL=https://api.deepseek.com/anthropic
export ANTHROPIC_API_KEY=${DEEPSEEK_API_KEY}

Invoke the API

import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="deepseek-chat",
    max_tokens=1000,
    system="You are a helpful assistant.",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Hi, how are you?"
                }
            ]
        }
    ]
)
print(message.content)

Note: When you pass an unsupported model name to DeepSeek's Anthropic API, the API backend will automatically map it to the deepseek-chat model.

Anthropic API Compatibility Details

HTTP Header

| Field | Support Status |
|---|---|
| anthropic-beta | Ignored |
| anthropic-version | Ignored |
| x-api-key | Fully Supported |

Simple Fields

| Field | Support Status |
|---|---|
| model | Use DeepSeek Model Instead |
| max_tokens | Fully Supported |
| container | Ignored |
| mcp_servers | Ignored |
| metadata | Ignored |
| service_tier | Ignored |
| stop_sequences | Fully Supported |
| stream | Fully Supported |
| system | Fully Supported |
| temperature | Fully Supported (range [0.0 ~ 2.0]) |
| thinking | Supported (budget_tokens is ignored) |
| top_k | Ignored |
| top_p | Fully Supported |
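Per the table, thinking is accepted but its budget_tokens is ignored, so a request built for Claude can be pointed at DeepSeek unchanged. A sketch of such request kwargs follows; no network call is made, and the token values are illustrative.

```python
# Request kwargs as they would be passed to anthropic.Anthropic().messages.create
# when ANTHROPIC_BASE_URL points at https://api.deepseek.com/anthropic.
request = {
    "model": "deepseek-chat",
    "max_tokens": 1000,
    # budget_tokens is accepted but ignored by DeepSeek's Anthropic endpoint
    "thinking": {"type": "enabled", "budget_tokens": 4096},
    "messages": [{"role": "user", "content": "Hi, how are you?"}],
}

# Fields DeepSeek ignores per the compatibility table above
ignored = {"container", "mcp_servers", "metadata", "service_tier", "top_k"}
assert ignored.isdisjoint(request)
```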

Tool Fields

tools

| Field | Support Status |
|---|---|
| name | Fully Supported |
| input_schema | Fully Supported |
| description | Fully Supported |
| cache_control | Ignored |

tool_choice

| Value | Support Status |
|---|---|
| none | Fully Supported |
| auto | Supported |


roomote bot commented Dec 15, 2025


Thank you for the feature request! However, adding Anthropic SDK support for the DeepSeek provider is out of scope for this PR.

This PR specifically adds DeepSeek V3.2 to the Fireworks AI provider. The request to add Anthropic API format support to the DeepSeek provider is a separate, substantial feature that would require:

  1. New provider settings for API format selection (OpenAI vs Anthropic)
  2. A new handler implementation using the Anthropic SDK
  3. UI changes in the settings panel
  4. Comprehensive testing

I recommend creating a new GitHub issue to track this feature request separately. That way it can be properly scoped, discussed, and prioritized.

The current PR has no issues to fix - the reviewer noted "Review complete. No issues found."

@github-project-automation github-project-automation bot moved this from New to Done in Roo Code Roadmap Jan 7, 2026
@github-project-automation github-project-automation bot moved this from Triage to Done in Roo Code Roadmap Jan 7, 2026
@hannesrudolph
Collaborator

Out of date.


Labels

Enhancement New feature or request Issue/PR - Triage New issue. Needs quick review to confirm validity and assign labels. size:L This PR changes 100-499 lines, ignoring generated files.

Projects

Status: Done

Development

Successfully merging this pull request may close these issues.

[BUG] Thinking mode for DeepSeek V3.1 on Fireworks AI provider is not working.

4 participants