Summary

Adds Quorum MCP - a Multi-LLM consensus system for improved AI accuracy and reliability through deliberative synthesis.

Server Details

Description

Quorum MCP orchestrates multiple AI providers (Claude, GPT-4, Gemini, Ollama, Cohere, Mistral, Novita) through multi-round deliberation to produce consensus-based responses with higher confidence than single-model outputs. This approach is valuable for critical decisions, complex questions requiring multiple perspectives, and validating AI outputs against each other.

Key Features

  • Multi-provider consensus: Query 7+ AI providers simultaneously
  • Three operational modes:
    • Quick Consensus: Single round parallel queries (fastest)
    • Full Deliberation: Three-round process with cross-review (most thorough)
    • Devil's Advocate: One provider critiques, others respond (explores edge cases)
  • Confidence scoring: 0.0-1.0 confidence scores based on consensus strength
  • Cost tracking: Per-session USD cost tracking across all providers
  • Session management: Retrieve past deliberations by session ID
  • Local LLM support: Ollama integration for cost-free experimentation
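The Quick Consensus mode described above can be sketched as a parallel fan-out with agreement-based confidence. This is a hypothetical illustration, not the server's actual implementation: the provider functions are stubs standing in for real API clients, and the confidence formula (fraction of agreeing providers) is an assumption about how "consensus strength" might be scored.

```python
# Hypothetical sketch of Quick Consensus: one round of parallel queries,
# then score agreement. Provider calls are stubs; real Anthropic/OpenAI/
# Ollama clients would replace them.
import asyncio
from collections import Counter

async def ask_claude(query: str) -> str:
    return "yes"  # stub for an Anthropic API call

async def ask_gpt4(query: str) -> str:
    return "yes"  # stub for an OpenAI API call

async def ask_ollama(query: str) -> str:
    return "no"   # stub for a local Ollama call

async def quick_consensus(query: str) -> dict:
    providers = [ask_claude, ask_gpt4, ask_ollama]
    answers = await asyncio.gather(*(p(query) for p in providers))
    consensus, votes = Counter(answers).most_common(1)[0]
    return {
        "consensus": consensus,
        # Assumed scoring: fraction of providers that agree (0.0-1.0).
        "confidence": votes / len(answers),
        "providers_used": len(answers),
    }

result = asyncio.run(quick_consensus("Is the sky blue?"))
```

Full Deliberation would wrap three such rounds with cross-review between them, and Devil's Advocate would route one provider's critique back to the others.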

Tools Provided

  1. q_in: Submit a query to the quorum for consensus-based response

    • Parameters: query (required), context (optional), mode (optional: quick_consensus, full_deliberation, devils_advocate)
    • Returns: session_id, status, consensus, confidence score, cost, providers_used
  2. q_out: Retrieve consensus results from a quorum session

    • Parameters: session_id (required)
    • Returns: full session details including consensus, confidence, metadata, token usage, cost breakdown
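Based on the parameters and return fields listed above, the two tool calls might look like the following. All field values here are illustrative placeholders, not real session data:

```python
# Hypothetical request/response shapes for q_in and q_out, derived from
# the parameter and return-field lists above. Values are illustrative.
q_in_request = {
    "query": "Should we shard the users table?",  # required
    "context": "PostgreSQL 16, 40M rows",         # optional
    "mode": "full_deliberation",                  # optional
}

q_in_response = {
    "session_id": "sess-1234",          # placeholder ID
    "status": "complete",
    "consensus": "Shard by tenant once growth demands it.",
    "confidence": 0.82,                 # 0.0-1.0, consensus strength
    "cost": 0.07,                       # USD across all providers
    "providers_used": ["claude", "gpt-4", "ollama"],
}

# q_out takes only the session_id and returns the full session record,
# including metadata, token usage, and the cost breakdown.
q_out_request = {"session_id": q_in_response["session_id"]}
```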

Configuration

Requires at least one configured provider (a single API key, or Ollama for keyless local models):

  • ANTHROPIC_API_KEY (Claude)
  • OPENAI_API_KEY (GPT-4)
  • GOOGLE_API_KEY (Gemini)
  • COHERE_API_KEY (Cohere)
  • MISTRAL_API_KEY (Mistral)
  • NOVITA_API_KEY (Novita)
  • OLLAMA_ENABLE (default: true, local LLM - no API key required)
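A minimal launch configuration, as a sketch: the image name `mcp/quorum` is an assumption based on the mcp/ namespace convention noted in the checklist, and any unset provider keys are simply skipped.

```shell
# Hypothetical stdio launch with two providers plus local Ollama enabled.
docker run -i --rm \
  -e ANTHROPIC_API_KEY="$ANTHROPIC_API_KEY" \
  -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  -e OLLAMA_ENABLE=true \
  mcp/quorum
```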

Testing

  • Tested locally with Docker build
  • Verified stdio transport compatibility
  • Tools are discoverable without configuration
  • Documentation complete and comprehensive
  • All validation checks passed

Checklist

  • License is MIT or Apache-2.0 (MIT)
  • Dockerfile present and builds successfully
  • server.yaml follows required schema
  • tools.json is complete and valid JSON
  • readme.md is comprehensive with usage examples
  • Server supports stdio transport (FastMCP default)
  • No hardcoded secrets in configuration
  • Category is valid (ai)
  • Tags are descriptive (multi-llm, consensus, ai, llm-orchestration, deliberation)
  • Image name follows mcp/ namespace convention

Use Cases

  • Critical decision-making requiring high confidence
  • Technical architecture decisions
  • Complex problem analysis
  • AI output validation
  • Code review and security analysis
  • Multi-perspective research questions

Performance

  • Quick Consensus: ~3-5 seconds, ~$0.01-0.03
  • Full Deliberation: ~15-30 seconds, ~$0.05-0.10
  • Devil's Advocate: ~10-15 seconds, ~$0.03-0.05


🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
@aj-geddes aj-geddes requested a review from a team as a code owner December 4, 2025 12:15