A powerful TypeScript backend server that provides AI-powered code review services for the CodeRabbit VSCode extension. The server analyzes code changes, generates intelligent comments, and produces comprehensive review summaries using advanced AI models.
DISCLAIMER: This project is NOT affiliated with, endorsed by, or connected to CodeRabbit AI or any of its services.
- AI-Powered Code Reviews: Intelligent analysis using Google Gemini and OpenAI GPT models
- Real-time Communication: WebSocket-based real-time updates via tRPC
- Multi-Provider AI Support: Flexible AI provider configuration (Google, OpenAI)
- Comprehensive Analysis:
  - Code review comments with categorization (issues, suggestions, nitpicks)
  - PR title generation
  - Review summaries and objectives
  - Code walkthrough generation
- Robust Infrastructure:
  - Rate limiting and request validation
  - File size and count limits
  - Comprehensive logging and monitoring
  - Retry mechanisms with exponential backoff
  - SSL support for secure connections
- Production Ready: Docker support, health checks, and metrics endpoints
- Node.js 20+
- npm or yarn
- AI Provider API Key (Google Generative AI, OpenAI)
- Clone the repository:

  ```bash
  git clone https://github.com/creding/open-coderabbit-server
  cd open-coderabbit-server
  ```

- Install dependencies:

  ```bash
  npm install
  ```
- Configure environment variables:

  ```bash
  cp .env.example .env
  ```

  Edit `.env` with your configuration:

  ```env
  # Server Configuration
  PORT=5353
  HOST=localhost
  SSL=false

  # AI Provider Configuration
  AI_PROVIDER=google          # google, openai
  GOOGLE_GENERATIVE_AI_API_KEY=your_google_api_key
  OPENAI_API_KEY=your_openai_api_key

  # AI Models
  GOOGLE_AI_MODEL=models/gemini-2.5-flash
  OPENAI_AI_MODEL=gpt-4o-mini

  # File Validation
  MAX_FILE_SIZE=10485760      # 10MB per file
  MAX_FILES_PER_REVIEW=50     # Max files per review
  MAX_TOTAL_SIZE=52428800     # 50MB total

  # Rate Limiting
  RATE_LIMIT_REQUESTS=10      # Requests per window
  RATE_LIMIT_WINDOW_MS=60000  # 1 minute window

  # Performance
  REVIEW_TIMEOUT_MS=300000    # 5 minute timeout

  # Logging
  LOG_LEVEL=info              # error, warn, info, debug
  LOG_TO_FILE=false
  ```
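Environment configuration and validation is handled by `constants.ts` (see the architecture overview further down). As a rough illustration of how such validation can work, here is a minimal Zod sketch; the field names and defaults mirror the sample `.env` above, but this is not the project's actual schema:

```typescript
import { z } from "zod";

// Hypothetical env schema; the real constants.ts may differ in names and defaults.
const envSchema = z.object({
  PORT: z.coerce.number().int().default(5353),
  HOST: z.string().default("localhost"),
  AI_PROVIDER: z.enum(["google", "openai"]).default("google"),
  GOOGLE_GENERATIVE_AI_API_KEY: z.string().optional(),
  OPENAI_API_KEY: z.string().optional(),
  MAX_FILE_SIZE: z.coerce.number().int().default(10_485_760),
  RATE_LIMIT_REQUESTS: z.coerce.number().int().default(10),
  RATE_LIMIT_WINDOW_MS: z.coerce.number().int().default(60_000),
});

// Fails fast at startup if values are missing or malformed.
export const env = envSchema.parse(process.env);
```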
Run the server in development mode:

```bash
npm run dev
```

Or build and run for production:

```bash
npm run build
npm start
```

To use your self-hosted Open CodeRabbit server with the CodeRabbit VSCode extension, follow these steps:
- CodeRabbit VSCode Extension: Install the CodeRabbit extension (version 0.12.1 or higher)
- Running Server: Ensure your Open CodeRabbit server is running and accessible
- Network Access: Make sure the server URL is reachable and WebSocket connections are allowed
- Open VSCode and navigate to the CodeRabbit extension
- Log out if previously logged in to the CodeRabbit cloud service
- Click the "Self hosting CodeRabbit?" button (located below the "Use CodeRabbit for free" button)
- Enter your server URL when prompted:

  ```
  http://localhost:5353
  ```

  Or, if running on a different host:

  ```
  http://your-server-ip:5353
  ```
- Select your Git provider:
  - GitHub
  - GitHub Enterprise
  - GitLab
  - Self-Hosted GitLab
- Provide authentication (if using GitHub/GitHub Enterprise):
  - Enter your GitHub Personal Access Token
  - Required permissions: `repo`, `read:user`
Once connected, you should see:
- ✅ Connection status indicator in the extension
- ✅ Access to code review features in VSCode
- ✅ Real-time AI-powered code analysis
Extension can't connect to server:
```bash
# Check if server is running
docker ps
# or
curl http://localhost:5353/health
```

WebSocket connection failed:
- Ensure firewall allows connections on port 5353
- Check that the server is binding to `0.0.0.0` (not just `localhost`)
- Verify no proxy is blocking WebSocket connections
Server URL not reachable:
- Test server accessibility: `curl http://your-server:5353/health`
- Check network connectivity between VSCode and server
- Ensure Docker port mapping is correct: `0.0.0.0:5353->5353/tcp`
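If the HTTP health check passes but the extension still cannot connect, you can verify the WebSocket handshake directly. The sketch below is illustrative and assumes the `ws` npm package is available; it targets the `/trpc` endpoint documented later in this README:

```typescript
import WebSocket from "ws"; // assumes the `ws` package is installed

// Illustrative probe: confirms the WebSocket handshake to the tRPC endpoint succeeds.
const url = process.argv[2] ?? "ws://localhost:5353/trpc";
const socket = new WebSocket(url);

socket.on("open", () => {
  console.log(`WebSocket handshake succeeded for ${url}`);
  socket.close();
});

socket.on("error", (err) => {
  console.error(`WebSocket connection failed for ${url}: ${err.message}`);
});
```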
The Open CodeRabbit server is fully containerized and production-ready with Docker support.
```bash
# Start development environment with hot reload
./scripts/docker.sh dev

# Start production environment
./scripts/docker.sh start

# Stop all containers
./scripts/docker.sh stop

# View logs
./scripts/docker.sh logs

# Check health
./scripts/docker.sh health
```

Build the image:

```bash
docker build -t open-coderabbit-server .
```

Run the production container:

```bash
docker run -d --name open-coderabbit -p 5353:5353 --env-file .env open-coderabbit-server
```

Using Docker Compose:
```bash
# Production
docker-compose up -d

# Development with hot reload
docker-compose -f docker-compose.dev.yml up -d
```

- Builder stage: Compiles TypeScript and installs all dependencies
- Production stage: Minimal runtime image with only production dependencies
- Base image: `node:20-alpine` for security and size optimization
The container uses the same environment variables as the local setup. Ensure your .env file is properly configured:
```bash
# Copy example environment file
cp .env.example .env

# Edit with your API keys and configuration
```

Both Docker Compose configurations include health checks:

- Endpoint: `GET /health`
- Interval: 30 seconds
- Timeout: 10 seconds
- Retries: 3
- Start period: 40 seconds
Production containers are configured with log rotation:
- Max file size: 10MB
- Max files: 3
- Format: JSON
| Feature | Development | Production |
|---|---|---|
| File | `docker-compose.dev.yml` | `docker-compose.yml` |
| Hot Reload | ✅ Volume mounted | ❌ Built-in |
| Build Target | `builder` stage | Final stage |
| Dependencies | All (dev + prod) | Production only |
| Restart Policy | `unless-stopped` | `unless-stopped` |
| Health Checks | ✅ Enabled | ✅ Enabled |
The `scripts/docker.sh` script provides convenient commands:

```
./scripts/docker.sh [COMMAND]

Commands:
  build     Build the Docker image
  dev       Start development environment with hot reload
  start     Start production container
  stop      Stop running containers
  restart   Restart containers
  logs      Show container logs
  shell     Open shell in running container
  clean     Remove containers and images
  health    Check container health
  help      Show this help message
```

Container won't start:
```bash
# Check logs
./scripts/docker.sh logs

# Or manually
docker-compose logs -f
```

Health check failing:
```bash
# Test health endpoint
curl http://localhost:5353/health

# Check container status
docker ps
```

Port already in use:
```bash
# Find process using port 5353
lsof -i :5353
```

Or change the external port mapping in docker-compose.yml:

```yaml
ports:
  - "5354:5353"  # Use different external port
```

`GET /health`
Returns server health status:
- Healthy status: Returns plain text `"OK"` with HTTP 200 (VSCode extension compatible)
- Degraded/Unhealthy status: Returns detailed JSON with metrics and diagnostics
Example response:

```bash
# Healthy server
curl http://localhost:5353/health
# Response: "OK" (Content-Type: text/plain)
```
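For a scripted check, here is a minimal probe using Node 20's built-in `fetch`. It treats a plain-text 200 response as healthy and anything else as the documented JSON diagnostics; the helper name and host are illustrative:

```typescript
// Illustrative health probe; host and port are examples.
async function checkHealth(baseUrl = "http://localhost:5353"): Promise<void> {
  const res = await fetch(`${baseUrl}/health`);
  const contentType = res.headers.get("content-type") ?? "";

  if (res.ok && contentType.includes("text/plain")) {
    console.log("healthy:", await res.text()); // expected body: "OK"
  } else {
    // Degraded or unhealthy responses are documented as detailed JSON diagnostics.
    console.warn("degraded or unhealthy:", await res.json());
  }
}

checkHealth().catch((err) => console.error("health check failed:", err));
```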
`GET /metrics`

Returns performance metrics and monitoring data.
`WS /trpc`

Real-time communication for code review requests and updates.
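A client connects to this endpoint over a WebSocket link. The sketch below shows a tRPC v10-style setup; the `AppRouter` import path is illustrative, and the actual procedure and subscription names are defined by the server's `router.ts`:

```typescript
import { createTRPCProxyClient, createWSClient, wsLink } from "@trpc/client";
// The AppRouter type and import path are illustrative; the real router lives in router.ts.
import type { AppRouter } from "./router";

// tRPC v10-style WebSocket client; adjust the URL for your deployment.
const wsClient = createWSClient({ url: "ws://localhost:5353/trpc" });

export const trpc = createTRPCProxyClient<AppRouter>({
  links: [wsLink({ client: wsClient })],
});

// Queries, mutations, and subscriptions are then called via `trpc.<procedure>`,
// where the procedure names come from router.ts.
```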
- `server.ts` - Main HTTP/WebSocket server with SSL support
- `router.ts` - tRPC router handling review requests and subscriptions
- `constants.ts` - Environment configuration and validation
- `services/reviewService.ts` - Core review orchestration logic
- `services/ai/` - AI integration layer
  - `index.ts` - Unified AI provider interface
  - `prompts.ts` - AI prompt templates
  - `schemas.ts` - Zod validation schemas
  - `types.ts` - TypeScript type definitions
- `utils/config.ts` - Configuration management
- `utils/fileValidator.ts` - File validation and limits
- `utils/logger.ts` - Structured logging
- `utils/monitor.ts` - Performance monitoring and metrics
- `utils/rateLimiter.ts` - Request rate limiting
- `utils/retry.ts` - Retry logic with exponential backoff (sketched below)
- `types.ts` - Shared TypeScript definitions and Zod schemas
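As a rough illustration of the retry-with-exponential-backoff behavior provided by `utils/retry.ts`, here is a minimal, self-contained sketch; the helper name and defaults are illustrative, not the project's actual API:

```typescript
// Illustrative retry helper with exponential backoff (not the project's utils/retry.ts).
export async function withRetry<T>(
  fn: () => Promise<T>,
  { retries = 3, baseDelayMs = 500 } = {},
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt === retries) break;
      const delay = baseDelayMs * 2 ** attempt; // 500ms, 1s, 2s, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```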
The project includes comprehensive test coverage with Vitest:
```bash
# Run all tests
npm test

# Run tests in watch mode
npm run test:watch

# Run tests with coverage
npm run test:coverage
```

- `tests/services/` - Service layer tests
- `tests/utils/` - Utility function tests
- `tests/fixtures/` - Shared test data
- `tests/helpers/` - Test utilities and helpers
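Tests use the standard Vitest structure. The example below is illustrative only; it exercises a local stand-in for a retry helper rather than the project's real utilities:

```typescript
import { describe, expect, it, vi } from "vitest";

// Illustrative stand-in, not the project's utils/retry.ts.
async function withRetry<T>(fn: () => Promise<T>, retries = 3): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
    }
  }
  throw lastError;
}

describe("withRetry", () => {
  it("retries until the wrapped function succeeds", async () => {
    const fn = vi
      .fn()
      .mockRejectedValueOnce(new Error("transient failure"))
      .mockResolvedValue("ok");

    await expect(withRetry(fn)).resolves.toBe("ok");
    expect(fn).toHaveBeenCalledTimes(2);
  });
});
```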
```bash
# Lint code
npm run lint

# Format code
npm run format

# Type checking
npx tsc --noEmit
```

Project structure:

```
src/
├── constants.ts         # Environment configuration
├── router.ts            # tRPC routes
├── server.ts            # Main server
├── types.ts             # Shared types
├── services/
│   ├── ai/              # AI provider integration
│   └── reviewService.ts # Core review logic
└── utils/               # Utility functions

tests/
├── fixtures/            # Test data
├── helpers/             # Test utilities
├── services/            # Service tests
└── utils/               # Utility tests
```
| Variable | Description | Default |
|---|---|---|
| `PORT` | Server port | `5353` |
| `HOST` | Server host | `localhost` |
| `SSL` | Enable SSL | `false` |
| `AI_PROVIDER` | AI provider (google/openai) | `google` |
| `GOOGLE_GENERATIVE_AI_API_KEY` | Google AI API key | Required |
| `OPENAI_API_KEY` | OpenAI API key | Optional |
| `MAX_FILE_SIZE` | Max file size in bytes | `10485760` (10MB) |
| `MAX_FILES_PER_REVIEW` | Max files per review | `50` |
| `RATE_LIMIT_REQUESTS` | Rate limit requests per window | `10` |
| `RATE_LIMIT_WINDOW_MS` | Rate limit window in ms | `60000` (1 min) |
| `REVIEW_TIMEOUT_MS` | Review timeout in ms | `300000` (5 min) |
| `LOG_LEVEL` | Logging level | `info` |
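The `RATE_LIMIT_REQUESTS` and `RATE_LIMIT_WINDOW_MS` settings above describe a requests-per-window limit. One simple way to implement such a limit is a fixed-window counter; the sketch below is illustrative and is not the project's actual `utils/rateLimiter.ts` API:

```typescript
// Minimal fixed-window rate limiter sketch (names are illustrative).
interface WindowState {
  count: number;
  windowStart: number;
}

export class FixedWindowRateLimiter {
  private windows = new Map<string, WindowState>();

  constructor(
    private maxRequests = 10, // RATE_LIMIT_REQUESTS
    private windowMs = 60_000, // RATE_LIMIT_WINDOW_MS
  ) {}

  // Returns true if the request is allowed for this client in the current window.
  allow(clientId: string, now = Date.now()): boolean {
    const state = this.windows.get(clientId);
    if (!state || now - state.windowStart >= this.windowMs) {
      this.windows.set(clientId, { count: 1, windowStart: now });
      return true;
    }
    if (state.count < this.maxRequests) {
      state.count += 1;
      return true;
    }
    return false;
  }
}
```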
The included Dockerfile uses multi-stage builds for optimized production images:
- Builder stage: Installs dependencies and builds TypeScript
- Production stage: Creates minimal runtime image
The server provides built-in health checks and metrics:
- Health endpoint (`/health`): Returns plain text "OK" for healthy status, detailed JSON for issues
- Metrics endpoint (`/metrics`): Provides comprehensive performance and system data
- Structured logging with configurable levels (error, warn, info, debug), sketched below
- Request monitoring and error tracking with retry mechanisms
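As a rough illustration of level-filtered structured logging (not the project's actual `utils/logger.ts`), a minimal sketch:

```typescript
// Illustrative structured logger honoring a LOG_LEVEL-style threshold.
const levels = ["error", "warn", "info", "debug"] as const;
type Level = (typeof levels)[number];

const threshold: Level = levels.includes(process.env.LOG_LEVEL as Level)
  ? (process.env.LOG_LEVEL as Level)
  : "info";

export function log(level: Level, message: string, meta: Record<string, unknown> = {}): void {
  // Skip messages below the configured level (e.g. debug when threshold is info).
  if (levels.indexOf(level) > levels.indexOf(threshold)) return;
  console.log(JSON.stringify({ level, message, time: new Date().toISOString(), ...meta }));
}
```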
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests for new functionality
- Ensure all tests pass
- Submit a pull request
ISC License
This server is designed to work with the CodeRabbit VSCode extension for seamless code review integration.
Ready to revolutionize your code reviews with AI!