A web-based management interface for LiteLLM Proxy that lets you create, manage, and run multiple litellm proxy configurations through an intuitive UI.
- Config Management: Create, edit, and manage litellm proxy configuration files
- YAML Validation: Real-time validation of configuration syntax
- Multi-Instance Support: Run multiple proxy instances simultaneously on different ports
- Process Management: Start, stop, and monitor proxy instances
- Real-time Logs: View and stream logs from running proxies
- Responsive UI: Modern, clean interface built with Next.js and Tailwind CSS
- FastAPI: High-performance Python web framework
- SQLite: Lightweight database for storing configurations
- SQLAlchemy: Async ORM for database operations
- Pydantic: Data validation and settings management
- psutil: Process and system monitoring
- Next.js 15: React framework with App Router
- TypeScript: Type-safe development
- Tailwind CSS: Utility-first CSS framework
- shadcn/ui: Beautiful, accessible UI components
- TanStack Query: Data fetching and caching
- Monaco Editor: Code editor for YAML editing
```
lite-proxy-studio/
├── backend/                  # FastAPI backend
│   ├── api/                  # API endpoints
│   │   ├── configs.py        # Config CRUD operations
│   │   ├── proxies.py        # Proxy management
│   │   └── logs.py           # Log retrieval
│   ├── database/             # Database setup
│   │   ├── connection.py     # SQLAlchemy setup
│   │   └── models.py         # Database models
│   ├── services/             # Business logic
│   │   ├── config_service.py
│   │   └── proxy_service.py
│   ├── schemas/              # Pydantic schemas
│   │   ├── config.py
│   │   ├── proxy.py
│   │   └── log.py
│   ├── utils/                # Utilities
│   │   ├── yaml_handler.py
│   │   └── process_manager.py
│   ├── config.py             # App configuration
│   └── main.py               # FastAPI app entry
├── frontend/                 # Next.js frontend
│   ├── src/
│   │   ├── app/              # App routes
│   │   ├── components/       # React components
│   │   ├── lib/              # Utilities and API client
│   │   └── hooks/            # Custom React hooks
│   └── components.json       # shadcn/ui config
├── data/                     # Application data
│   ├── configs/              # Stored YAML configs
│   ├── logs/                 # Proxy logs
│   └── database.db           # SQLite database
└── README.md
```
- Python 3.11+
- Node.js 18+
- pnpm (for frontend package management)
- uv (for Python package management)
- litellm (install via `pip install litellm`)
Clone the repository:

```bash
git clone <repository-url>
cd lite-proxy-studio
```

Set up the backend:

```bash
cd backend

# Install dependencies (uv will create a virtual environment automatically)
uv sync

# Or manually activate and install
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
```

Set up the frontend:

```bash
cd frontend

# Install dependencies
pnpm install
```

Install litellm:

```bash
pip install litellm

# Or with uv
uv pip install litellm
```

Start the backend:

```bash
cd backend

# Using uv
uv run python main.py

# Or with activated virtualenv
python main.py

# Or with uvicorn directly
uvicorn main:app --host 0.0.0.0 --port 8080 --reload
```

The backend API will be available at:
- API: http://localhost:8080
- API Docs: http://localhost:8080/docs
- Health Check: http://localhost:8080/health
```bash
cd frontend

# Development mode
pnpm dev

# Or
pnpm run dev
```

The frontend will be available at http://localhost:3000.
Create a `.env` file in the backend directory (optional):

```env
# Database
DATABASE_URL=sqlite+aiosqlite:///./data/database.db

# CORS
CORS_ORIGINS=http://localhost:3000,http://localhost:3001
```

Create a `.env.local` file in the frontend directory:

```env
NEXT_PUBLIC_API_URL=http://localhost:8080/api
```

- `GET /api/configs` - List all configurations
- `POST /api/configs` - Create a new configuration
- `GET /api/configs/{id}` - Get a specific configuration
- `PUT /api/configs/{id}` - Update a configuration
- `DELETE /api/configs/{id}` - Delete a configuration
- `POST /api/configs/validate` - Validate YAML configuration
- `GET /api/proxies` - List all proxy instances
- `POST /api/proxies/start` - Start a new proxy instance
- `POST /api/proxies/{id}/stop` - Stop a proxy instance
- `GET /api/proxies/{id}` - Get proxy instance details
- `GET /api/proxies/{id}/status` - Get detailed proxy status
- `GET /api/logs/{instance_id}` - Get logs for a proxy instance
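The config endpoints above can be exercised from Python's standard library. This is a minimal sketch: the base URL assumes the default backend port from the setup above, and the request body shape for `/configs/validate` is an assumption — check the generated OpenAPI docs at `/docs` for the exact schema.

```python
import json
import urllib.request

# Default backend address; adjust if you changed the port.
BASE_URL = "http://localhost:8080/api"

def build_validate_request(yaml_text: str) -> urllib.request.Request:
    """Build a POST request for POST /api/configs/validate.

    The body shape ({"yaml_content": ...}) is an assumption -- consult
    the backend's /docs for the real schema.
    """
    payload = json.dumps({"yaml_content": yaml_text}).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/configs/validate",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def list_configs() -> list:
    """Fetch all stored configurations (requires the backend to be running)."""
    with urllib.request.urlopen(f"{BASE_URL}/configs") as resp:
        return json.loads(resp.read())

# Example: build a validation request for a minimal config.
# Send it with urllib.request.urlopen(req) once the backend is up.
req = build_validate_request("model_list: []\n")
```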
- Navigate to the Configs page
- Click "New Config"
- Enter a name and description
- Write or paste your litellm YAML configuration
- The editor will validate syntax in real-time
- Click "Save" to create the configuration
Example minimal config:
```yaml
model_list:
  - model_name: gpt-4
    litellm_params:
      model: gpt-4
      api_key: your-openai-key

litellm_settings:
  drop_params: true
```

- Select a configuration from the list
- Choose an available port (8000-9999)
- Click "Start Proxy"
- Monitor the status and logs in real-time
- View all running proxies on the Proxies page
- Check status, uptime, and resource usage
- View real-time logs
- Stop proxies when no longer needed
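For scripting the monitoring steps above, a small polling helper can wrap the status endpoint. The payload fields used here (a `"status"` field with value `"running"`) are assumptions — consult `/docs` on the running backend for the real schema.

```python
import json
import time
import urllib.request

BASE_URL = "http://localhost:8080/api"  # default backend address

def status_url(instance_id: str) -> str:
    """URL for GET /api/proxies/{id}/status."""
    return f"{BASE_URL}/proxies/{instance_id}/status"

def wait_until_running(instance_id: str,
                       timeout: float = 30.0,
                       interval: float = 2.0) -> dict:
    """Poll the status endpoint until the proxy reports a running state.

    The response shape is an assumption -- check the backend's /docs.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with urllib.request.urlopen(status_url(instance_id)) as resp:
            status = json.loads(resp.read())
        if status.get("status") == "running":
            return status
        time.sleep(interval)
    raise TimeoutError(f"proxy {instance_id} not running after {timeout}s")
```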
```bash
cd backend

# Run with auto-reload
uvicorn main:app --reload --host 0.0.0.0 --port 8080

# Run tests (if added)
pytest
```

```bash
cd frontend

# Development server with hot reload
pnpm dev

# Type checking
pnpm run type-check

# Linting
pnpm run lint

# Build for production
pnpm run build
```

The application validates litellm configurations against these common sections:
- `model_list`: Array of model configurations
- `litellm_settings`: LiteLLM-specific settings
- `general_settings`: General proxy settings
- `router_settings`: Routing and load balancing
- `environment_variables`: Environment configuration
Unknown sections will generate warnings but won't prevent saving.
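The warn-but-save behavior can be sketched in a few lines of Python. `KNOWN_SECTIONS` mirrors the list above; the function name is illustrative, not the app's actual internals:

```python
# Known top-level sections of a litellm proxy config (from the list above).
KNOWN_SECTIONS = {
    "model_list",
    "litellm_settings",
    "general_settings",
    "router_settings",
    "environment_variables",
}

def check_sections(config: dict) -> list[str]:
    """Return warning messages for unrecognized top-level sections.

    Unknown sections only produce warnings -- the config is still
    saveable, matching the behavior described above.
    """
    return [
        f"Unknown section: {section!r}"
        for section in config
        if section not in KNOWN_SECTIONS
    ]

# Example: one known section, one typo.
warnings = check_sections({"model_list": [], "litellm_setings": {}})
```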
If you get a "port already in use" error:
- Check running proxies in the Proxies page
- Stop any conflicting proxy instances
- Or choose a different port
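To check whether a port is free before starting a proxy, a generic socket probe works (this is a standalone utility, not part of the app):

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if nothing is bound to host:port (probe by binding)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            sock.bind((host, port))
            return True
        except OSError:
            return False

# Example: find the first free port in the app's allowed range (8000-9999).
free_port = next(p for p in range(8000, 10000) if port_is_free(p))
```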
Make sure litellm is installed and available in your PATH:
```bash
which litellm
# or
litellm --version
```

The SQLite database is created automatically at `data/database.db`. If you encounter issues:

```bash
rm data/database.db
# Restart the backend to recreate the database
```

- WebSocket support for real-time log streaming
- Config import/export functionality
- Proxy health monitoring and alerts
- Rate limiting and quota management
- User authentication and authorization
- Docker containerization
- Proxy usage analytics and metrics
- Config templates and examples
- Batch operations on multiple proxies
MIT
Contributions are welcome! Please feel free to submit a Pull Request.
For issues and questions, please open an issue on GitHub.