A modular, Dockerized template repository for building custom AI API wrappers. Easily integrate and switch between providers like OpenAI (ChatGPT), Anthropic (Claude), and more. Configure with your API keys, define custom prompt logic, and deploy as a REST API for various use cases.
- Modular Provider System: Switch between OpenAI, Anthropic, and more with a simple config change
- Docker Support: Containerized setup for easy deployment and consistency
- Secure Configuration: Support for environment variables and config files
- Dual Mode: Run as a REST API or CLI script
- Customizable: Easy-to-modify wrapper functions for your specific use case
- Skeleton Template: Clean starting point for your AI integration projects
- Docker and Docker Compose (that's all you need!)
- Clone and enter the repository:

  ```bash
  git clone https://github.com/mjospovich/ai-wrapper-skeleton.git
  cd ai-wrapper-skeleton
  ```

- Create your config file:

  ```bash
  cp config.yaml.example config.yaml
  ```

  Then edit `config.yaml` and add your API key:

  ```yaml
  provider: openai
  api_keys:
    openai: your-actual-api-key-here
  model: gpt-4o-mini
  ```
- Start the API server:

  ```bash
  docker-compose up -d --build  # or: docker compose up -d --build
  ```

That's it! The API will be running at http://localhost:8000.

Test it:

```bash
curl -X POST http://localhost:8000/generate \
  -H "Content-Type: application/json" \
  -d '{"prompt": "What is the capital of France?"}'
```

Once it's working, customize it for your use case:
- Edit `wrapper.py` - this is the main file to customize:
  - `process_input()` - transform your input data into an AI prompt
  - `process_output()` - parse and format the AI response

- Restart Docker - changes to `wrapper.py` are automatically mounted:

  ```bash
  docker-compose restart
  ```
See the Customization Example section below for detailed examples.
If you prefer environment variables for API keys (more secure), you still need to create `config.yaml` for provider/model settings, but you can skip adding the API key:
```bash
cp config.yaml.example config.yaml
# Edit config.yaml but leave api_keys as placeholders
# Then set environment variable:
export OPENAI_API_KEY="your-key-here"
docker-compose up
```

The environment variable will override the API key in the config file.
If you have Python 3.11+ installed locally:
```bash
pip install -r requirements.txt
python main.py
```

With Docker:

```bash
docker-compose up
```

Without Docker:

```bash
python main.py --mode api
```

The API will be available at http://localhost:8000.
Endpoints:
- `GET /`: Health check and info
- `POST /generate`: Generate AI response
Example API call:
```bash
curl -X POST http://localhost:8000/generate \
  -H "Content-Type: application/json" \
  -d '{"prompt": "What is the capital of France?"}'
```

Response:

```json
{
  "output": "The capital of France is Paris."
}
```

With Docker:

```bash
docker-compose run --rm ai-wrapper python main.py --mode cli --input inputs/example.json
```

Without Docker:

```bash
python main.py --mode cli --input inputs/example.json
```

Or use the default input file from config:

```bash
python main.py --mode cli
```

Outputs will be saved to the outputs/ directory.
Edit config.yaml to customize:
```yaml
provider: openai      # Options: openai, anthropic
model: gpt-4o-mini    # Model name for the selected provider

api_keys:
  openai: YOUR_OPENAI_API_KEY
  anthropic: YOUR_ANTHROPIC_API_KEY

wrapper:
  input_file: inputs/example.json
  output_format: json

api:
  port: 8000
```

- OpenAI: models like `gpt-4o-mini`, `gpt-4`, `gpt-3.5-turbo`
- Anthropic: models like `claude-3-5-sonnet-20241022`, `claude-3-opus-20240229`
To add a new provider, create a new file in `providers/` following the `BaseAIClient` interface.
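The base interface itself isn't shown in this README, but it presumably looks something like the following sketch (the constructor signature and `generate_response()` are inferred from the provider-adding steps; the exact contents of `providers/base.py` may differ):

```python
from abc import ABC, abstractmethod

# Rough sketch of providers/base.py (assumed, not the actual file): every
# provider client takes an API key and model name, and must implement
# generate_response().
class BaseAIClient(ABC):
    def __init__(self, api_key: str, model: str):
        self.api_key = api_key
        self.model = model

    @abstractmethod
    def generate_response(self, prompt: str) -> str:
        """Send the prompt to the provider and return the raw text reply."""

# Toy subclass showing the contract; real clients call a provider API.
class EchoClient(BaseAIClient):
    def generate_response(self, prompt: str) -> str:
        return f"echo: {prompt}"
```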
Here's how to customize wrapper.py for a calorie estimation use case:
```python
def process_input(input_data: dict) -> str:
    food_items = ', '.join(input_data.get('food_items', []))
    prompt = (
        f"Analyze these food items: {food_items}. "
        f"Estimate total approximate calories for the day "
        f"and return only the number (e.g., 1500)."
    )
    return prompt


def process_output(raw_output: str) -> dict:
    try:
        calories = int(raw_output.strip())
        return {"total_calories": calories}
    except ValueError:
        return {"error": "Invalid output from AI"}
```

Then use it with:

```json
{
  "food_items": ["apple", "burger", "salad"]
}
```

```
ai-wrapper-skeleton/
├── Dockerfile             # Docker container definition
├── docker-compose.yml     # Docker Compose configuration
├── requirements.txt       # Python dependencies
├── config.yaml.example    # Example configuration (copy to config.yaml)
├── config.yaml            # Your configuration (gitignored)
├── main.py                # Entry point (API server or CLI)
├── wrapper.py             # Customizable wrapper logic
├── providers/             # AI provider implementations
│   ├── __init__.py
│   ├── base.py            # Base class interface
│   ├── openai.py          # OpenAI implementation
│   └── anthropic.py       # Anthropic implementation
├── inputs/                # Input files for CLI mode
│   └── example.json       # Example input
└── outputs/               # Output files (generated)
```
- Never commit `config.yaml` with real API keys (it's gitignored)
- Prefer environment variables for API keys in production
- Use Docker secrets or environment variables in containerized deployments
- Review and customize `.gitignore` as needed
- Create a new file in `providers/` (e.g., `providers/gemini.py`)
- Implement the `BaseAIClient` interface:

  ```python
  from .base import BaseAIClient

  class GeminiClient(BaseAIClient):
      def __init__(self, api_key: str, model: str):
          # Initialize your client
          pass

      def generate_response(self, prompt: str) -> str:
          # Implement API call
          return response
  ```

- Add it to `providers/__init__.py`
- Add it to the `provider_map` in `main.py`
- Update `config.yaml.example` with the new provider option
With Docker:
```bash
# Start the API
docker-compose up -d

# Test the API
curl -X POST http://localhost:8000/generate \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Hello, world!"}'

# Test CLI mode
docker-compose run --rm ai-wrapper python main.py --mode cli

# Stop the API
docker-compose down
```

Without Docker:

```bash
# Test API mode
python main.py --mode api

# In another terminal
curl -X POST http://localhost:8000/generate -H "Content-Type: application/json" -d '{"prompt": "Hello, world!"}'

# Test CLI mode
python main.py --mode cli --input inputs/example.json
```

MIT License - see LICENSE file for details.
This is a skeleton template. Fork it, customize it, and make it your own!