
Anni Logo

# Anni



Anni is a high-performance code assistant built on the Qwen3 14B architecture.
Fine-tuned on OpenCodeReasoning-2, it is engineered to excel in deep algorithmic reasoning, complex data structure implementation, and competitive programming.

View Demo · Quick Start · Benchmarks · Training


## ✨ Key Features

| Feature | Description |
| --- | --- |
| 🧠 Deep Reasoning | Optimized for hard logic puzzles and algorithmic challenges. |
| High Efficiency | Supports vLLM serving and GGUF for consumer hardware. |
| 📚 Large Context | 32k context window for processing extensive codebases. |
| 🛠️ Dev Ready | Comes with full training scripts, merging tools, and a web UI. |

## 🎥 Demo

demo.mp4

*Anni solving a hard-difficulty LeetCode problem in real time (1× speed on a single L40 GPU)*


## 🚀 Quick Start

Try Anni immediately, with no local setup, on Google Colab.

| Method | Link | Description |
| --- | --- | --- |
| GGUF (Recommended) | Open In Colab | Run standard inference on free-tier GPUs. |
| vLLM Serving | Open In Colab | High-throughput serving with vLLM. |
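For local serving instead of Colab, the repository's `scripts/serve.sh` spins up vLLM, which exposes an OpenAI-compatible HTTP API. A minimal sketch of querying it — the port (8000, vLLM's default) and the served model name `"Anni"` are assumptions, not documented values:

```shell
# 1) In one terminal, start the server:
#    ./scripts/serve.sh
# 2) Then query the OpenAI-compatible completions endpoint:
curl -s http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Anni", "prompt": "def binary_search(arr, target):", "max_tokens": 256}' \
  || true   # succeeds only while the server is running
```

The same endpoint works with any OpenAI-compatible client library by pointing its base URL at `http://localhost:8000/v1`.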

## 📊 Benchmarks

Anni was evaluated on LiveCodeBench (LCB), where it outperforms its base model on code-generation and reasoning tasks.

*LiveCodeBench results*

## 🛠️ Development Setup

To fine-tune or run Anni locally, follow these steps.

### 1. Prerequisites

Ensure tmux is installed, then install the Python dependencies:

```shell
pip install -r requirements.txt
```

### 2. Configuration

Set the environment variables for WandB, Hugging Face, and ModelScope:

```shell
mv config/example.env config/.env
# Open config/.env and paste your API keys
```

Edit `config/config.yaml` to adjust hyperparameters.

> **Note:** Specify the `LOCAL_STORAGE_PATH` in `src/train.py` before starting.
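As an illustration only — the real keys live in `config/config.yaml` and may differ — a hypothetical hyperparameter block for a LoRA fine-tune of a Qwen3 14B base might look like:

```yaml
# Hypothetical keys; check config/config.yaml for the actual schema.
base_model: Qwen/Qwen3-14B   # assumed hub id for the Qwen3 14B base
learning_rate: 2.0e-5
lora_rank: 16
max_seq_length: 32768        # matches the 32k context window above
```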

### 3. Training

Launch the training pipeline:

```shell
./scripts/train.sh
```

## 📂 Project Structure

```
Anni/
├── config/                 # Configuration files
│
├── scripts/                # Shell scripts for automation
│   ├── train.sh            # Start training pipeline
│   ├── eval.sh             # Run LiveCodeBench evaluation
│   ├── serve.sh            # Spin up vLLM server
│   └── terminate_train.sh  # Kill training processes
│
├── src/                    # Python source code
│   ├── preprocess.py       # Downloads & preps OpenCodeReasoning-2
│   ├── train.py            # Main fine-tuning logic
│   ├── save.py             # Merges LoRA adapters (BF16 & GGUF)
│   ├── inference.py        # Run inference with the fine-tuned model
│   ├── upload.py           # Pushes to HF/ModelScope
│   └── utils/              # Utility functions
│
└── web/                    # Frontend Interface
```

👉 View Frontend Documentation
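`src/inference.py` is the repository's inference entry point; its interface is not documented here, so as a generic sketch, running the merged BF16 weights with Hugging Face `transformers` might look like the following. The hub id `"CoderUni/Anni"` is a hypothetical placeholder for the released checkpoint:

```python
# Sketch only: a generic `transformers` generation helper, written independently
# of the repo's src/inference.py. The hub id "CoderUni/Anni" is hypothetical.

def generate(prompt: str, model_id: str = "CoderUni/Anni") -> str:
    # Lazy imports keep the sketch readable/testable without a GPU environment.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    messages = [{"role": "user", "content": prompt}]
    inputs = tok.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=512)
    # Decode only the newly generated tokens, not the prompt.
    return tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)
```

A 14B model in BF16 needs roughly 28 GB of weights alone, so `device_map="auto"` lets `accelerate` shard it across available devices; the GGUF quantizations are the better fit for consumer hardware.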

## ⚖️ License & Disclaimer

### License

- **Model Weights & Training Code:** released under the MIT License.
- **Trademarks:** the project name (Anni), assets, and frontend code are trademarks of the owner (Hans) and may not be used without explicit permission.

### Dataset Attribution

This model was fine-tuned on the OpenCodeReasoning-2 dataset.

> **Disclaimer:** This model may generate incorrect or unsafe code. Evaluate and verify outputs before using them in production environments.
