A VS Code extension that generates intelligent commit messages using Ollama AI, directly integrated into your source control workflow.
- 🤖 AI-Powered: Generate conventional commit messages using local Ollama models
- ⚡ Seamless Integration: Adds buttons directly to VS Code's Source Control tab
- 🎯 Smart Two-Stage Analysis: Processes files individually, then combines summaries for better context
- 📁 Individual File Processing: Analyzes each changed file separately for comprehensive understanding
- 🔧 Auto-Installation Detection: Detects Ollama installation and offers guided setup
- 📦 Model Management: Automatically pull and manage Ollama models
- ⚙️ Highly Configurable: Customize models, context windows, and generation parameters
- 🔒 Privacy-First: Uses local Ollama instance - no data sent to external services
- 📝 Conventional Commits: Follows conventional commit format (feat, fix, docs, etc.)
- Ollama (automatically detected and guided installation if missing)
- VS Code 1.74.0 or higher
- Git repository
- Ensure you have `vsce` installed globally: `npm install -g @vscode/vsce`
- Navigate to the extension's root directory in your terminal.
- Package the extension: `vsce package`. This generates a `.vsix` file (e.g., `commitify-0.0.1.vsix`).
- In VS Code, go to the Extensions view (`Ctrl+Shift+X` or `Cmd+Shift+X`).
- Click the `...` (More Actions) menu at the top right of the Extensions sidebar.
- Select "Install from VSIX..." and choose the `.vsix` file you just created.
- Reload VS Code when prompted.
- Clone this repository
- Run `npm install`
- Run `npm run compile`
- Press `F5` to open a new Extension Development Host window
- Open Commitify Settings: Click the ⚙️ button in the Source Control tab
- Install Ollama (if needed): Follow the download links provided in settings
- Pull Recommended Model: Click "Pull Recommended Model" to get qwen3:4b
- Stage your changes: Use `git add` or VS Code's Source Control view
- Generate commit message: Click the ✨ button in the Source Control tab
- Review and commit: The generated message will appear in the commit input box
- File Analysis: Each changed file is analyzed individually
- Summary Generation: AI creates concise summaries for each file's changes
- Message Synthesis: Final commit message is generated from all file summaries
- Result: More accurate and contextual commit messages
- Alternatively, all changes can be analyzed as one combined diff (enable this in settings)
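The two-stage flow above can be sketched roughly as follows. The type and function names here are illustrative, not the extension's actual internals, and the model call is stubbed out to show only the shape of the pipeline:

```typescript
interface FileChange {
  path: string;
  diff: string;
}

// Stage 1: summarize each file's diff independently.
// In the real extension this would be one local Ollama call per file.
function summarizeFile(change: FileChange, model: (prompt: string) => string): string {
  return `${change.path}: ${model(`Summarize this diff:\n${change.diff}`)}`;
}

// Stage 2: synthesize one conventional commit message from all summaries.
function synthesizeCommit(summaries: string[], model: (prompt: string) => string): string {
  return model(`Write a conventional commit message for:\n${summaries.join("\n")}`);
}

function generateCommitMessage(changes: FileChange[], model: (prompt: string) => string): string {
  const summaries = changes.map((c) => summarizeFile(c, model));
  return synthesizeCommit(summaries, model);
}
```

Because each file is summarized before synthesis, large multi-file diffs stay within the model's context window while the final message still reflects every change.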
Access via the ⚙️ Settings button in the Source Control tab:
- Installation Detection: Automatic detection with download links
- Service Management: Start/stop Ollama service
- Connection Testing: Verify Ollama connectivity
- Recommended Model: qwen3:4b (optimized for code analysis)
- Model Pulling: Automatic model download with progress tracking
- Available Models: View and select from installed models
- Subject Line Length: Maximum characters (default: 72)
- Include Body Text: Toggle detailed explanations
- Individual Processing: Enable/disable file-by-file analysis
- Context Window: Token limit (default: 8K)
- Temperature: Creativity level (0.0-1.0, default: 0.7)
- Custom Prompts: Override default prompt templates
| Model | Size | Best For | Speed |
|---|---|---|---|
| qwen3:4b ⭐ | ~2.3GB | Code analysis (recommended) | Fast |
| qwen3:1.7b | ~1.0GB | Code analysis (faster alternative) | Very Fast |
| deepseek-r1:8b | ~4.7GB | Code-specific tasks | Medium |
| deepseek-r1:7b | ~4.1GB | Code-specific tasks | Medium |
| deepseek-r1:1.5b | ~0.9GB | Code-specific tasks (lightweight) | Very Fast |
| gemma3:4b | ~2.5GB | General purpose | Fast |
| gemma3:1b | ~0.6GB | General purpose (lightweight) | Very Fast |
| mistral | ~4.1GB | Efficient alternative | Medium |
| Setting | Default | Description |
|---|---|---|
| `commitify.ollamaUrl` | `http://localhost:11434` | Ollama server URL |
| `commitify.model` | `qwen3:4b` | Model to use for generation |
| `commitify.maxLength` | `72` | Maximum subject line length |
| `commitify.includeBody` | `true` | Include detailed body text |
| `commitify.processFilesIndividually` | `true` | Process files separately |
| `commitify.contextWindow` | `8192` | Token context window size |
| `commitify.temperature` | `0.7` | Generation creativity level |
| `commitify.commitPromptTemplate` | `""` | Custom prompt template for commit message generation. Use `{summaries}`, `{maxLength}`, `{includeBody}` as placeholders. |
| `commitify.summaryPromptTemplate` | `""` | Custom prompt template for file change summarization. Use (unknown), `{diff}` as placeholders. |
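For example, a workspace's `.vscode/settings.json` could override the defaults like this (the values shown are illustrative, not recommendations):

```json
{
  "commitify.model": "qwen3:1.7b",
  "commitify.maxLength": 50,
  "commitify.includeBody": false,
  "commitify.temperature": 0.3,
  "commitify.commitPromptTemplate": "Write a conventional commit under {maxLength} characters for:\n{summaries}"
}
```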
Changed Files:
- `src/auth.ts` (modified): Added JWT validation
- `src/login.tsx` (modified): Updated login form
- `README.md` (modified): Added installation instructions
Generated Commit:

```
feat(auth): implement JWT validation and update login UI

- Add JWT token validation with expiration checking in auth service
- Update login form component with better error handling
- Add comprehensive installation instructions to README
```
Bug Fix:

```
fix(api): resolve user authentication timeout

- Increase token refresh interval from 5min to 15min
- Add retry logic for failed authentication requests
- Update error messages for better user guidance
```

New Feature:

```
feat(ui): add dark mode support

- Implement theme switcher component with toggle animation
- Add dark/light theme CSS variables
- Persist theme preference in localStorage
```

Documentation:

```
docs: update API reference and examples

- Add new endpoint documentation for v2.0
- Include request/response examples
- Fix typos in configuration section
```
"Ollama Not Found"
- Use the settings panel to download Ollama for your OS
- Restart VS Code after installation
- Click "Check Ollama Status" to verify
"Failed to Pull Model"
- Ensure Ollama is running (`ollama serve`)
- Check internet connection
- Try pulling manually: `ollama pull qwen3:4b`
"No Staged Changes"
- Stage files using `git add` or VS Code Source Control
- Ensure you're in a git repository
- Check that files have actual changes
"Connection Failed"
- Verify the Ollama URL in settings (default: `http://localhost:11434`)
- Check if the Ollama service is running
- Test connection using the settings panel
"Poor Quality Messages"
- Try the recommended qwen3:4b model
- Enable individual file processing
- Adjust temperature for more/less creativity
- Check that changes are meaningful and well-structured
- Use qwen3:4b: Optimized balance of speed and quality
- Enable File Processing: Individual analysis provides better context
- Limit Staged Changes: Process related changes together
- Adjust Context Window: Increase for complex changes, decrease for speed
```bash
git clone <repository-url>
cd commitify
npm install
npm run compile
```

```
src/
├── extension.ts    # Main extension logic and command handlers
├── ollama.ts       # Ollama service with installation detection
├── git.ts          # Git operations and file processing
└── settings.ts     # Settings webview provider
media/
└── settings.html   # Settings UI (separate from TypeScript)
```
- Two-stage processing: File summaries → final message
- Installation detection: Automatic Ollama setup guidance
- Model management: Pull and list models programmatically
- Progress tracking: Real-time feedback during generation
- Error handling: Comprehensive error messages and recovery
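Under the hood, generation most likely amounts to a POST against Ollama's documented `/api/generate` endpoint. The sketch below builds such a request body from the defaults in the settings table; the helper name and exact shape are an assumption, not the extension's verified source:

```typescript
// Hypothetical helper: builds a body for Ollama's /api/generate endpoint
// using this extension's default settings.
interface GenerateRequest {
  model: string;
  prompt: string;
  stream: boolean;                  // false = return a single JSON response
  options: { temperature: number; num_ctx: number };
}

function buildGenerateRequest(prompt: string): GenerateRequest {
  return {
    model: "qwen3:4b",              // commitify.model default
    prompt,
    stream: false,
    options: {
      temperature: 0.7,             // commitify.temperature default
      num_ctx: 8192,                // commitify.contextWindow default
    },
  };
}

// Usage (requires a running Ollama instance):
// await fetch("http://localhost:11434/api/generate", {
//   method: "POST",
//   body: JSON.stringify(buildGenerateRequest("Summarize this diff: ...")),
// });
```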
- 100% Local: All processing happens on your machine
- No Telemetry: Zero data collection or tracking
- Open Source: Full source code available for inspection
- Offline Capable: Works without internet (after model download)
- No External APIs: Never sends code to external services
MIT License - see LICENSE file for details.
- 🐛 Issues: Report bugs and request features
- 💬 Discussions: Community support and ideas
- 📖 Wiki: Detailed documentation and guides
- Initial release with qwen3:4b model support
- Two-stage commit message generation
- Automatic Ollama installation detection
- Model management with automatic pulling
- Individual file processing for better analysis
- Comprehensive settings UI with real-time testing
- 8K token context window support
- Source Control integration with progress indicators
Made with ❤️ for developers who value meaningful commit messages