# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/), and this project adheres to [Semantic Versioning](https://semver.org/).
## [1.0.0]

First stable release of LLaMeSIMD, a benchmarking suite for evaluating SIMD intrinsic and function translation with LLMs.

### Added
- Multi-architecture support: SSE4.2, NEON, VSX
- Dual test modes:
  - 1-to-1 intrinsic translation
  - Full function translation
- Evaluation pipeline supporting:
  - Levenshtein similarity
  - AST similarity
  - Token overlap analysis
- Weighted scoring system combining the three similarity metrics (50/30/20)
- Support for local, open-weight, and proprietary LLM backends (e.g., Ollama, OpenAI, Claude, DeepSeek)
- Visualization tools for results:
  - Interactive plots
  - CSV reports
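To make the two test modes concrete, the sketch below shows what a 1-to-1 intrinsic-translation case might look like. The field names and case format are illustrative assumptions, not the suite's actual schema; only the SSE-to-NEON intrinsic mapping itself is factual.

```python
# Hypothetical shape of a 1-to-1 intrinsic-translation test case.
# Field names are illustrative; LLaMeSIMD's real format may differ.
intrinsic_case = {
    "source_arch": "SSE4.2",
    "target_arch": "NEON",
    "source": "_mm_add_epi32(a, b)",  # SSE: add packed 32-bit integers
    "expected": "vaddq_s32(a, b)",    # NEON equivalent
}
```

A full-function case would carry a whole C function body in `source` instead of a single intrinsic call.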
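The evaluation metrics above can be sketched as follows. This is a minimal illustration, not the suite's implementation: the 50/30/20 weight order (Levenshtein, then AST, then token overlap) and the use of Jaccard overlap for tokens are assumptions, and AST similarity is taken as a precomputed input here.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def levenshtein_similarity(a: str, b: str) -> float:
    """Normalize edit distance into a [0, 1] similarity."""
    return 1.0 - levenshtein(a, b) / max(len(a), len(b), 1)

def token_overlap(a: str, b: str) -> float:
    """Jaccard overlap over whitespace tokens (a simple stand-in)."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(len(ta | tb), 1)

def weighted_score(lev: float, ast: float, tok: float) -> float:
    """Assumed 50/30/20 weighting of the three metrics."""
    return 0.5 * lev + 0.3 * ast + 0.2 * tok
```

With all three metrics at 1.0 the weighted score is 1.0, so the weights behave as percentages of a single aggregate score.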
### Known Limitations

- Generated results require manual cleanup before evaluation
- Architecture support is limited to SSE4.2, NEON, and VSX