diff --git a/.gitignore b/.gitignore index 2e933d553..883221c2d 100644 --- a/.gitignore +++ b/.gitignore @@ -73,6 +73,7 @@ GTAGS cpp/external/httplib/cpp-httplib/ cpp/external/nlohmann_json/nlohmann_json +cpp/external/onnxruntime-win-x64-openvino gtp.cfg katago_contribute/ tmpsgf/ diff --git a/Compiling.md b/Compiling.md index 74932a1f4..36f7057b1 100644 --- a/Compiling.md +++ b/Compiling.md @@ -34,6 +34,7 @@ As also mentioned in the instructions below but repeated here for visibility, if * If using the CUDA backend, CUDA 11 or later and a compatible version of CUDNN based on your CUDA version (https://developer.nvidia.com/cuda-toolkit) (https://developer.nvidia.com/cudnn) and a GPU capable of supporting them. * If using the TensorRT backend, in addition to a compatible CUDA Toolkit (https://developer.nvidia.com/cuda-toolkit), you also need TensorRT (https://developer.nvidia.com/tensorrt) that is at least version 8.5. * If using the Eigen backend, Eigen3. With Debian packages, (i.e. apt or apt-get), this should be `libeigen3-dev`. + * If using the ONNX backend, ONNX Runtime headers/libs and ONNX protobuf dependencies (`onnx/onnx_pb.h`, `onnx_proto`, `protobuf-lite`) for `.bin.gz` model conversion support. * zlib, libzip. With Debian packages (i.e. apt or apt-get), these should be `zlib1g-dev`, `libzip-dev`. * If you want to do self-play training and research, probably Google perftools `libgoogle-perftools-dev` for TCMalloc or some other better malloc implementation. For unknown reasons, the allocation pattern in self-play with large numbers of threads and parallel games causes a lot of memory fragmentation under glibc malloc that will eventually run your machine out of memory, but better mallocs handle it fine. * If compiling to contribute to public distributed training runs, OpenSSL is required (`libssl-dev`). @@ -41,7 +42,7 @@ As also mentioned in the instructions below but repeated here for visibility, if * `git clone https://github.com/lightvector/KataGo.git` * Compile using CMake and make in the cpp directory: * `cd KataGo/cpp` - * `cmake . -DUSE_BACKEND=OPENCL` or `cmake . -DUSE_BACKEND=CUDA` or `cmake . -DUSE_BACKEND=TENSORRT` or `cmake . -DUSE_BACKEND=EIGEN` depending on which backend you want. + * `cmake . -DUSE_BACKEND=OPENCL` or `cmake . -DUSE_BACKEND=CUDA` or `cmake . -DUSE_BACKEND=TENSORRT` or `cmake . -DUSE_BACKEND=EIGEN` or `cmake . -DUSE_BACKEND=ONNX` depending on which backend you want. * Specify also `-DUSE_TCMALLOC=1` if using TCMalloc. * Compiling will also call git commands to embed the git hash into the compiled executable, specify also `-DNO_GIT_REVISION=1` to disable it if this is causing issues for you. * Specify `-DUSE_AVX2=1` to also compile Eigen with AVX2 and FMA support, which will make it incompatible with old CPUs but much faster. (If you want to go further, you can also add `-DCMAKE_CXX_FLAGS='-march=native'` which will specialize to precisely your machine's CPU, but the exe might not run on other machines at all). @@ -54,6 +55,46 @@ As also mentioned in the instructions below but repeated here for visibility, if * You will probably want to edit `configs/gtp_example.cfg` (see "Tuning for Performance" above). * If using OpenCL, you will want to verify that KataGo is picking up the correct device when you run it (e.g. some systems may have both an Intel CPU OpenCL and GPU OpenCL, if KataGo appears to pick the wrong one, you can correct this by specifying `openclGpuToUse` in `configs/gtp_example.cfg`). 
+##### ONNX Runtime Backend (Linux)
+The ONNX backend uses ONNX Runtime for inference, and supports both:
+* `.onnx` models loaded directly.
+* `.bin.gz` KataGo models via internal conversion to an ONNX graph (requires ONNX protobuf dependencies in CMake).
+
+##### Linux Intel NPU (OpenVINO EP) Setup
+1. Install the Intel NPU driver on Linux:
+ * https://github.com/intel/linux-npu-driver
+2. Install OpenVINO via the system package manager (APT example):
+ * https://docs.openvino.ai/2025/get-started/install-openvino/install-openvino-apt.html
+3. Build ONNX Runtime with the OpenVINO EP for NPU (same ORT flow as Windows):
+ * https://onnxruntime.ai/docs/build/eps.html#openvino
+ * Set the OpenVINO EP build option so `use_openvino` is `NPU` (for example `--use_openvino NPU` in ORT build.py).
+ * For instance, `./build.sh --config Release --use_openvino NPU --build_shared_lib --skip_tests --parallel --cmake_extra_defines CMAKE_INSTALL_PREFIX=/path/to/KataGo/cpp/external/onnxruntime-linux-x64-openvino`.
+
+##### Prepare `ONNXRUNTIME_ROOT` in KataGo (Linux)
+Install the ONNX Runtime package into:
+* `cpp/external/onnxruntime-linux-x64-openvino`, by running `cmake --install build/Linux/Release --config Release` from the ONNX Runtime source directory.
+
+##### Minimal KataGo Build Commands (Linux, ONNX backend)
+On Linux, `KATAGO_AUTO_FETCH_DEPS=ON` can auto-fetch missing `zlib`, `onnx`, and `protobuf` dependencies via vcpkg into `cpp/build/deps/vcpkg`.
+
+```bash
+cmake -S cpp -B cpp/build -G Ninja -DUSE_BACKEND=ONNX -DONNXRUNTIME_ROOT=cpp/external/onnxruntime-linux-x64-openvino
+cmake --build cpp/build -j
+```
+
+If you want to disable auto-fetch and provide dependencies manually:
+* `-DKATAGO_AUTO_FETCH_DEPS=OFF`
+* plus `-DONNX_INCLUDE_DIR=... -DONNX_PROTO_LIB=... -DPROTOBUF_INCLUDE_DIR=... -DPROTOBUF_LIB=... -DZLIB_INCLUDE_DIR=... -DZLIB_LIBRARY=...`
+
+Typical run config for Intel NPU:
+* `onnxProvider = openvino`
+* `onnxOpenVINODeviceType = NPU`
+* `onnxOpenVINOEnableNPUFastCompile = true` (optional; may be ignored on ORT builds that do not support this key)
+
+Multi-device assignment is mainly for `onnxProvider=cuda/tensorrt` (`onnxDeviceToUseThread*`).
+For `onnxProvider=openvino` on Intel NPU, a single device is typically used.
+ + ## Windows * TLDR: * Building from source on Windows is actually a bit tricky, depending on what version you're building, there's not necessarily a super-fast way. * Requirements @@ -64,13 +105,8 @@ As also mentioned in the instructions below but repeated here for visibility, if * If using the CUDA backend, CUDA 11 or later and a compatible version of CUDNN based on your CUDA version (https://developer.nvidia.com/cuda-toolkit) (https://developer.nvidia.com/cudnn) and a GPU capable of supporting them. I'm unsure how version compatibility works with CUDA, there's a good chance that later versions than these work just as well, but they have not been tested. * If using the TensorRT backend, in addition to a compatible CUDA Toolkit (https://developer.nvidia.com/cuda-toolkit), you also need TensorRT (https://developer.nvidia.com/tensorrt) that is at least version 8.5. * If using the Eigen backend, Eigen3, version 3.3.x. (http://eigen.tuxfamily.org/index.php?title=Main_Page#Download).
- * zlib. Easy way to build zlib on Windows is to use vcpkg. Run in Powershell:
- * git clone https://github.com/microsoft/vcpkg.git
- * cd .\vcpkg\
- * .\bootstrap-vcpkg.bat
- * .\vcpkg.exe install zlib:x64-windows
- * Set CMake ZLIB_LIBRARY to vcpkg\installed\x64-windows\lib\zlib.lib and ZLIB_INCLUDE_DIRECTORY to vcpkg\installed\x64-windows\include.
- * Copy zlib1.dll from vcpkg\installed\x64-windows\bin to Katago folder after you've built Katago executable. + * If using the ONNX backend, ONNX Runtime package (headers + import libs + runtime DLLs). + * On Windows, missing `zlib` and ONNX model-conversion dependencies (`onnx`, `protobuf`) can be auto-fetched by CMake into `cpp/build/deps/vcpkg` (default `KATAGO_AUTO_FETCH_DEPS=ON`). * libzip (optional, needed only for self-play training) - for example https://github.com/kiyolee/libzip-win-build * For MinGW it's recommended to use [MSYS2](https://www.msys2.org/) building platform to get necessary zlib and libzip dependencies: * Install MSYS2 according to the instruction on the official site @@ -97,7 +133,7 @@ As also mentioned in the instructions below but repeated here for visibility, if -DLIBZIP_INCLUDE_DIR_ZIPCONF:PATH="C:/msys64/mingw64/include" -DLIBZIP_LIBRARY:FILEPATH="C:/msys64/mingw64/lib/libzip.dll.a" ``` - * Also set `USE_BACKEND` to `OPENCL`, or `CUDA`, or `TENSORRT`, or `EIGEN` depending on what backend you want to use. + * Also set `USE_BACKEND` to `OPENCL`, or `CUDA`, or `TENSORRT`, or `EIGEN`, or `ONNX` depending on what backend you want to use. * Set any other options you want and re-run "Configure" again as needed after setting them. Such as: * `NO_GIT_REVISION` if you don't have Git or if cmake is not finding it. * `NO_LIBZIP` if you don't care about running self-play training and you don't have libzip. @@ -117,6 +153,52 @@ As also mentioned in the instructions below but repeated here for visibility, if * You will probably want to edit `configs/gtp_example.cfg` (see "Tuning for Performance" above). * If using OpenCL, you will want to verify that KataGo is picking up the correct device (e.g. some systems may have both an Intel CPU OpenCL and GPU OpenCL, if KataGo appears to pick the wrong one, you can correct this by specifying `openclGpuToUse` in `configs/gtp_example.cfg`). +##### ONNX Runtime Backend +The ONNX backend uses ONNX Runtime for inference, and supports both: +* `.onnx` models loaded directly. +* `.bin.gz` KataGo models via internal conversion to ONNX graph (requires ONNX protobuf dependencies in CMake). + +##### Windows Intel NPU (OpenVINO EP) Setup +1. Install Visual Studio 2026 Community or Visual Studio 2026 Build Tools: + * https://visualstudio.microsoft.com/zh-hans/downloads/ + * In installer workloads, select **Desktop development with C++**. +2. Install Intel NPU driver: + * https://www.intel.com/content/www/us/en/download/794734/intel-npu-driver-windows.html +3. Install OpenVINO 2026 archive package on Windows: + * https://docs.openvino.ai/2026/get-started/install-openvino/install-openvino-archive-windows.html + * Typical install root looks like: `C:\Program Files (x86)\Intel\openvino` +4. Add these to System PATH: + * `C:\Program Files (x86)\Intel\openvino\runtime\bin\intel64\Release` + * `C:\Program Files (x86)\Intel\openvino\runtime\3rdparty\tbb\bin` +5. Build ONNX Runtime with OpenVINO EP for NPU (follow official docs): + * https://onnxruntime.ai/docs/build/eps.html#openvino + * Set OpenVINO EP build option so `use_openvino` is `NPU` (for example `--use_openvino NPU` in ORT build.py). 
+ * For example, `.\build.bat --config Release --use_openvino NPU --build_shared_lib --skip_tests --parallel --cmake_extra_defines CMAKE_INSTALL_PREFIX=\cpp\external\onnxruntime-win-x64-openvino` + +##### Prepare `ONNXRUNTIME_ROOT` in KataGo (Windows) +Install package root: +* `cpp/external/onnxruntime-win-x64-openvino` by running `cmake --install build\Windows\Release --config Release` + +##### Minimal KataGo Build Commands (Windows, ONNX backend) +On Windows, `KATAGO_AUTO_FETCH_DEPS=ON` by default, so missing `zlib`, `onnx`, and `protobuf` dependencies are auto-fetched via vcpkg into `cpp/build/deps/vcpkg`. + +``` +cmake -S cpp -B cpp/build -G "Visual Studio 18 2026" -A x64 -DUSE_BACKEND=ONNX -DONNXRUNTIME_ROOT=cpp/external/onnxruntime-win-x64-openvino +cmake --build cpp/build --config Release -j +``` + +If you want to disable auto-fetch and provide dependencies manually: +* `-DKATAGO_AUTO_FETCH_DEPS=OFF` +* plus `-DONNX_INCLUDE_DIR=... -DONNX_PROTO_LIB=... -DPROTOBUF_INCLUDE_DIR=... -DPROTOBUF_LIB=... -DZLIB_INCLUDE_DIR=... -DZLIB_LIBRARY=...` + +Typical run config for Intel NPU: +* `onnxProvider = openvino` +* `onnxOpenVINODeviceType = NPU` +* `onnxOpenVINOEnableNPUFastCompile = true` (optional; may be ignored on ORT builds that do not support this key) + +Multi-device assignment is mainly for `onnxProvider=cuda/tensorrt` (`onnxDeviceToUseThread*`). +For `onnxProvider=openvino` on Intel NPU, a single device is typically used. + ## MacOS * TLDR: ``` diff --git a/README.md b/README.md index ce7e87b97..23448b659 100644 --- a/README.md +++ b/README.md @@ -1,27 +1,32 @@ # KataGo -* [Overview](#overview) -* [Training History and Research](#training-history-and-research) -* [Where To Download Stuff](#where-to-download-stuff) -* [Setting Up and Running KataGo](#setting-up-and-running-katago) - * [GUIs](#guis) - * [Windows and Linux](#windows-and-linux) - * [MacOS](#macos) - * [OpenCL vs CUDA vs TensorRT vs Eigen](#opencl-vs-cuda-vs-tensorrt-vs-eigen) - * [How To Use](#how-to-use) - * [Tuning for Performance](#tuning-for-performance) - * [Common Questions and Issues](#common-questions-and-issues) - * [Issues with specific GPUs or GPU drivers](#issues-with-specific-gpus-or-gpu-drivers) - * [Common Problems](#common-problems) - * [Other Questions](#other-questions) -* [Features for Developers](#features-for-developers) - * [GTP Extensions](#gtp-extensions) - * [Analysis Engine](#analysis-engine) -* [Compiling KataGo](#compiling-katago) -* [Source Code Overview](#source-code-overview) -* [Selfplay Training](#selfplay-training) -* [Contributors](#contributors) -* [License](#license) +- [KataGo](#katago) + - [Overview](#overview) + - [Training History and Research and Docs](#training-history-and-research-and-docs) + - [Where To Download Stuff](#where-to-download-stuff) + - [Setting Up and Running KataGo](#setting-up-and-running-katago) + - [GUIs](#guis) + - [Windows and Linux](#windows-and-linux) + - [MacOS](#macos) + - [OpenCL vs CUDA vs TensorRT vs Eigen vs ONNX](#opencl-vs-cuda-vs-tensorrt-vs-eigen-vs-onnx) + - [How To Use](#how-to-use) + - [ONNX/OpenVINO Intel NPU Quick Start (Windows)](#onnxopenvino-intel-npu-quick-start-windows) + - [ONNX/OpenVINO Intel NPU Quick Start (Linux)](#onnxopenvino-intel-npu-quick-start-linux) + - [Human-style Play and Analysis](#human-style-play-and-analysis) + - [Other Commands:](#other-commands) + - [Tuning for Performance](#tuning-for-performance) + - [Common Questions and Issues](#common-questions-and-issues) + - [Issues with specific GPUs or GPU 
drivers](#issues-with-specific-gpus-or-gpu-drivers) + - [Common Problems](#common-problems) + - [Other Questions](#other-questions) + - [Features for Developers](#features-for-developers) + - [GTP Extensions:](#gtp-extensions) + - [Analysis Engine:](#analysis-engine) + - [Compiling KataGo](#compiling-katago) + - [Source Code Overview:](#source-code-overview) + - [Selfplay Training:](#selfplay-training) + - [Contributors](#contributors) + - [License](#license) ## Overview @@ -84,8 +89,8 @@ The community also provides KataGo packages for [Homebrew](https://brew.sh) on M Use `brew install katago`. The latest config files and networks are installed in KataGo's `share` directory. Find them via `brew list --verbose katago`. A basic way to run katago will be `katago gtp -config $(brew list --verbose katago | grep 'gtp.*\.cfg') -model $(brew list --verbose katago | grep .gz | head -1)`. You should choose the Network according to the release notes here and customize the provided example config as with every other way of installing KataGo. -### OpenCL vs CUDA vs TensorRT vs Eigen -KataGo has four backends, OpenCL (GPU), CUDA (GPU), TensorRT (GPU), and Eigen (CPU). +### OpenCL vs CUDA vs TensorRT vs Eigen vs ONNX +KataGo has five backends, OpenCL (GPU), CUDA (GPU), TensorRT (GPU), Eigen (CPU), and ONNX (CPU/GPU/NPU via providers). The quick summary is: * **To easily get something working, try OpenCL if you have any good or decent GPU.** @@ -93,12 +98,14 @@ The quick summary is: * Use Eigen with AVX2 if you don't have a GPU or if your GPU is too old/weak to work with OpenCL, and you just want a plain CPU KataGo. * Use Eigen without AVX2 if your CPU is old or on a low-end device that doesn't support AVX2. * The CUDA backend can work for NVIDIA GPUs with CUDA+CUDNN installed but is likely worse than TensorRT. + * ONNX backend uses ONNX Runtime execution providers (CPU/OpenVINO/CUDA/TensorRT/MIGraphX/CoreML). It is useful for Intel NPU (OpenVINO) and raw `.onnx` models. More in detail: * OpenCL is a general GPU backend should be able to run with any GPUs or accelerators that support [OpenCL](https://en.wikipedia.org/wiki/OpenCL), including NVIDIA GPUs, AMD GPUs, as well CPU-based OpenCL implementations or things like Intel Integrated Graphics. This is the most general GPU version of KataGo and doesn't require a complicated install like CUDA does, so is most likely to work out of the box as long as you have a fairly modern GPU. **However, it also need to take some time when run for the very first time to tune itself.** For many systems, this will take 5-30 seconds, but on a few older/slower systems, may take many minutes or longer. Also, the quality of OpenCL implementations is sometimes inconsistent, particularly for Intel Integrated Graphics and for AMD GPUs that are older than several years, so it might not work for very old machines, as well as specific buggy newer AMD GPUs, see also [Issues with specific GPUs or GPU drivers](#issues-with-specific-gpus-or-gpu-drivers). * CUDA is a GPU backend specific to NVIDIA GPUs (it will not work with AMD or Intel or any other GPUs) and requires installing [CUDA](https://developer.nvidia.com/cuda-zone) and [CUDNN](https://developer.nvidia.com/cudnn) and a modern NVIDIA GPU. On most GPUs, the OpenCL implementation will actually beat NVIDIA's own CUDA/CUDNN at performance. The exception is for top-end NVIDIA GPUs that support FP16 and tensor cores, in which case sometimes one is better and sometimes the other is better. 
* TensorRT is similar to CUDA, but only uses NVIDIA's TensorRT framework to run the neural network with more optimized kernels. For modern NVIDIA GPUs, it should work whenever CUDA does and will usually be faster than CUDA or any other backend. * Eigen is a *CPU* backend that should work widely *without* needing a GPU or fancy drivers. Use this if you don't have a good GPU or really any GPU at all. It will be quite significantly slower than OpenCL or CUDA, but on a good CPU can still often get 10 to 20 playouts per second if using the smaller (15 or 20) block neural nets. Eigen can also be compiled with AVX2 and FMA support, which can provide a big performance boost for Intel and AMD CPUs from the last few years. However, it will not run at all on older CPUs (and possibly even some recent but low-power modern CPUs) that don't support these fancy vector instructions. + * ONNX backend uses [ONNX Runtime](https://onnxruntime.ai/). It can use CPU by default, OpenVINO for Intel hardware (including NPU on supported systems), CUDA/TensorRT for NVIDIA GPUs, MIGraphX for AMD GPUs, and CoreML on macOS. Multi-device assignment via `onnxDeviceToUseThread*` is mainly for CUDA/TensorRT providers, while OpenVINO NPU setups are typically single-device. For **any** implementation, it's recommended that you also tune the number of threads used if you care about optimal performance, as it can make a factor of 2-3 difference in the speed. See "Tuning for Performance" below. However, if you mostly just want to get it working, then the default untuned settings should also be still reasonable. @@ -132,6 +139,50 @@ path/to/katago.exe gtp -model path/to/.bin.gz path/to/katago.exe gtp -model path/to/.bin.gz -config path/to/gtp_custom.cfg ``` +#### ONNX/OpenVINO Intel NPU Quick Start (Windows) + +If you want to use ONNX Runtime + OpenVINO on Intel NPU: +* Install Intel NPU driver: https://www.intel.com/content/www/us/en/download/794734/intel-npu-driver-windows.html +* Install OpenVINO archive package (Windows): https://docs.openvino.ai/2026/get-started/install-openvino/install-openvino-archive-windows.html +* Typical install root looks like: `C:\Program Files (x86)\Intel\openvino_2026.0` +* Add `C:\Program Files (x86)\Intel\openvino_2026.0\runtime\bin\intel64\Release` and `C:\Program Files (x86)\Intel\openvino_2026.0\runtime\3rdparty\tbb\bin` to System PATH + +Minimal commands: +``` +# 1) Export .bin/.bin.gz to ONNX (default export size is 19x19) +./katago.exe exportonnx -model .bin.gz -output .onnx + +# 2) Benchmark on Intel NPU (OpenVINO provider) +./katago.exe benchmark -config cpp/configs/gtp_example.cfg -model .onnx + +# 3) Run GTP for GUI tools (Sabaki/Lizzie/q5Go/etc) +./katago.exe gtp -config cpp/configs/gtp_example.cfg -model .onnx + +If you don't prepare config file, then use -override-config args, like: +./katago.exe gtp -config cpp/configs/gtp_example.cfg -model .onnx -override-config onnxProvider=openvino,onnxOpenVINODeviceType=NPU +``` + +#### ONNX/OpenVINO Intel NPU Quick Start (Linux) + +If you want to use ONNX Runtime + OpenVINO on Intel NPU: +* Install Intel NPU driver (Linux): https://github.com/intel/linux-npu-driver +* Install OpenVINO via system package manager (APT example): https://docs.openvino.ai/2025/get-started/install-openvino/install-openvino-apt.html + +Minimal commands: +```bash +# 1) Export .bin/.bin.gz to ONNX (default export size is 19x19) +./katago exportonnx -model .bin.gz -output .onnx + +# 2) Benchmark on Intel NPU (OpenVINO provider) +./katago benchmark -config 
cpp/configs/gtp_example.cfg -model .onnx + +# 3) Run GTP for GUI tools (Sabaki/Lizzie/q5Go/etc) +./katago gtp -config cpp/configs/gtp_example.cfg -model .onnx + +# If you don't prepare config file, use -override-config: +./katago gtp -config cpp/configs/gtp_example.cfg -model .onnx -override-config onnxProvider=openvino,onnxOpenVINODeviceType=NPU +``` + #### Human-style Play and Analysis You can also have KataGo imitate human play if you download the human SL model b18c384nbt-humanv0.bin.gz from https://github.com/lightvector/KataGo/releases/tag/v1.15.0, and run a command like the following, providing both the normal model and the human SL model: diff --git a/cpp/CMakeLists.txt b/cpp/CMakeLists.txt index 8db79ca73..cd8dc0114 100644 --- a/cpp/CMakeLists.txt +++ b/cpp/CMakeLists.txt @@ -32,7 +32,7 @@ endif() set(BUILD_DISTRIBUTED 0 CACHE BOOL "Build with http support for contributing to distributed training") set(USE_BACKEND CACHE STRING "Neural net backend") string(TOUPPER "${USE_BACKEND}" USE_BACKEND) -set_property(CACHE USE_BACKEND PROPERTY STRINGS "" CUDA TENSORRT OPENCL EIGEN) +set_property(CACHE USE_BACKEND PROPERTY STRINGS "" CUDA TENSORRT OPENCL EIGEN ONNX) set(USE_TCMALLOC 0 CACHE BOOL "Use TCMalloc") set(NO_GIT_REVISION 0 CACHE BOOL "Disable embedding the git revision into the compiled exe") @@ -42,6 +42,107 @@ set(USE_BIGGER_BOARDS_EXPENSIVE 0 CACHE BOOL "Allow boards up to size 50. Compil set(USE_CACHE_TENSORRT_PLAN 0 CACHE BOOL "Use TENSORRT plan cache. May use a lot of disk space. Only applies when USE_BACKEND is TENSORRT.") mark_as_advanced(USE_CACHE_TENSORRT_PLAN) +if(WIN32 OR (UNIX AND NOT APPLE)) + set(_katago_auto_fetch_default ON) +else() + set(_katago_auto_fetch_default OFF) +endif() +option(KATAGO_AUTO_FETCH_DEPS "Automatically fetch missing dependencies into build/deps (Windows/Linux use vcpkg)." 
${_katago_auto_fetch_default}) +set(KATAGO_DEPS_DIR "${CMAKE_SOURCE_DIR}/build/deps" CACHE PATH "Directory for auto-fetched third-party dependencies") +if(WIN32) + set(_katago_vcpkg_triplet_default "x64-windows") +elseif(UNIX AND NOT APPLE) + if(CMAKE_SYSTEM_PROCESSOR MATCHES "^(aarch64|arm64)$") + set(_katago_vcpkg_triplet_default "arm64-linux") + else() + set(_katago_vcpkg_triplet_default "x64-linux") + endif() +else() + set(_katago_vcpkg_triplet_default "x64-windows") +endif() +set(KATAGO_VCPKG_TRIPLET "${_katago_vcpkg_triplet_default}" CACHE STRING "vcpkg triplet used by KATAGO_AUTO_FETCH_DEPS") +set(KATAGO_VCPKG_ROOT "${KATAGO_DEPS_DIR}/vcpkg" CACHE PATH "Path to local vcpkg clone used by KATAGO_AUTO_FETCH_DEPS") +mark_as_advanced(KATAGO_VCPKG_TRIPLET KATAGO_VCPKG_ROOT) + +function(katago_vcpkg_bootstrap_if_needed) + if(NOT WIN32 AND NOT (UNIX AND NOT APPLE)) + message(FATAL_ERROR "katago_vcpkg_bootstrap_if_needed is only supported on Windows and Linux") + endif() + + if(NOT KATAGO_AUTO_FETCH_DEPS) + message(FATAL_ERROR "KATAGO_AUTO_FETCH_DEPS is OFF, cannot auto-fetch missing dependency") + endif() + + file(MAKE_DIRECTORY "${KATAGO_DEPS_DIR}") + + if(WIN32) + set(_katago_vcpkg_exe "${KATAGO_VCPKG_ROOT}/vcpkg.exe") + else() + set(_katago_vcpkg_exe "${KATAGO_VCPKG_ROOT}/vcpkg") + endif() + + if(NOT EXISTS "${_katago_vcpkg_exe}") + if(NOT EXISTS "${KATAGO_VCPKG_ROOT}/.git") + find_package(Git QUIET) + if(NOT GIT_FOUND) + message(FATAL_ERROR "KATAGO_AUTO_FETCH_DEPS requires git to clone vcpkg") + endif() + message(STATUS "Auto-fetch deps: cloning vcpkg into ${KATAGO_VCPKG_ROOT}") + execute_process( + COMMAND "${GIT_EXECUTABLE}" clone --depth=1 https://github.com/microsoft/vcpkg.git "${KATAGO_VCPKG_ROOT}" + RESULT_VARIABLE _clone_result + OUTPUT_VARIABLE _clone_out + ERROR_VARIABLE _clone_err + ) + if(NOT _clone_result EQUAL 0) + message(FATAL_ERROR "Failed to clone vcpkg.\n${_clone_out}\n${_clone_err}") + endif() + endif() + + message(STATUS "Auto-fetch deps: bootstrapping vcpkg") + if(WIN32) + execute_process( + COMMAND "${KATAGO_VCPKG_ROOT}/bootstrap-vcpkg.bat" -disableMetrics + WORKING_DIRECTORY "${KATAGO_VCPKG_ROOT}" + RESULT_VARIABLE _bootstrap_result + ) + else() + execute_process( + COMMAND sh "${KATAGO_VCPKG_ROOT}/bootstrap-vcpkg.sh" -disableMetrics + WORKING_DIRECTORY "${KATAGO_VCPKG_ROOT}" + RESULT_VARIABLE _bootstrap_result + ) + endif() + if(NOT _bootstrap_result EQUAL 0) + message(FATAL_ERROR "Failed to bootstrap vcpkg") + endif() + endif() +endfunction() + +function(katago_vcpkg_install_if_needed package_name) + katago_vcpkg_bootstrap_if_needed() + + if(WIN32) + set(_katago_vcpkg_exe "${KATAGO_VCPKG_ROOT}/vcpkg.exe") + else() + set(_katago_vcpkg_exe "${KATAGO_VCPKG_ROOT}/vcpkg") + endif() + if(NOT EXISTS "${_katago_vcpkg_exe}") + message(FATAL_ERROR "vcpkg executable not found after bootstrap: ${_katago_vcpkg_exe}") + endif() + + set(_spec "${package_name}:${KATAGO_VCPKG_TRIPLET}") + message(STATUS "Auto-fetch deps: ensuring ${_spec} via vcpkg") + execute_process( + COMMAND "${_katago_vcpkg_exe}" install "${_spec}" --disable-metrics + WORKING_DIRECTORY "${KATAGO_VCPKG_ROOT}" + RESULT_VARIABLE _install_result + ) + if(NOT _install_result EQUAL 0) + message(FATAL_ERROR "Failed to install ${_spec} via vcpkg") + endif() +endfunction() + #--------------------------- NEURAL NET BACKEND ------------------------------------------------------------------------ message(STATUS "Building 'katago' executable for GTP engine and other tools.") @@ -145,8 +246,14 @@ elseif(USE_BACKEND 
STREQUAL "EIGEN") set(NEURALNET_BACKEND_SOURCES neuralnet/eigenbackend.cpp ) +elseif(USE_BACKEND STREQUAL "ONNX") + message(STATUS "-DUSE_BACKEND=ONNX, using ONNX Runtime backend.") + set(NEURALNET_BACKEND_SOURCES + neuralnet/onnxbackend.cpp + neuralnet/onnxmodelbuilder.cpp + ) elseif(USE_BACKEND STREQUAL "") - message(WARNING "${ColorBoldRed}WARNING: Using dummy neural net backend, intended for non-neural-net testing only, will fail on any code path requiring a neural net. To use neural net, specify -DUSE_BACKEND=CUDA or -DUSE_BACKEND=TENSORRT or -DUSE_BACKEND=OPENCL or -DUSE_BACKEND=EIGEN to compile with the respective backend.${ColorReset}") + message(WARNING "${ColorBoldRed}WARNING: Using dummy neural net backend, intended for non-neural-net testing only, will fail on any code path requiring a neural net. To use neural net, specify -DUSE_BACKEND=CUDA or -DUSE_BACKEND=TENSORRT or -DUSE_BACKEND=OPENCL or -DUSE_BACKEND=EIGEN or -DUSE_BACKEND=ONNX to compile with the respective backend.${ColorReset}") set(NEURALNET_BACKEND_SOURCES neuralnet/dummybackend.cpp) else() message(FATAL_ERROR "Unrecognized backend: " ${USE_BACKEND}) @@ -428,6 +535,150 @@ elseif(USE_BACKEND STREQUAL "OPENCL") link_directories(${OpenCL_LIBRARY}) target_link_libraries(katago ${OpenCL_LIBRARY}) endif() +elseif(USE_BACKEND STREQUAL "ONNX") + target_compile_definitions(katago PRIVATE USE_ONNX_BACKEND) + + if(WIN32) + set(_onnx_default_root "${CMAKE_CURRENT_SOURCE_DIR}/external/onnxruntime-win-x64-openvino") + elseif(UNIX AND NOT APPLE) + set(_onnx_default_root "${CMAKE_CURRENT_SOURCE_DIR}/external/onnxruntime-linux-x64-openvino") + else() + set(_onnx_default_root "") + endif() + set(ONNXRUNTIME_ROOT "${_onnx_default_root}" CACHE PATH "Path to ONNX Runtime package root") + + if(NOT IS_DIRECTORY "${ONNXRUNTIME_ROOT}") + message(FATAL_ERROR "ONNXRUNTIME_ROOT does not exist: ${ONNXRUNTIME_ROOT}") + endif() + + set(ONNXRUNTIME_INCLUDE_DIR "${ONNXRUNTIME_ROOT}/include/onnxruntime") + if(NOT IS_DIRECTORY "${ONNXRUNTIME_INCLUDE_DIR}") + message(FATAL_ERROR "ONNX Runtime include directory not found: ${ONNXRUNTIME_INCLUDE_DIR}") + endif() + target_include_directories(katago SYSTEM PRIVATE "${ONNXRUNTIME_INCLUDE_DIR}") + + if(WIN32) + set(ONNXRUNTIME_LIB "${ONNXRUNTIME_ROOT}/lib/onnxruntime.lib") + file(GLOB ONNXRUNTIME_DLLS "${ONNXRUNTIME_ROOT}/lib/*.dll" "${ONNXRUNTIME_ROOT}/bin/*.dll") + else() + find_library(ONNXRUNTIME_LIB onnxruntime HINTS "${ONNXRUNTIME_ROOT}/lib" "${ONNXRUNTIME_ROOT}/bin" "${ONNXRUNTIME_ROOT}") + endif() + if(NOT EXISTS "${ONNXRUNTIME_LIB}" AND NOT ONNXRUNTIME_LIB) + message(FATAL_ERROR "Could not find onnxruntime library under ${ONNXRUNTIME_ROOT}") + endif() + target_link_libraries(katago ${ONNXRUNTIME_LIB}) + + # Required by onnxmodelbuilder.cpp for building ONNX graphs from .bin.gz. + # These are intentionally configurable because package layouts vary. 
+ set(ONNX_INCLUDE_DIR "" CACHE PATH "Directory containing onnx/onnx_pb.h (required for .bin.gz -> ONNX conversion)") + set(ONNX_PROTO_LIB "" CACHE FILEPATH "Path to onnx_proto library (required for .bin.gz -> ONNX conversion)") + set(PROTOBUF_INCLUDE_DIR "" CACHE PATH "Directory containing google/protobuf/message.h (required for .bin.gz -> ONNX conversion)") + set(PROTOBUF_LIB "" CACHE FILEPATH "Path to protobuf library (protobuf-lite or libprotobuf, required for .bin.gz -> ONNX conversion)") + set(ONNX_PROTO_TARGET "" CACHE STRING "Imported CMake target for onnx_proto (optional, preferred over ONNX_PROTO_LIB)") + set(PROTOBUF_TARGET "" CACHE STRING "Imported CMake target for protobuf (optional, preferred over PROTOBUF_LIB)") + mark_as_advanced(CLEAR ONNX_INCLUDE_DIR ONNX_PROTO_LIB PROTOBUF_INCLUDE_DIR PROTOBUF_LIB) + + # Backward compatibility with older cache variable name. + if(NOT PROTOBUF_LIB AND PROTOBUF_LITE_LIB) + set(PROTOBUF_LIB "${PROTOBUF_LITE_LIB}") + endif() + + if(KATAGO_AUTO_FETCH_DEPS) + set(_katago_vcpkg_installed_root "${KATAGO_VCPKG_ROOT}/installed/${KATAGO_VCPKG_TRIPLET}") + endif() + + if(KATAGO_AUTO_FETCH_DEPS) + set(_need_onnx_proto_deps FALSE) + if(NOT ONNX_INCLUDE_DIR OR NOT ONNX_PROTO_LIB OR NOT PROTOBUF_INCLUDE_DIR OR NOT PROTOBUF_LIB) + set(_need_onnx_proto_deps TRUE) + endif() + if(_need_onnx_proto_deps) + katago_vcpkg_install_if_needed("onnx") + katago_vcpkg_install_if_needed("protobuf") + set(_katago_vcpkg_installed_root "${KATAGO_VCPKG_ROOT}/installed/${KATAGO_VCPKG_TRIPLET}") + if(EXISTS "${_katago_vcpkg_installed_root}/include/onnx/onnx_pb.h") + set(ONNX_INCLUDE_DIR "${_katago_vcpkg_installed_root}/include" CACHE PATH "Directory containing onnx/onnx_pb.h (required for .bin.gz -> ONNX conversion)" FORCE) + endif() + find_library(_katago_onnx_proto_lib NAMES onnx_proto HINTS "${_katago_vcpkg_installed_root}/lib" "${_katago_vcpkg_installed_root}/debug/lib" NO_DEFAULT_PATH) + if(_katago_onnx_proto_lib) + set(ONNX_PROTO_LIB "${_katago_onnx_proto_lib}" CACHE FILEPATH "Path to onnx_proto library (required for .bin.gz -> ONNX conversion)" FORCE) + endif() + if(EXISTS "${_katago_vcpkg_installed_root}/include/google/protobuf/message.h") + set(PROTOBUF_INCLUDE_DIR "${_katago_vcpkg_installed_root}/include" CACHE PATH "Directory containing google/protobuf/message.h (required for .bin.gz -> ONNX conversion)" FORCE) + endif() + find_library(_katago_protobuf_lib NAMES protobuf-lite libprotobuf-lite protobuf libprotobuf HINTS "${_katago_vcpkg_installed_root}/lib" "${_katago_vcpkg_installed_root}/debug/lib" NO_DEFAULT_PATH) + if(_katago_protobuf_lib) + set(PROTOBUF_LIB "${_katago_protobuf_lib}" CACHE FILEPATH "Path to protobuf library (protobuf-lite or libprotobuf, required for .bin.gz -> ONNX conversion)" FORCE) + endif() + endif() + endif() + + # Prefer config-mode packages (vcpkg provides these) so transitive dependencies + # like absl/utf8_range are linked automatically. 
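+  # For example, find_package(ONNX CONFIG) typically provides the imported target
+  # ONNX::onnx_proto, and find_package(protobuf CONFIG) provides protobuf::libprotobuf
+  # or protobuf::libprotobuf-lite; ONNX_PROTO_TARGET / PROTOBUF_TARGET may also be set
+  # to such targets explicitly (illustrative):
+  #   cmake . -DUSE_BACKEND=ONNX -DONNX_PROTO_TARGET=ONNX::onnx_proto -DPROTOBUF_TARGET=protobuf::libprotobuf-lite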
+ if(_katago_vcpkg_installed_root) + list(APPEND CMAKE_PREFIX_PATH "${_katago_vcpkg_installed_root}") + endif() + if(NOT ONNX_PROTO_TARGET) + find_package(ONNX CONFIG QUIET) + if(TARGET ONNX::onnx_proto) + set(ONNX_PROTO_TARGET "ONNX::onnx_proto") + endif() + endif() + if(NOT PROTOBUF_TARGET) + if(_katago_vcpkg_installed_root AND EXISTS "${_katago_vcpkg_installed_root}/share/protobuf/protobuf-config.cmake") + set(protobuf_DIR "${_katago_vcpkg_installed_root}/share/protobuf") + endif() + find_package(protobuf CONFIG QUIET) + if(NOT TARGET protobuf::libprotobuf AND NOT TARGET protobuf::libprotobuf-lite) + find_package(Protobuf CONFIG QUIET) + endif() + if(TARGET protobuf::libprotobuf) + set(PROTOBUF_TARGET "protobuf::libprotobuf") + elseif(TARGET protobuf::libprotobuf-lite) + set(PROTOBUF_TARGET "protobuf::libprotobuf-lite") + endif() + endif() + + if(NOT ONNX_INCLUDE_DIR) + find_path(ONNX_INCLUDE_DIR onnx/onnx_pb.h) + endif() + if(NOT ONNX_PROTO_LIB AND NOT ONNX_PROTO_TARGET) + find_library(ONNX_PROTO_LIB onnx_proto) + endif() + if(NOT PROTOBUF_INCLUDE_DIR) + find_path(PROTOBUF_INCLUDE_DIR google/protobuf/message.h) + endif() + if(NOT PROTOBUF_LIB AND NOT PROTOBUF_TARGET) + find_library(PROTOBUF_LIB libprotobuf protobuf protobuf-lite protobuf-lite32) + endif() + + if(NOT ONNX_INCLUDE_DIR OR NOT PROTOBUF_INCLUDE_DIR OR (NOT ONNX_PROTO_TARGET AND NOT ONNX_PROTO_LIB) OR (NOT PROTOBUF_TARGET AND NOT PROTOBUF_LIB)) + message(FATAL_ERROR + "ONNX backend requires ONNX protobuf dependencies for .bin.gz model conversion. " + "Set ONNX_INCLUDE_DIR (contains onnx/onnx_pb.h), ONNX_PROTO_LIB, PROTOBUF_INCLUDE_DIR, and PROTOBUF_LIB.") + endif() + target_include_directories(katago SYSTEM PRIVATE "${ONNX_INCLUDE_DIR}") + target_include_directories(katago SYSTEM PRIVATE "${PROTOBUF_INCLUDE_DIR}") + if(ONNX_PROTO_TARGET) + target_link_libraries(katago ${ONNX_PROTO_TARGET}) + else() + target_link_libraries(katago ${ONNX_PROTO_LIB}) + endif() + if(PROTOBUF_TARGET) + target_link_libraries(katago ${PROTOBUF_TARGET}) + else() + target_link_libraries(katago ${PROTOBUF_LIB}) + endif() + + if(WIN32 AND ONNXRUNTIME_DLLS) + foreach(_onnxruntime_dll IN LISTS ONNXRUNTIME_DLLS) + add_custom_command(TARGET katago POST_BUILD + COMMAND ${CMAKE_COMMAND} -E copy_if_different + "${_onnxruntime_dll}" + $ + ) + endforeach() + endif() elseif(USE_BACKEND STREQUAL "EIGEN") target_compile_definitions(katago PRIVATE USE_EIGEN_BACKEND) if(NOT (MSVC)) @@ -459,6 +710,24 @@ if(NO_GIT_REVISION AND (NOT BUILD_DISTRIBUTED)) target_compile_definitions(katago PRIVATE NO_GIT_REVISION) endif() +if(KATAGO_AUTO_FETCH_DEPS) + set(_need_zlib_deps FALSE) + if(NOT ZLIB_INCLUDE_DIR OR NOT ZLIB_LIBRARY) + set(_need_zlib_deps TRUE) + endif() + if(_need_zlib_deps) + katago_vcpkg_install_if_needed("zlib") + set(_katago_vcpkg_installed_root "${KATAGO_VCPKG_ROOT}/installed/${KATAGO_VCPKG_TRIPLET}") + if(EXISTS "${_katago_vcpkg_installed_root}/include/zlib.h") + set(ZLIB_INCLUDE_DIR "${_katago_vcpkg_installed_root}/include" CACHE PATH "Path to directory with zlib.h and other header files" FORCE) + endif() + find_library(_katago_zlib_lib NAMES zlib z HINTS "${_katago_vcpkg_installed_root}/lib" "${_katago_vcpkg_installed_root}/debug/lib" NO_DEFAULT_PATH) + if(_katago_zlib_lib) + set(ZLIB_LIBRARY "${_katago_zlib_lib}" CACHE FILEPATH "Path to 'libz.so' on Linux or 'libz.lib' on Windows" FORCE) + endif() + endif() +endif() + find_package(ZLIB) if(ZLIB_FOUND) include_directories(${ZLIB_INCLUDE_DIRS}) diff --git a/cpp/README.md b/cpp/README.md index 
1f5d8d21f..dabcef5b5 100644 --- a/cpp/README.md +++ b/cpp/README.md @@ -9,13 +9,14 @@ Summary of source folders, in approximate dependency order, from lowest level to * `board.{cpp,h}` - Raw board implementation, without move history. Helper functions for Benson's algorithm and ladder search. * `boardhistory.{cpp,h}` - Datastructure that does include move history - handles superko, passing, game end, final scoring, komi, handicap detection, etc. * `graphhash.{cpp,h}` - History-sensitive hash used for [monte-carlo graph search](https://github.com/lightvector/KataGo/blob/master/docs/GraphSearch.md). -* `neuralnet` - Neural net GPU implementation and interface. Contains OpenCL, CUDA, Eigen, TensorRT backends along with common interfaces and model data structures. +* `neuralnet` - Neural net GPU implementation and interface. Contains OpenCL, CUDA, TensorRT, Metal, Eigen, and ONNX backends along with common interfaces and model data structures. * `desc.{cpp,h}` - Data structure holding neural net structure and weights. * `modelversion.{cpp,h}` - Enumerates the various versions of neural net features and models. * `nninputs.{cpp,h}` - Implements the input features for the neural net. * `sgfmetadata.{cpp,h}` - Implements the input features for the [HumanSL neural net](https://github.com/lightvector/KataGo/blob/master/docs/Analysis_Engine.md#human-sl-analysis-guide), for conditioning on various SGF metadata about human players from training data. * `nninterface.h` - Common interface that is implemented by every low-level neural net backend. - * `{cuda,opencl,eigen,trt,dummy}backend.cpp` - Various backends. + * `{cuda,opencl,eigen,trt,metal,onnx,dummy}backend.cpp` - Various backends. + * `onnxmodelbuilder.{cpp,h}` - Builds ONNX graphs from KataGo model weights for ONNX Runtime. * `nneval.{cpp,h}` - Top-level handle to the neural net used by the rest of the engine, implements thread-safe batching of queries. * `search` - The main search engine. * `timecontrols.cpp` - Basic handling of a few possible time controls. diff --git a/cpp/command/benchmark.cpp b/cpp/command/benchmark.cpp index 3100fb1b1..daff8c30f 100644 --- a/cpp/command/benchmark.cpp +++ b/cpp/command/benchmark.cpp @@ -267,6 +267,21 @@ int MainCmds::benchmark(const vector& args) { #endif #ifdef USE_EIGEN_BACKEND cout << "You are currently using the Eigen (CPU) version of KataGo. Due to having no GPU, it may be slow." << endl; +#endif +#ifdef USE_ONNX_BACKEND + string onnxProvider = cfg.contains("onnxProvider") ? cfg.getString("onnxProvider") : "cpu"; + string onnxProviderLower = Global::toLower(onnxProvider); + cout << "You are currently using the ONNX Runtime version of KataGo." << endl; + cout << "Your GTP config is currently set to onnxProvider = " << onnxProvider << endl; + if(onnxProviderLower == "openvino") { + string deviceType = cfg.contains("onnxOpenVINODeviceType") ? cfg.getString("onnxOpenVINODeviceType") : "CPU"; + cout << "OpenVINO device type = " << deviceType << endl; + cout << "For Intel NPU, typically set onnxOpenVINODeviceType = NPU." << endl; + cout << "OpenVINO/NPU usually uses a single device; onnxDeviceToUseThread* is typically for cuda/trt/migraphx providers." << endl; + } + else if(onnxProviderLower == "cuda" || onnxProviderLower == "tensorrt" || onnxProviderLower == "migraphx") { + cout << "For ONNX Runtime multi-GPU, use numNNServerThreadsPerModel + onnxDeviceToUseThreadX." 
<< endl; + } #endif cout << endl; cout << "Your GTP config is currently set to use numSearchThreads = " << params.numThreads << endl; @@ -633,6 +648,9 @@ int MainCmds::genconfig(const vector& args, const string& firstCommand) int configNNCacheSizePowerOfTwo = 20; int configNNMutexPoolSizePowerOfTwo = 16; int configNumSearchThreads = 6; +#ifdef USE_ONNX_BACKEND + string configOnnxProvider = "openvino"; +#endif cout << endl; cout << "=========================================================================" << endl; @@ -758,30 +776,72 @@ int MainCmds::genconfig(const vector& args, const string& firstCommand) }); } +#ifdef USE_ONNX_BACKEND + { + cout << endl; + string prompt = + "Select ONNX Runtime execution provider in the generated config\n" + "(cpu, openvino, cuda, tensorrt, migraphx, coreml), default openvino:\n"; + promptAndParseInput(prompt, [&](const string& line) { + string provider = Global::toLower(Global::trim(line)); + if(provider == "") + provider = "openvino"; + if( + provider != "cpu" && + provider != "openvino" && + provider != "cuda" && + provider != "tensorrt" && + provider != "migraphx" && + provider != "coreml" + ) + throw StringError("Must be one of: cpu, openvino, cuda, tensorrt, migraphx, coreml"); + configOnnxProvider = provider; + }); + } +#endif + cout << endl; cout << "=========================================================================" << endl; cout << "GPUS AND RAM" << endl; #ifndef USE_EIGEN_BACKEND { - cout << endl; - cout << "Finding available GPU-like devices..." << endl; - NeuralNet::printDevices(); - cout << endl; + bool askForDeviceIdxs = true; +#ifdef USE_ONNX_BACKEND + bool onnxProviderSupportsThreadDeviceMap = + configOnnxProvider == "cuda" || + configOnnxProvider == "tensorrt" || + configOnnxProvider == "migraphx"; + askForDeviceIdxs = onnxProviderSupportsThreadDeviceMap; +#endif + if(askForDeviceIdxs) { + cout << endl; + cout << "Finding available GPU-like devices..." << endl; + NeuralNet::printDevices(); + cout << endl; - string prompt = - "Specify devices/GPUs to use (for example \"0,1,2\" to use devices 0, 1, and 2). Leave blank for a default SINGLE-GPU config:\n"; - promptAndParseInput(prompt, [&](const string& line) { - vector pieces = Global::split(line,','); - configDeviceIdxs.clear(); - for(size_t i = 0; i 10000) - throw StringError("Invalid device idx: " + Global::intToString(idx)); - configDeviceIdxs.push_back(idx); - } - }); + string prompt = + "Specify devices/GPUs to use (for example \"0,1,2\" to use devices 0, 1, and 2). Leave blank for a default SINGLE-GPU config:\n"; + promptAndParseInput(prompt, [&](const string& line) { + vector pieces = Global::split(line,','); + configDeviceIdxs.clear(); + for(size_t i = 0; i 10000) + throw StringError("Invalid device idx: " + Global::intToString(idx)); + configDeviceIdxs.push_back(idx); + } + }); + } +#ifdef USE_ONNX_BACKEND + else { + cout << endl; + cout << "onnxProvider = " << configOnnxProvider << " selected." << endl; + cout << "Skipping per-thread multi-device mapping (mainly used by cuda/tensorrt/migraphx providers)." 
<< endl; + configDeviceIdxs.clear(); + } +#endif } #endif @@ -858,6 +918,9 @@ int MainCmds::genconfig(const vector& args, const string& firstCommand) configNNCacheSizePowerOfTwo, configNNMutexPoolSizePowerOfTwo, configNumSearchThreads +#ifdef USE_ONNX_BACKEND + ,configOnnxProvider +#endif ); }; updateConfigContents(); diff --git a/cpp/command/misc.cpp b/cpp/command/misc.cpp index 6975946fe..0eeb69d4f 100644 --- a/cpp/command/misc.cpp +++ b/cpp/command/misc.cpp @@ -13,6 +13,10 @@ #include "../program/setup.h" #include "../program/playutils.h" #include "../program/play.h" +#include "../neuralnet/nninterface.h" +#ifdef USE_ONNX_BACKEND +#include "../neuralnet/onnxmodelbuilder.h" +#endif #include "../command/commandline.h" #include "../tests/tests.h" #include "../main.h" @@ -20,6 +24,7 @@ #include #include #include +#include using namespace std; @@ -35,6 +40,58 @@ int MainCmds::printclockinfo(const vector& args) { return 0; } +int MainCmds::exportonnx(const vector& args) { +#ifndef USE_ONNX_BACKEND + (void)args; + cerr << "exportonnx is only available in ONNX backend builds (USE_BACKEND=ONNX)." << endl; + return 1; +#else + string modelFile; + string outputFile; + int nnXLen; + int nnYLen; + try { + KataGoCommandLine cmd("Export KataGo .bin/.bin.gz model to ONNX file."); + cmd.addModelFileArg(); + TCLAP::ValueArg outputArg("o","output","Output ONNX file path",true,string(),"FILE"); + TCLAP::ValueArg xLenArg("x","xlen","Board x size baked into exported model",false,19,"N"); + TCLAP::ValueArg yLenArg("y","ylen","Board y size baked into exported model",false,19,"N"); + cmd.add(outputArg); + cmd.add(xLenArg); + cmd.add(yLenArg); + cmd.parseArgs(args); + + modelFile = cmd.getModelFile(); + outputFile = outputArg.getValue(); + nnXLen = xLenArg.getValue(); + nnYLen = yLenArg.getValue(); + } + catch(TCLAP::ArgException& e) { + cerr << "Error: " << e.error() << " for argument " << e.argId() << endl; + return 1; + } + + if(nnXLen < 2 || nnXLen > NNPos::MAX_BOARD_LEN || nnYLen < 2 || nnYLen > NNPos::MAX_BOARD_LEN) + throw StringError("Invalid board size for exportonnx"); + + const string expectedSha256 = ""; + std::unique_ptr loadedModel( + NeuralNet::loadModelFile(modelFile, expectedSha256), + NeuralNet::freeLoadedModel + ); + const ModelDesc& modelDesc = NeuralNet::getModelDesc(loadedModel.get()); + string onnxBytes = OnnxModelBuilder::buildOnnxModel(modelDesc, nnXLen, nnYLen); + + ofstream out; + FileUtils::open(out, outputFile, std::ios::binary | std::ios::out); + out.write(onnxBytes.data(), onnxBytes.size()); + out.close(); + + cout << "Exported ONNX model to " << outputFile << " (" << onnxBytes.size() << " bytes)" << endl; + return 0; +#endif +} + int MainCmds::sampleinitializations(const vector& args) { Board::initHash(); ScoreValue::initTables(); diff --git a/cpp/configs/analysis_example.cfg b/cpp/configs/analysis_example.cfg index edc5e8726..a87713482 100644 --- a/cpp/configs/analysis_example.cfg +++ b/cpp/configs/analysis_example.cfg @@ -219,9 +219,7 @@ nnRandomize = true # cudaUseNHWC = auto -# ------------------------------ -# Metal GPU settings -# ------------------------------ +# Metal GPU settings-------------------------------------- # These only apply when using the METAL version of KataGo. # For one Metal instance: KataGo will automatically use the default device. @@ -235,6 +233,31 @@ nnRandomize = true # The pattern continues for additional Metal instances. +# ROCm GPU settings-------------------------------------- +# These only apply when using the ROCm version of KataGo. 
+ +# IF USING ONE GPU: optionally uncomment and change this if the GPU you want to use turns out to be not device 0 +# rocmDeviceToUse = 0 + +# IF USING TWO GPUS: Uncomment these two lines (AND set numNNServerThreadsPerModel above): +# rocmDeviceToUseThread0 = 0 # change this if the first GPU you want to use turns out to be not device 0 +# rocmDeviceToUseThread1 = 1 # change this if the second GPU you want to use turns out to be not device 1 + +# IF USING THREE GPUS: Uncomment these three lines (AND set numNNServerThreadsPerModel above): +# rocmDeviceToUseThread0 = 0 # change this if the first GPU you want to use turns out to be not device 0 +# rocmDeviceToUseThread1 = 1 # change this if the second GPU you want to use turns out to be not device 1 +# rocmDeviceToUseThread2 = 2 # change this if the third GPU you want to use turns out to be not device 2 + +# You can probably guess the pattern if you have four, five, etc. GPUs. + +# KataGo will automatically use FP16 or not based on the compute capability of your AMD GPU. If you +# want to try to force a particular behavior though you can uncomment these lines and change them +# to "true" or "false". E.g. it's using FP16 but on your card that's giving an error, or it's not using +# FP16 but you think it should. +# rocmUseFP16 = auto +# ROCm does not support NHWC, so this is always false. + + # OpenCL-specific GPU settings-------------------------------------- # These only apply when using the OpenCL version of KataGo. @@ -269,6 +292,34 @@ nnRandomize = true # openclUseFP16 = auto +# ONNX Runtime settings-------------------------------------- +# These only apply when using the ONNX version of KataGo. + +# Execution provider: cpu (default), openvino, cuda, tensorrt, migraphx, coreml(macOS only) +# onnxProvider = openvino + +# Multi-device assignment is mainly for onnxProvider = cuda / tensorrt / migraphx: +# onnxDeviceToUse = 0 +# onnxDeviceToUseThread0 = 0 +# onnxDeviceToUseThread1 = 1 + +# OpenVINO EP options for Intel NPU (typically single device): +# onnxOpenVINODeviceType = NPU +# onnxOpenVINODeviceId = 0 +# onnxOpenVINOEnableNPUFastCompile = true # may be ignored if unsupported by your ORT/OpenVINO build +# onnxOpenVINOCacheDir = C:\\temp\\katago_ov_cache + +# Optional overrides for raw .onnx I/O tensor names and model version: +# onnxInputSpatial = input_spatial +# onnxInputGlobal = input_global +# onnxInputMeta = input_meta +# onnxOutputPolicy = out_policy +# onnxOutputValue = out_value +# onnxOutputMiscvalue = out_miscvalue +# onnxOutputOwnership = out_ownership +# onnxModelVersion = 15 + + # Eigen-specific settings-------------------------------------- # These only apply when using the Eigen (pure CPU) version of KataGo. diff --git a/cpp/configs/contribute_example.cfg b/cpp/configs/contribute_example.cfg index 6ca039f11..ecaac3057 100644 --- a/cpp/configs/contribute_example.cfg +++ b/cpp/configs/contribute_example.cfg @@ -83,9 +83,8 @@ watchOngoingGameInFileName = watchgame.txt # cudaUseNHWC = auto -# ------------------------------ -# Metal GPU settings -# ------------------------------ +# Metal GPU settings-------------------------------------- + # These only apply when using the METAL version of KataGo. # For one Metal instance: KataGo will automatically use the default device. @@ -99,6 +98,31 @@ watchOngoingGameInFileName = watchgame.txt # The pattern continues for additional Metal instances. +# ROCm GPU settings-------------------------------------- +# These only apply when using the ROCm version of KataGo. 
+ +# IF USING ONE GPU: optionally uncomment and change this if the GPU you want to use turns out to be not device 0 +# rocmDeviceToUse = 0 + +# IF USING TWO GPUS: Uncomment these two lines (AND set numNNServerThreadsPerModel above): +# rocmDeviceToUseThread0 = 0 # change this if the first GPU you want to use turns out to be not device 0 +# rocmDeviceToUseThread1 = 1 # change this if the second GPU you want to use turns out to be not device 1 + +# IF USING THREE GPUS: Uncomment these three lines (AND set numNNServerThreadsPerModel above): +# rocmDeviceToUseThread0 = 0 # change this if the first GPU you want to use turns out to be not device 0 +# rocmDeviceToUseThread1 = 1 # change this if the second GPU you want to use turns out to be not device 1 +# rocmDeviceToUseThread2 = 2 # change this if the third GPU you want to use turns out to be not device 2 + +# You can probably guess the pattern if you have four, five, etc. GPUs. + +# KataGo will automatically use FP16 or not based on the compute capability of your AMD GPU. If you +# want to try to force a particular behavior though you can uncomment these lines and change them +# to "true" or "false". E.g. it's using FP16 but on your card that's giving an error, or it's not using +# FP16 but you think it should. +# rocmUseFP16 = auto +# ROCm does not support NHWC, so this is always false. + + # OpenCL GPU settings-------------------------------------- # These only apply when using the OpenCL version of KataGo. @@ -133,6 +157,34 @@ watchOngoingGameInFileName = watchgame.txt # openclUseFP16 = auto +# ONNX Runtime settings-------------------------------------- +# These only apply when using the ONNX version of KataGo. + +# Execution provider: cpu (default), openvino, cuda, tensorrt, migraphx, coreml(macOS only) +# onnxProvider = openvino + +# Multi-device assignment (for onnxProvider = cuda / tensorrt / migraphx): +# onnxDeviceToUse = 0 +# onnxDeviceToUseThread0 = 0 +# onnxDeviceToUseThread1 = 1 + +# OpenVINO EP options for Intel NPU (typically single device): +# onnxOpenVINODeviceType = NPU +# onnxOpenVINODeviceId = 0 +# onnxOpenVINOEnableNPUFastCompile = true # may be ignored if unsupported by your ORT/OpenVINO build +# onnxOpenVINOCacheDir = C:\\temp\\katago_ov_cache + +# Optional overrides for raw .onnx I/O tensor names and model version: +# onnxInputSpatial = input_spatial +# onnxInputGlobal = input_global +# onnxInputMeta = input_meta +# onnxOutputPolicy = out_policy +# onnxOutputValue = out_value +# onnxOutputMiscvalue = out_miscvalue +# onnxOutputOwnership = out_ownership +# onnxModelVersion = 15 + + # Eigen-specific settings-------------------------------------- # These only apply when using the Eigen (pure CPU) version of KataGo. diff --git a/cpp/configs/gtp_example.cfg b/cpp/configs/gtp_example.cfg index cfa720bf3..9787e11e8 100644 --- a/cpp/configs/gtp_example.cfg +++ b/cpp/configs/gtp_example.cfg @@ -455,9 +455,9 @@ searchFactorWhenWinningThreshold = 0.95 # cudaUseFP16 = auto # cudaUseNHWC = auto -# ------------------------------ -# Metal GPU settings -# ------------------------------ + +# Metal GPU settings-------------------------------------- + # These only apply when using the METAL version of KataGo. # For one Metal instance: KataGo will automatically use the default device. @@ -470,6 +470,32 @@ searchFactorWhenWinningThreshold = 0.95 # The pattern continues for additional Metal instances. + +# ROCm GPU settings-------------------------------------- +# These only apply when using the ROCm version of KataGo. 
+ +# IF USING ONE GPU: optionally uncomment and change this if the GPU you want to use turns out to be not device 0 +# rocmDeviceToUse = 0 + +# IF USING TWO GPUS: Uncomment these two lines (AND set numNNServerThreadsPerModel above): +# rocmDeviceToUseThread0 = 0 # change this if the first GPU you want to use turns out to be not device 0 +# rocmDeviceToUseThread1 = 1 # change this if the second GPU you want to use turns out to be not device 1 + +# IF USING THREE GPUS: Uncomment these three lines (AND set numNNServerThreadsPerModel above): +# rocmDeviceToUseThread0 = 0 # change this if the first GPU you want to use turns out to be not device 0 +# rocmDeviceToUseThread1 = 1 # change this if the second GPU you want to use turns out to be not device 1 +# rocmDeviceToUseThread2 = 2 # change this if the third GPU you want to use turns out to be not device 2 + +# You can probably guess the pattern if you have four, five, etc. GPUs. + +# KataGo will automatically use FP16 or not based on the compute capability of your AMD GPU. If you +# want to try to force a particular behavior though you can uncomment these lines and change them +# to "true" or "false". E.g. it's using FP16 but on your card that's giving an error, or it's not using +# FP16 but you think it should. +# rocmUseFP16 = auto +# ROCm does not support NHWC, so this is always false. + + # ------------------------------ # OpenCL GPU settings # ------------------------------ @@ -517,6 +543,43 @@ searchFactorWhenWinningThreshold = 0.95 # Default: numSearchThreads # numEigenThreadsPerModel = X +# ------------------------------ +# ONNX backend settings +# ------------------------------ +# These only apply when using the ONNX version of KataGo. + +# Execution provider: +# cpu (default), openvino, cuda, tensorrt, migraphx, coreml(macOS only). +# onnxProvider = cpu + +# Provider-specific device selection for multi-server-thread setups. +# Primarily for onnxProvider = cuda / tensorrt / migraphx. +# onnxDeviceToUse = 0 +# onnxDeviceToUseThread0 = 0 +# onnxDeviceToUseThread1 = 1 + +# OpenVINO EP options (useful for Intel NPU on Windows): +# NPU, CPU, GPU, AUTO:NPU,CPU, MULTI:NPU.0,NPU.1, etc. +# onnxOpenVINODeviceType = NPU +# Optional explicit OpenVINO device id (usually unnecessary for single NPU setups) +# onnxOpenVINODeviceId = 0 +# Optional fast compile mode for NPU +# onnxOpenVINOEnableNPUFastCompile = true # may be ignored if unsupported by your ORT/OpenVINO build +# Optional cache directory for compiled OpenVINO blobs +# onnxOpenVINOCacheDir = C:\\temp\\katago_ov_cache + +# Override input/output tensor names for raw .onnx models: +# onnxInputSpatial = input_spatial +# onnxInputGlobal = input_global +# onnxInputMeta = input_meta +# onnxOutputPolicy = out_policy +# onnxOutputValue = out_value +# onnxOutputMiscvalue = out_miscvalue +# onnxOutputOwnership = out_ownership + +# Override auto-detected model version for raw .onnx model files. 
+# onnxModelVersion = 15 + # =========================================================================== # Root move selection and biases # =========================================================================== diff --git a/cpp/configs/match_example.cfg b/cpp/configs/match_example.cfg index 7e5b4fc09..11b271ef0 100644 --- a/cpp/configs/match_example.cfg +++ b/cpp/configs/match_example.cfg @@ -156,9 +156,8 @@ numNNServerThreadsPerModel = 1 # cudaUseNHWC = auto -# ------------------------------ -# Metal GPU settings -# ------------------------------ +# Metal GPU settings-------------------------------------- + # These only apply when using the METAL version of KataGo. # For one Metal instance: KataGo will automatically use the default device. @@ -172,6 +171,31 @@ numNNServerThreadsPerModel = 1 # The pattern continues for additional Metal instances. +# ROCm GPU settings-------------------------------------- +# These only apply when using the ROCm version of KataGo. + +# IF USING ONE GPU: optionally uncomment and change this if the GPU you want to use turns out to be not device 0 +# rocmDeviceToUse = 0 + +# IF USING TWO GPUS: Uncomment these two lines (AND set numNNServerThreadsPerModel above): +# rocmDeviceToUseThread0 = 0 # change this if the first GPU you want to use turns out to be not device 0 +# rocmDeviceToUseThread1 = 1 # change this if the second GPU you want to use turns out to be not device 1 + +# IF USING THREE GPUS: Uncomment these three lines (AND set numNNServerThreadsPerModel above): +# rocmDeviceToUseThread0 = 0 # change this if the first GPU you want to use turns out to be not device 0 +# rocmDeviceToUseThread1 = 1 # change this if the second GPU you want to use turns out to be not device 1 +# rocmDeviceToUseThread2 = 2 # change this if the third GPU you want to use turns out to be not device 2 + +# You can probably guess the pattern if you have four, five, etc. GPUs. + +# KataGo will automatically use FP16 or not based on the compute capability of your AMD GPU. If you +# want to try to force a particular behavior though you can uncomment these lines and change them +# to "true" or "false". E.g. it's using FP16 but on your card that's giving an error, or it's not using +# FP16 but you think it should. +# rocmUseFP16 = auto +# ROCm does not support NHWC, so this is always false. + + # OpenCL GPU settings-------------------------------------- # These only apply when using OpenCL as the backend for inference. # (For GTP, we only ever have one model, when playing matches, we might have more than one, see match_example.cfg) @@ -190,6 +214,36 @@ numNNServerThreadsPerModel = 1 # openclUseFP16 = auto +# ONNX Runtime settings-------------------------------------- +# These only apply when using the ONNX version of KataGo. 
+ +# Execution provider: cpu (default), openvino, cuda, tensorrt, migraphx, coreml(macOS only) +# onnxProvider = openvino + +# Multi-device assignment is mainly for onnxProvider = cuda / tensorrt / migraphx: +# onnxDeviceToUse = 0 +# onnxDeviceToUseModel0 = 0 +# onnxDeviceToUseModel1 = 1 +# onnxDeviceToUseModel0Thread0 = 0 +# onnxDeviceToUseModel0Thread1 = 1 + +# OpenVINO EP options for Intel NPU (typically single device): +# onnxOpenVINODeviceType = NPU +# onnxOpenVINODeviceId = 0 +# onnxOpenVINOEnableNPUFastCompile = true # may be ignored if unsupported by your ORT/OpenVINO build +# onnxOpenVINOCacheDir = C:\\temp\\katago_ov_cache + +# Optional overrides for raw .onnx I/O tensor names and model version: +# onnxInputSpatial = input_spatial +# onnxInputGlobal = input_global +# onnxInputMeta = input_meta +# onnxOutputPolicy = out_policy +# onnxOutputValue = out_value +# onnxOutputMiscvalue = out_miscvalue +# onnxOutputOwnership = out_ownership +# onnxModelVersion = 15 + + # Eigen-specific settings-------------------------------------- # These only apply when using the Eigen (pure CPU) version of KataGo. diff --git a/cpp/dataio/loadmodel.cpp b/cpp/dataio/loadmodel.cpp index 81483b170..71d3addf3 100644 --- a/cpp/dataio/loadmodel.cpp +++ b/cpp/dataio/loadmodel.cpp @@ -20,30 +20,34 @@ std::time_t to_time_t(TP tp) static const vector ACCEPTABLE_MODEL_SUFFIXES { ".bin.gz", ".bin", + ".onnx", "model.txt.gz", "model.txt" }; static const vector GENERIC_MODEL_NAMES { "model.bin.gz", "model.bin", + "model.onnx", "model.txt.gz", - "model.txt" + "model.txt", "Model.bin.gz", "Model.bin", + "Model.onnx", "Model.txt.gz", - "Model.txt" + "Model.txt", "MODEL.bin.gz", "MODEL.bin", + "MODEL.onnx", "MODEL.txt.gz", - "MODEL.txt" + "MODEL.txt", "model.ckpt", - "Model.ckpt" + "Model.ckpt", "MODEL.ckpt", "model.checkpoint", - "Model.checkpoint" + "Model.checkpoint", "MODEL.checkpoint", "model", - "Model" + "Model", "MODEL", }; @@ -115,7 +119,8 @@ void LoadModel::deleteModelsOlderThan(const string& modelsDir, Logger& logger, c if(Global::isSuffix(filePathStr,".bin.gz") || Global::isSuffix(filePathStr,".txt.gz") || Global::isSuffix(filePathStr,".bin") || - Global::isSuffix(filePathStr,".txt")) { + Global::isSuffix(filePathStr,".txt") || + Global::isSuffix(filePathStr,".onnx")) { time_t thisTime = to_time_t(gfs::last_write_time(filePath)); if(thisTime < time) { pathsToRemove.push_back(filePath); diff --git a/cpp/main.cpp b/cpp/main.cpp index 0fcc36dea..801e36183 100644 --- a/cpp/main.cpp +++ b/cpp/main.cpp @@ -29,6 +29,7 @@ static void printHelp(const vector& args) { gtp : Runs GTP engine that can be plugged into any standard Go GUI for play/analysis. benchmark : Test speed with different numbers of search threads. genconfig : User-friendly interface to generate a config with rules and automatic performance tuning. +exportonnx : Export KataGo .bin/.bin.gz model to a fixed-size .onnx model. contribute : Connect to online distributed KataGo training and run perpetually contributing selfplay games. 
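For reference, the new ONNX keys documented in the example configs above combine into a small fragment like the following when targeting an Intel NPU. This is an illustrative sketch only: the key names and accepted values are the ones listed in the config sections above, while the cache path is a placeholder and the commented-out lines show optional overrides rather than shipped defaults.

```
# Illustrative ONNX/OpenVINO fragment for a gtp or match config (sketch, not shipped defaults)
onnxProvider = openvino
onnxOpenVINODeviceType = NPU
# onnxOpenVINODeviceId = 0                   # usually unnecessary with a single NPU
# onnxOpenVINOEnableNPUFastCompile = true    # may be ignored by some ORT/OpenVINO builds
onnxOpenVINOCacheDir = /tmp/katago_ov_cache  # placeholder path for compiled OpenVINO blobs
# onnxModelVersion = 15                      # only for raw .onnx files if auto-detection guesses wrong
```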
@@ -169,6 +170,8 @@ static int handleSubcommand(const string& subcommand, const vector& args return MainCmds::runsleeptest(subArgs); else if(subcommand == "printclockinfo") return MainCmds::printclockinfo(subArgs); + else if(subcommand == "exportonnx") + return MainCmds::exportonnx(subArgs); else if(subcommand == "sandbox") return MainCmds::sandbox(); else if(subcommand == "version") { @@ -248,6 +251,8 @@ string Version::getKataGoVersionFullInfo() { out << "Using OpenCL backend" << endl; #elif defined(USE_EIGEN_BACKEND) out << "Using Eigen(CPU) backend" << endl; +#elif defined(USE_ONNX_BACKEND) + out << "Using ONNX backend" << endl; #else out << "Using dummy backend" << endl; #endif @@ -284,6 +289,8 @@ string Version::getGitRevisionWithBackend() { s += "-opencl"; #elif defined(USE_EIGEN_BACKEND) s += "-eigen"; +#elif defined(USE_ONNX_BACKEND) + s += "-onnx"; #else s += "-dummy"; #endif diff --git a/cpp/main.h b/cpp/main.h index 3f8ad78d4..4f03f418e 100644 --- a/cpp/main.h +++ b/cpp/main.h @@ -53,6 +53,7 @@ namespace MainCmds { int demoplay(const std::vector& args); int printclockinfo(const std::vector& args); + int exportonnx(const std::vector& args); int sampleinitializations(const std::vector& args); int evalrandominits(const std::vector& args); int searchentropyanalysis(const std::vector& args); diff --git a/cpp/neuralnet/onnxbackend.cpp b/cpp/neuralnet/onnxbackend.cpp new file mode 100644 index 000000000..4a537a977 --- /dev/null +++ b/cpp/neuralnet/onnxbackend.cpp @@ -0,0 +1,867 @@ +// ONNX Runtime backend for KataGo. +// Loads standard .bin.gz model files (builds ONNX graph from ModelDesc) or +// raw .onnx model files directly, and runs inference via ONNX Runtime with a +// configurable execution provider (CPU, OpenVINO, CUDA, TensorRT, MIGraphX, CoreML) +// selected at +// runtime via the onnxProvider config key. + +#include "../neuralnet/nninterface.h" +#include "../neuralnet/nneval.h" +#include "../neuralnet/nninputs.h" +#include "../neuralnet/modelversion.h" +#include "../neuralnet/onnxmodelbuilder.h" + +#include +#ifdef __APPLE__ +#include +#endif + +#include +#include + +using namespace std; + +//-------------------------------------------------------------- + +// Auto-detect modelVersion from introspected channel counts. +// +// Detection is based on channel-count heuristics for raw .onnx files where the +// model version is not encoded in the file. The mapping assumes V7 inputs +// (22 spatial + 19 global channels) and distinguishes versions by the number of +// score-value and policy output channels: +// - 4 score-value channels -> version 8 +// - 6 score-value channels, 1 policy channel -> version 10 +// - 6 score-value channels, 2 policy channels -> version 15 +// +// If the heuristic picks the wrong version, set the `onnxModelVersion` config +// key to the correct value (>= 0) to override auto-detection. 
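+//
+// For example, under this heuristic a network with 22 spatial and 19 global
+// input channels, 2 policy channels, and 6 score-value channels is detected
+// as model version 15.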
+static int detectModelVersion( + int numInputChannels, int numInputGlobalChannels, + int numPolicyChannels, int numScoreValueChannels, + int configModelVersion +) { + if(configModelVersion >= 0) + return configModelVersion; + + // inputsVersion 7 -> models 8-16: 22 spatial + 19 global + if(numInputChannels == NNInputs::NUM_FEATURES_SPATIAL_V7 && + numInputGlobalChannels == NNInputs::NUM_FEATURES_GLOBAL_V7) { + if(numScoreValueChannels == 6 && numPolicyChannels == 2) + return 15; + if(numScoreValueChannels == 6 && numPolicyChannels == 1) + return 10; + if(numScoreValueChannels == 4) + return 8; + // Default for V7 inputs + return 15; + } + // Older input versions -- fall back to a reasonable default + return NNModelVersion::defaultModelVersion; +} + +struct LoadedModel { + ModelDesc modelDesc; + bool isRawOnnx; + string rawOnnxBytes; + + // Constructor for .bin.gz files + LoadedModel(const string& fileName, const string& expectedSha256, bool rawOnnx) + : isRawOnnx(rawOnnx) + { + if(!rawOnnx) { + ModelDesc::loadFromFileMaybeGZipped(fileName, modelDesc, expectedSha256); + return; + } + + // Read raw .onnx file bytes + { + std::ifstream in(fileName, std::ios::binary | std::ios::ate); + if(!in.good()) + throw StringError("ONNX backend: could not open raw ONNX file: " + fileName); + std::streamsize size = in.tellg(); + if(size < 0) + throw StringError("ONNX backend: could not determine size of ONNX file: " + fileName); + in.seekg(0, std::ios::beg); + rawOnnxBytes.resize(size); + if(!in.read(rawOnnxBytes.data(), size)) + throw StringError("ONNX backend: failed to read raw ONNX file: " + fileName); + } + + // Create a temporary CPU session to introspect shapes + Ort::Env tmpEnv(ORT_LOGGING_LEVEL_WARNING, "KataGoOnnxIntrospect"); + Ort::SessionOptions tmpOpts; + tmpOpts.SetIntraOpNumThreads(1); + Ort::Session tmpSession(tmpEnv, rawOnnxBytes.data(), rawOnnxBytes.size(), tmpOpts); + + Ort::AllocatorWithDefaultOptions allocator; + + // Introspect inputs by name first, falling back to shape-based heuristic + int numInputChannels = 0; + int numInputGlobalChannels = 0; + int numInputMetaChannels = 0; + size_t numInputs = tmpSession.GetInputCount(); + for(size_t i = 0; i < numInputs; i++) { + Ort::AllocatedStringPtr namePtr = tmpSession.GetInputNameAllocated(i, allocator); + string name = namePtr.get(); + auto typeInfo = tmpSession.GetInputTypeInfo(i); + auto tensorInfo = typeInfo.GetTensorTypeAndShapeInfo(); + auto shape = tensorInfo.GetShape(); + if(name.find("spatial") != string::npos) { + if(shape.size() >= 2) + numInputChannels = (int)shape[1]; + } else if(name.find("global") != string::npos) { + if(shape.size() >= 2) + numInputGlobalChannels = (int)shape[1]; + } else if(name.find("meta") != string::npos) { + if(shape.size() >= 2) + numInputMetaChannels = (int)shape[1]; + } else if(shape.size() == 4) { + // Shape-based fallback: [N, C, H, W] -- spatial input + numInputChannels = (int)shape[1]; + } else if(shape.size() == 2) { + // Shape-based fallback: [N, C] -- first 2D is global, second is meta + if(numInputGlobalChannels == 0) + numInputGlobalChannels = (int)shape[1]; + else + numInputMetaChannels = (int)shape[1]; + } else { + cerr << "ONNX backend warning: unrecognized input tensor '" << name + << "' with " << shape.size() << "D shape, ignoring" << "\n"; + } + } + + // Introspect outputs + int numPolicyChannels = 0; + int numValueChannels = 0; + int numScoreValueChannels = 0; + int numOwnershipChannels = 0; + size_t numOutputs = tmpSession.GetOutputCount(); + for(size_t i = 0; i < numOutputs; 
i++) { + Ort::AllocatedStringPtr namePtr = tmpSession.GetOutputNameAllocated(i, allocator); + string name = namePtr.get(); + auto typeInfo = tmpSession.GetOutputTypeInfo(i); + auto tensorInfo = typeInfo.GetTensorTypeAndShapeInfo(); + auto shape = tensorInfo.GetShape(); + + if(name.find("policy") != string::npos) { + // Policy: [N, C, H*W+1] -> dim 1 is policy channels + if(shape.size() >= 2) + numPolicyChannels = (int)shape[1]; + } else if(name.find("miscvalue") != string::npos) { + // MiscValue: [N, numScoreValueChannels] -- check before "value" since "miscvalue" contains "value" + if(shape.size() >= 2) + numScoreValueChannels = (int)shape[1]; + } else if(name.find("value") != string::npos) { + // Value: [N, 3] + if(shape.size() >= 2) + numValueChannels = (int)shape[1]; + } else if(name.find("ownership") != string::npos) { + // Ownership: [N, 1, H, W] + if(shape.size() >= 2) + numOwnershipChannels = (int)shape[1]; + } + } + + // Populate ModelDesc metadata (weights are in the ONNX graph, not in modelDesc) + modelDesc.numInputChannels = numInputChannels; + modelDesc.numInputGlobalChannels = numInputGlobalChannels; + modelDesc.numInputMetaChannels = numInputMetaChannels; + modelDesc.numPolicyChannels = numPolicyChannels; + modelDesc.numValueChannels = numValueChannels; + modelDesc.numScoreValueChannels = numScoreValueChannels; + modelDesc.numOwnershipChannels = numOwnershipChannels; + + // Extract filename stem as model name + { + size_t lastSlash = fileName.find_last_of("/\\"); + string basename = (lastSlash != string::npos) ? fileName.substr(lastSlash + 1) : fileName; + size_t dotPos = basename.find('.'); + modelDesc.name = (dotPos != string::npos) ? basename.substr(0, dotPos) : basename; + } + + // Model version: auto-detect with possible config override (applied later) + modelDesc.modelVersion = detectModelVersion( + numInputChannels, numInputGlobalChannels, + numPolicyChannels, numScoreValueChannels, + -1 // No config override at load time; applied in createComputeHandle if needed + ); + + // postProcessParams gets default values from its constructor (already set) + } + + LoadedModel() = delete; + LoadedModel(const LoadedModel&) = delete; + LoadedModel& operator=(const LoadedModel&) = delete; +}; + +LoadedModel* NeuralNet::loadModelFile(const string& file, const string& expectedSha256) { + bool isRawOnnx = Global::isSuffix(file, ".onnx"); + return new LoadedModel(file, expectedSha256, isRawOnnx); +} + +void NeuralNet::freeLoadedModel(LoadedModel* loadedModel) { + delete loadedModel; +} + +const ModelDesc& NeuralNet::getModelDesc(const LoadedModel* loadedModel) { + return loadedModel->modelDesc; +} + +//-------------------------------------------------------------- + +struct ComputeContext { + Ort::Env env; + int nnXLen; + int nnYLen; + string providerName; + string openvinoDeviceType; + string openvinoDeviceId; + bool openvinoEnableNPUFastCompile; + string openvinoCacheDir; + + // Configurable input/output node names + string inputSpatialName; + string inputGlobalName; + string inputMetaName; + string outputPolicyName; + string outputValueName; + string outputMiscvalueName; + string outputOwnershipName; + + // Config override for model version (-1 means auto-detect) + int configModelVersion; + + ComputeContext(int xLen, int yLen, const string& provider) + : env(ORT_LOGGING_LEVEL_WARNING, "KataGoOnnx"), + nnXLen(xLen), + nnYLen(yLen), + providerName(provider), + openvinoDeviceType("NPU"), + openvinoDeviceId(""), + openvinoEnableNPUFastCompile(false), + openvinoCacheDir(""), + 
inputSpatialName("input_spatial"), + inputGlobalName("input_global"), + inputMetaName("input_meta"), + outputPolicyName("out_policy"), + outputValueName("out_value"), + outputMiscvalueName("out_miscvalue"), + outputOwnershipName("out_ownership"), + configModelVersion(-1) + {} +}; + +//-------------------------------------------------------------- + +struct ComputeHandle { + ComputeContext* context; + std::unique_ptr session; + int modelVersion; + int numInputChannels; + int numInputGlobalChannels; + int numPolicyChannels; + int numValueChannels; + int numScoreValueChannels; + int numOwnershipChannels; + int numInputMetaChannels; + int policyResultLen; // H*W+1 + + // Input/output names (stored for session->Run) + vector inputNames; + vector outputNames; + vector inputNamePtrs; + vector outputNamePtrs; + + ComputeHandle(ComputeContext* ctx, const LoadedModel& loadedModel, Logger* logger, int deviceIdxForThread) + : context(ctx), + modelVersion(loadedModel.modelDesc.modelVersion), + numInputChannels(loadedModel.modelDesc.numInputChannels), + numInputGlobalChannels(loadedModel.modelDesc.numInputGlobalChannels), + numPolicyChannels(loadedModel.modelDesc.numPolicyChannels), + numValueChannels(loadedModel.modelDesc.numValueChannels), + numScoreValueChannels(loadedModel.modelDesc.numScoreValueChannels), + numOwnershipChannels(loadedModel.modelDesc.numOwnershipChannels), + numInputMetaChannels(loadedModel.modelDesc.numInputMetaChannels), + policyResultLen(ctx->nnXLen * ctx->nnYLen + 1) + { + // Apply config model version override if set + if(ctx->configModelVersion >= 0) + modelVersion = ctx->configModelVersion; + + const char* onnxData; + size_t onnxSize; + string builtOnnxBytes; + if(loadedModel.isRawOnnx) { + if(logger != NULL) + logger->write("ONNX backend: using raw ONNX model (" + + Global::uint64ToString(loadedModel.rawOnnxBytes.size()) + " bytes)"); + onnxData = loadedModel.rawOnnxBytes.data(); + onnxSize = loadedModel.rawOnnxBytes.size(); + } else { + if(logger != NULL) + logger->write("ONNX backend: building ONNX graph from model weights..."); + builtOnnxBytes = OnnxModelBuilder::buildOnnxModel(loadedModel.modelDesc, ctx->nnXLen, ctx->nnYLen); + if(logger != NULL) + logger->write("ONNX backend: ONNX graph built (" + Global::uint64ToString(builtOnnxBytes.size()) + " bytes)"); + onnxData = builtOnnxBytes.data(); + onnxSize = builtOnnxBytes.size(); + } + + if(logger != NULL) + logger->write("ONNX backend: creating session..."); + + Ort::SessionOptions sessionOpts; + sessionOpts.SetIntraOpNumThreads(1); + + // Select execution provider based on providerName + const string& provider = ctx->providerName; + if(provider == "coreml") { +#ifdef __APPLE__ + uint32_t coremlFlags = COREML_FLAG_CREATE_MLPROGRAM; + Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_CoreML(sessionOpts, coremlFlags)); + if(logger != NULL) + logger->write("ONNX backend: CoreML execution provider enabled (MLProgram mode)"); +#else + throw StringError("ONNX backend: CoreML is only available on Apple platforms"); +#endif + } else if(provider == "cuda") { + OrtCUDAProviderOptions cudaOpts{}; + cudaOpts.device_id = deviceIdxForThread >= 0 ? deviceIdxForThread : 0; + sessionOpts.AppendExecutionProvider_CUDA(cudaOpts); + if(logger != NULL) + logger->write("ONNX backend: CUDA execution provider enabled, device_id=" + Global::intToString(cudaOpts.device_id)); + } else if(provider == "tensorrt") { + OrtTensorRTProviderOptions trtOpts{}; + trtOpts.device_id = deviceIdxForThread >= 0 ? 
deviceIdxForThread : 0; + sessionOpts.AppendExecutionProvider_TensorRT(trtOpts); + if(logger != NULL) + logger->write("ONNX backend: TensorRT execution provider enabled, device_id=" + Global::intToString(trtOpts.device_id)); + } else if(provider == "migraphx") { + OrtMIGraphXProviderOptions migraphxOpts{}; + migraphxOpts.device_id = deviceIdxForThread >= 0 ? deviceIdxForThread : 0; + sessionOpts.AppendExecutionProvider_MIGraphX(migraphxOpts); + if(logger != NULL) + logger->write("ONNX backend: MIGraphX execution provider enabled, device_id=" + Global::intToString(migraphxOpts.device_id)); + } else if(provider == "openvino") { + std::unordered_map openvinoOpts; + openvinoOpts["device_type"] = ctx->openvinoDeviceType; + if(!ctx->openvinoDeviceId.empty()) + openvinoOpts["device_id"] = ctx->openvinoDeviceId; + if(!ctx->openvinoCacheDir.empty()) + openvinoOpts["cache_dir"] = ctx->openvinoCacheDir; + + if(ctx->openvinoEnableNPUFastCompile && logger != NULL) { + logger->write( + "ONNX backend: onnxOpenVINOEnableNPUFastCompile requested, but this ORT build may not " + "accept 'enable_npu_fast_compile'; currently ignoring this option for compatibility." + ); + } + + // Some ORT OpenVINO builds may not accept optional keys like cache_dir. + // Retry with only core device keys if optional keys are rejected. + try { + sessionOpts.AppendExecutionProvider_OpenVINO_V2(openvinoOpts); + } + catch(const Ort::Exception& e) { + bool hadOptionalKeys = openvinoOpts.count("cache_dir") > 0; + if(!hadOptionalKeys) + throw; + + if(logger != NULL) { + logger->write( + string("ONNX backend: OpenVINO optional provider options rejected, retrying without optional keys. Error: ") + + e.what() + ); + } + openvinoOpts.erase("cache_dir"); + sessionOpts.AppendExecutionProvider_OpenVINO_V2(openvinoOpts); + } + + if(logger != NULL) { + string deviceId = openvinoOpts.count("device_id") > 0 ? openvinoOpts["device_id"] : ""; + logger->write( + "ONNX backend: OpenVINO execution provider enabled, device_type=" + ctx->openvinoDeviceType + + (deviceId.empty() ? 
"" : (", device_id=" + deviceId)) + ); + } + } else if(provider == "cpu" || provider.empty()) { + if(logger != NULL) + logger->write("ONNX backend: using CPU execution provider"); + } else { + throw StringError("ONNX backend: unknown onnxProvider '" + provider + "', expected 'cpu', 'coreml', 'cuda', 'tensorrt', 'migraphx', or 'openvino'"); + } + + // Create session from in-memory bytes + session = std::make_unique(ctx->env, onnxData, onnxSize, sessionOpts); + + // Query and store input names + Ort::AllocatorWithDefaultOptions allocator; + size_t numInputs = session->GetInputCount(); + for(size_t i = 0; i < numInputs; i++) { + Ort::AllocatedStringPtr name = session->GetInputNameAllocated(i, allocator); + inputNames.push_back(name.get()); + } + for(auto& n : inputNames) + inputNamePtrs.push_back(n.c_str()); + + // Query and store output names + size_t numOutputs = session->GetOutputCount(); + for(size_t i = 0; i < numOutputs; i++) { + Ort::AllocatedStringPtr name = session->GetOutputNameAllocated(i, allocator); + outputNames.push_back(name.get()); + } + for(auto& n : outputNames) + outputNamePtrs.push_back(n.c_str()); + + if(logger != NULL) + logger->write("ONNX backend: session created, inputs=" + Global::uint64ToString(numInputs) + + " outputs=" + Global::uint64ToString(numOutputs)); + } + + ComputeHandle() = delete; + ComputeHandle(const ComputeHandle&) = delete; + ComputeHandle& operator=(const ComputeHandle&) = delete; +}; + +//-------------------------------------------------------------- + +struct InputBuffers { + int maxBatchSize; + + size_t singleInputElts; + size_t singleInputGlobalElts; + size_t singleInputMetaElts; + + vector spatialInput; + vector globalInput; + vector metaInput; + + InputBuffers(const LoadedModel* loadedModel, int maxBatchSz, int nnXLen, int nnYLen) { + const ModelDesc& m = loadedModel->modelDesc; + maxBatchSize = maxBatchSz; + singleInputElts = (size_t)m.numInputChannels * nnXLen * nnYLen; + singleInputGlobalElts = (size_t)m.numInputGlobalChannels; + singleInputMetaElts = (size_t)m.numInputMetaChannels; + spatialInput.resize(singleInputElts * maxBatchSize, 0.0f); + globalInput.resize(singleInputGlobalElts * maxBatchSize, 0.0f); + if(m.numInputMetaChannels > 0) + metaInput.resize(singleInputMetaElts * maxBatchSize, 0.0f); + } + + ~InputBuffers() {} + + InputBuffers() = delete; + InputBuffers(const InputBuffers&) = delete; + InputBuffers& operator=(const InputBuffers&) = delete; +}; + +InputBuffers* NeuralNet::createInputBuffers(const LoadedModel* loadedModel, int maxBatchSize, int nnXLen, int nnYLen) { + return new InputBuffers(loadedModel, maxBatchSize, nnXLen, nnYLen); +} +void NeuralNet::freeInputBuffers(InputBuffers* inputBuffers) { + delete inputBuffers; +} + +//-------------------------------------------------------------- + +void NeuralNet::globalInitialize() { +} + +void NeuralNet::globalCleanup() { +} + +//-------------------------------------------------------------- + +ComputeContext* NeuralNet::createComputeContext( + const std::vector& gpuIdxs, + Logger* logger, + int nnXLen, + int nnYLen, + const string& backendExtraParam, + const string& homeDataDirOverride, + bool openCLReTunePerBoardSize, + enabled_t useFP16Mode, + enabled_t useNHWCMode, + const LoadedModel* loadedModel +) { + (void)gpuIdxs; + (void)homeDataDirOverride; + (void)openCLReTunePerBoardSize; + (void)useFP16Mode; + (void)useNHWCMode; + (void)loadedModel; + + // Parse backendExtraParam as "key=value;key=value;..." 
+ string providerName = "cpu"; + map params; + if(!backendExtraParam.empty()) { + vector parts = Global::split(backendExtraParam, ';'); + for(const string& part : parts) { + size_t eq = part.find('='); + if(eq != string::npos) { + string key = Global::trim(part.substr(0, eq)); + string val = Global::trim(part.substr(eq + 1)); + params[key] = val; + } else { + // Legacy: bare string is provider name + string trimmed = Global::trim(part); + if(!trimmed.empty()) + providerName = trimmed; + } + } + if(params.count("provider")) + providerName = params["provider"]; + } + providerName = Global::toLower(providerName); + + if(logger != NULL) + logger->write("ONNX backend: creating compute context for " + + Global::intToString(nnXLen) + "x" + Global::intToString(nnYLen) + + " with provider '" + providerName + "'"); + + ComputeContext* ctx = new ComputeContext(nnXLen, nnYLen, providerName); + + // Apply configured node names + if(params.count("inputSpatial")) ctx->inputSpatialName = params["inputSpatial"]; + if(params.count("inputGlobal")) ctx->inputGlobalName = params["inputGlobal"]; + if(params.count("inputMeta")) ctx->inputMetaName = params["inputMeta"]; + if(params.count("outputPolicy")) ctx->outputPolicyName = params["outputPolicy"]; + if(params.count("outputValue")) ctx->outputValueName = params["outputValue"]; + if(params.count("outputMiscvalue")) ctx->outputMiscvalueName = params["outputMiscvalue"]; + if(params.count("outputOwnership")) ctx->outputOwnershipName = params["outputOwnership"]; + if(params.count("openvinoDeviceType")) ctx->openvinoDeviceType = params["openvinoDeviceType"]; + if(params.count("openvinoDeviceId")) ctx->openvinoDeviceId = params["openvinoDeviceId"]; + if(params.count("openvinoEnableNPUFastCompile")) { + string v = Global::toLower(params["openvinoEnableNPUFastCompile"]); + ctx->openvinoEnableNPUFastCompile = (v == "1" || v == "true" || v == "yes" || v == "on"); + } + if(params.count("openvinoCacheDir")) ctx->openvinoCacheDir = params["openvinoCacheDir"]; + if(params.count("modelVersion")) { + int v = Global::stringToInt(params["modelVersion"]); + if(v >= 0) + ctx->configModelVersion = v; + } + + return ctx; +} + +void NeuralNet::freeComputeContext(ComputeContext* computeContext) { + delete computeContext; +} + +//-------------------------------------------------------------- + +ComputeHandle* NeuralNet::createComputeHandle( + ComputeContext* context, + const LoadedModel* loadedModel, + Logger* logger, + int maxBatchSize, + bool requireExactNNLen, + bool inputsUseNHWC, + int gpuIdxForThisThread, + int serverThreadIdx +) { + (void)maxBatchSize; + (void)requireExactNNLen; + if(inputsUseNHWC) + throw StringError("ONNX backend: inputsUseNHWC = true not supported, must use NCHW"); + + if(logger != NULL) { + logger->write("ONNX backend thread " + Global::intToString(serverThreadIdx) + + ": Model version " + Global::intToString(loadedModel->modelDesc.modelVersion)); + logger->write("ONNX backend thread " + Global::intToString(serverThreadIdx) + + ": Model name: " + loadedModel->modelDesc.name); + string deviceInfo = + context->providerName == "openvino" + ? 
"n/a (use onnxOpenVINODeviceType/onnxOpenVINODeviceId)" + : Global::intToString(gpuIdxForThisThread); + logger->write("ONNX backend thread " + Global::intToString(serverThreadIdx) + + ": provider=" + context->providerName + + " deviceIdx=" + deviceInfo); + } + + return new ComputeHandle(context, *loadedModel, logger, gpuIdxForThisThread); +} + +void NeuralNet::freeComputeHandle(ComputeHandle* computeHandle) { + delete computeHandle; +} + +bool NeuralNet::isUsingFP16(const ComputeHandle* handle) { + (void)handle; + return false; +} + +//-------------------------------------------------------------- + +// Helper to find the index of a name in a vector, checking multiple alternatives. +static int findNameIndex(const vector& names, const vector& targets) { + for(size_t i = 0; i < names.size(); i++) { + for(const auto& t : targets) { + if(names[i] == t) + return (int)i; + } + } + return -1; +} + +void NeuralNet::getOutput( + ComputeHandle* computeHandle, + InputBuffers* inputBuffers, + int numBatchEltsFilled, + NNResultBuf** inputBufs, + vector& outputs +) { + assert(numBatchEltsFilled <= inputBuffers->maxBatchSize); + assert(numBatchEltsFilled > 0); + const int batchSize = numBatchEltsFilled; + const int nnXLen = computeHandle->context->nnXLen; + const int nnYLen = computeHandle->context->nnYLen; + const int numSpatialFeatures = computeHandle->numInputChannels; + const int numGlobalFeatures = computeHandle->numInputGlobalChannels; + const int numPolicyChannels = computeHandle->numPolicyChannels; + + // Fill input buffers + for(int nIdx = 0; nIdx < batchSize; nIdx++) { + float* rowSpatialInput = inputBuffers->spatialInput.data() + (inputBuffers->singleInputElts * nIdx); + float* rowGlobalInput = inputBuffers->globalInput.data() + (inputBuffers->singleInputGlobalElts * nIdx); + + const float* rowGlobal = inputBufs[nIdx]->rowGlobalBuf.data(); + const float* rowSpatial = inputBufs[nIdx]->rowSpatialBuf.data(); + std::copy(rowGlobal, rowGlobal + numGlobalFeatures, rowGlobalInput); + SymmetryHelpers::copyInputsWithSymmetry(rowSpatial, rowSpatialInput, 1, nnYLen, nnXLen, numSpatialFeatures, false, inputBufs[nIdx]->symmetry); + + if(computeHandle->numInputMetaChannels > 0) { + float* rowMetaInput = inputBuffers->metaInput.data() + (inputBuffers->singleInputMetaElts * nIdx); + const float* rowMeta = inputBufs[nIdx]->rowMetaBuf.data(); + std::copy(rowMeta, rowMeta + computeHandle->numInputMetaChannels, rowMetaInput); + } + } + + // Create ONNX tensors + Ort::MemoryInfo memInfo = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault); + + std::array spatialShape = {batchSize, numSpatialFeatures, nnYLen, nnXLen}; + Ort::Value spatialTensor = Ort::Value::CreateTensor( + memInfo, inputBuffers->spatialInput.data(), inputBuffers->singleInputElts * batchSize, + spatialShape.data(), spatialShape.size() + ); + + std::array globalShape = {batchSize, numGlobalFeatures}; + Ort::Value globalTensor = Ort::Value::CreateTensor( + memInfo, inputBuffers->globalInput.data(), inputBuffers->singleInputGlobalElts * batchSize, + globalShape.data(), globalShape.size() + ); + + // Match input ordering using configured node names + const ComputeContext* ctx = computeHandle->context; + int spatialIdx = findNameIndex(computeHandle->inputNames, {ctx->inputSpatialName}); + int globalIdx = findNameIndex(computeHandle->inputNames, {ctx->inputGlobalName}); + if(spatialIdx < 0 || globalIdx < 0) + throw StringError("ONNX backend: could not find expected input names"); + + int metaIdx = -1; + Ort::Value metaTensor(nullptr); + 
if(computeHandle->numInputMetaChannels > 0) { + metaIdx = findNameIndex(computeHandle->inputNames, {ctx->inputMetaName}); + if(metaIdx < 0) + throw StringError("ONNX backend: model has metadata channels but could not find input_meta"); + std::array metaShape = {batchSize, computeHandle->numInputMetaChannels}; + metaTensor = Ort::Value::CreateTensor( + memInfo, inputBuffers->metaInput.data(), inputBuffers->singleInputMetaElts * batchSize, + metaShape.data(), metaShape.size() + ); + } + + vector inputTensors; + inputTensors.reserve(computeHandle->inputNames.size()); + for(size_t i = 0; i < computeHandle->inputNames.size(); i++) { + if((int)i == spatialIdx) + inputTensors.push_back(std::move(spatialTensor)); + else if((int)i == globalIdx) + inputTensors.push_back(std::move(globalTensor)); + else if((int)i == metaIdx) + inputTensors.push_back(std::move(metaTensor)); + else { + throw StringError("ONNX backend: unexpected input node '" + computeHandle->inputNames[i] + + "' -- only spatial, global, and meta inputs are supported"); + } + } + + // Run inference + auto outputTensors = computeHandle->session->Run( + Ort::RunOptions{nullptr}, + computeHandle->inputNamePtrs.data(), + inputTensors.data(), + inputTensors.size(), + computeHandle->outputNamePtrs.data(), + computeHandle->outputNamePtrs.size() + ); + + // Find output indices using configured node names + int policyOutputIdx = findNameIndex(computeHandle->outputNames, {ctx->outputPolicyName}); + int valueOutputIdx = findNameIndex(computeHandle->outputNames, {ctx->outputValueName}); + int miscvalueOutputIdx = findNameIndex(computeHandle->outputNames, {ctx->outputMiscvalueName}); + int ownershipOutputIdx = findNameIndex(computeHandle->outputNames, {ctx->outputOwnershipName}); + + if(policyOutputIdx < 0) + throw StringError("ONNX backend: could not find policy output node '" + ctx->outputPolicyName + "'"); + if(valueOutputIdx < 0) + throw StringError("ONNX backend: could not find value output node '" + ctx->outputValueName + "'"); + if(miscvalueOutputIdx < 0) + throw StringError("ONNX backend: could not find miscvalue output node '" + ctx->outputMiscvalueName + "'"); + if(ownershipOutputIdx < 0) + throw StringError("ONNX backend: could not find ownership output node '" + ctx->outputOwnershipName + "'"); + + const float* policyData = outputTensors[policyOutputIdx].GetTensorData(); + const float* valueData = outputTensors[valueOutputIdx].GetTensorData(); + const float* miscvalueData = outputTensors[miscvalueOutputIdx].GetTensorData(); + const float* ownershipData = outputTensors[ownershipOutputIdx].GetTensorData(); + + assert(policyData != nullptr); + assert(valueData != nullptr); + assert(miscvalueData != nullptr); + assert(ownershipData != nullptr); + assert((int)outputs.size() == batchSize); + + const int policyResultLen = computeHandle->policyResultLen; + const int spatialPolicyLen = nnXLen * nnYLen; + float policyProbsTmp[NNPos::MAX_NN_POLICY_SIZE]; + + for(int row = 0; row < batchSize; row++) { + NNOutput* output = outputs[row]; + assert(output->nnXLen == nnXLen); + assert(output->nnYLen == nnYLen); + float policyOptimism = (float)inputBufs[row]->policyOptimism; + + // Policy: [N, C, H*W+1] + { + const float* policyRowBase = policyData + row * numPolicyChannels * policyResultLen; + float* policyProbs = output->policyProbs; + + if(numPolicyChannels >= 2) { + const float* ch0 = policyRowBase; + const float* ch1 = policyRowBase + policyResultLen; + for(int i = 0; i < spatialPolicyLen; i++) { + float p = ch0[i]; + float pOpt = ch1[i]; + 
policyProbsTmp[i] = p + (pOpt - p) * policyOptimism; + } + SymmetryHelpers::copyOutputsWithSymmetry(policyProbsTmp, policyProbs, 1, nnYLen, nnXLen, inputBufs[row]->symmetry); + policyProbs[spatialPolicyLen] = ch0[spatialPolicyLen] + (ch1[spatialPolicyLen] - ch0[spatialPolicyLen]) * policyOptimism; + } else { + assert(numPolicyChannels == 1); + const float* ch0 = policyRowBase; + SymmetryHelpers::copyOutputsWithSymmetry(ch0, policyProbs, 1, nnYLen, nnXLen, inputBufs[row]->symmetry); + policyProbs[spatialPolicyLen] = ch0[spatialPolicyLen]; + } + } + + // Value: [N, 3] + { + int numVC = computeHandle->numValueChannels; + assert(numVC == 3); + output->whiteWinProb = valueData[row * numVC]; + output->whiteLossProb = valueData[row * numVC + 1]; + output->whiteNoResultProb = valueData[row * numVC + 2]; + } + + // MiscValue: [N, numScoreValueChannels] -- version-dependent interpretation + { + int numScoreValueChannels = computeHandle->numScoreValueChannels; + if(computeHandle->modelVersion >= 9) { + assert(numScoreValueChannels >= 6); + output->whiteScoreMean = miscvalueData[row * numScoreValueChannels]; + output->whiteScoreMeanSq = miscvalueData[row * numScoreValueChannels + 1]; + output->whiteLead = miscvalueData[row * numScoreValueChannels + 2]; + output->varTimeLeft = miscvalueData[row * numScoreValueChannels + 3]; + output->shorttermWinlossError = miscvalueData[row * numScoreValueChannels + 4]; + output->shorttermScoreError = miscvalueData[row * numScoreValueChannels + 5]; + } + else if(computeHandle->modelVersion >= 8) { + assert(numScoreValueChannels >= 4); + output->whiteScoreMean = miscvalueData[row * numScoreValueChannels]; + output->whiteScoreMeanSq = miscvalueData[row * numScoreValueChannels + 1]; + output->whiteLead = miscvalueData[row * numScoreValueChannels + 2]; + output->varTimeLeft = miscvalueData[row * numScoreValueChannels + 3]; + output->shorttermWinlossError = 0; + output->shorttermScoreError = 0; + } + else if(computeHandle->modelVersion >= 4) { + assert(numScoreValueChannels >= 2); + output->whiteScoreMean = miscvalueData[row * numScoreValueChannels]; + output->whiteScoreMeanSq = miscvalueData[row * numScoreValueChannels + 1]; + output->whiteLead = output->whiteScoreMean; + output->varTimeLeft = 0; + output->shorttermWinlossError = 0; + output->shorttermScoreError = 0; + } + else if(computeHandle->modelVersion >= 3) { + assert(numScoreValueChannels >= 1); + output->whiteScoreMean = miscvalueData[row * numScoreValueChannels]; + output->whiteScoreMeanSq = output->whiteScoreMean * output->whiteScoreMean; + output->whiteLead = output->whiteScoreMean; + output->varTimeLeft = 0; + output->shorttermWinlossError = 0; + output->shorttermScoreError = 0; + } + else { + ASSERT_UNREACHABLE; + } + } + + // Ownership: [N, 1, H, W] + if(output->whiteOwnerMap != NULL) { + assert(computeHandle->numOwnershipChannels == 1); + const float* ownershipRowBuf = ownershipData + row * nnXLen * nnYLen; + SymmetryHelpers::copyOutputsWithSymmetry(ownershipRowBuf, output->whiteOwnerMap, 1, nnYLen, nnXLen, inputBufs[row]->symmetry); + } + } +} + +void NeuralNet::printDevices() { + cout << "ONNX backend: device enumeration is provider-specific." << endl; + cout << "Use onnxProvider plus provider-specific settings in config." 
<< endl; +} + +//-------------------------------------------------------------- +// FOR TESTING -- all return false (not implemented for this backend) + +bool NeuralNet::testEvaluateConv( + const ConvLayerDesc* desc, int batchSize, int nnXLen, int nnYLen, + bool useFP16, bool useNHWC, const std::vector& inputBuffer, std::vector& outputBuffer +) { + (void)desc; (void)batchSize; (void)nnXLen; (void)nnYLen; + (void)useFP16; (void)useNHWC; (void)inputBuffer; (void)outputBuffer; + return false; +} + +bool NeuralNet::testEvaluateBatchNorm( + const BatchNormLayerDesc* desc, int batchSize, int nnXLen, int nnYLen, + bool useFP16, bool useNHWC, const std::vector& inputBuffer, + const std::vector& maskBuffer, std::vector& outputBuffer +) { + (void)desc; (void)batchSize; (void)nnXLen; (void)nnYLen; + (void)useFP16; (void)useNHWC; (void)inputBuffer; (void)maskBuffer; (void)outputBuffer; + return false; +} + +bool NeuralNet::testEvaluateResidualBlock( + const ResidualBlockDesc* desc, int batchSize, int nnXLen, int nnYLen, + bool useFP16, bool useNHWC, const std::vector& inputBuffer, + const std::vector& maskBuffer, std::vector& outputBuffer +) { + (void)desc; (void)batchSize; (void)nnXLen; (void)nnYLen; + (void)useFP16; (void)useNHWC; (void)inputBuffer; (void)maskBuffer; (void)outputBuffer; + return false; +} + +bool NeuralNet::testEvaluateGlobalPoolingResidualBlock( + const GlobalPoolingResidualBlockDesc* desc, int batchSize, int nnXLen, int nnYLen, + bool useFP16, bool useNHWC, const std::vector& inputBuffer, + const std::vector& maskBuffer, std::vector& outputBuffer +) { + (void)desc; (void)batchSize; (void)nnXLen; (void)nnYLen; + (void)useFP16; (void)useNHWC; (void)inputBuffer; (void)maskBuffer; (void)outputBuffer; + return false; +} diff --git a/cpp/neuralnet/onnxmodelbuilder.cpp b/cpp/neuralnet/onnxmodelbuilder.cpp new file mode 100644 index 000000000..d0a2ddb39 --- /dev/null +++ b/cpp/neuralnet/onnxmodelbuilder.cpp @@ -0,0 +1,774 @@ +// Builds an ONNX computational graph from a KataGo ModelDesc. +// Uses the ONNX protobuf API (onnx_pb.h) to construct a ModelProto +// that can be loaded directly by ONNX Runtime. 
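+//
+// The serialized bytes returned by OnnxModelBuilder::buildOnnxModel(modelDesc,
+// nnXLen, nnYLen) are what onnxbackend.cpp feeds to an in-memory Ort::Session;
+// e.g. a 19x19 board size ends up calling buildOnnxModel(desc, 19, 19).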
+ +#include "../neuralnet/onnxmodelbuilder.h" +#include "../neuralnet/activations.h" +#include "../core/global.h" + +#include + +#include +#include + +using namespace std; + +static string uniqueName(int& nameCounter, const string& prefix) { + return prefix + "_" + to_string(nameCounter++); +} + +// ===================================================================== +// Helper: Add a float tensor initializer to the graph +// ===================================================================== +static string addInitializer( + onnx::GraphProto* graph, + const string& name, + const vector& shape, + const float* data, + size_t numElements +) { + onnx::TensorProto* tensor = graph->add_initializer(); + tensor->set_name(name); + tensor->set_data_type(onnx::TensorProto_DataType_FLOAT); + for(int64_t d : shape) + tensor->add_dims(d); + tensor->set_raw_data(data, numElements * sizeof(float)); + return name; +} + +static string addInitializer( + onnx::GraphProto* graph, + const string& name, + const vector& shape, + const vector& data +) { + return addInitializer(graph, name, shape, data.data(), data.size()); +} + +// Add a scalar float constant +static string addScalarInitializer(onnx::GraphProto* graph, const string& name, float value) { + return addInitializer(graph, name, {}, &value, 1); +} + +// Add a 1D int64 constant tensor +static string addInt64Initializer( + onnx::GraphProto* graph, + const string& name, + const vector& data +) { + onnx::TensorProto* tensor = graph->add_initializer(); + tensor->set_name(name); + tensor->set_data_type(onnx::TensorProto_DataType_INT64); + tensor->add_dims((int64_t)data.size()); + tensor->set_raw_data(data.data(), data.size() * sizeof(int64_t)); + return name; +} + +// ===================================================================== +// Helper: Add ONNX graph node +// ===================================================================== + +// Generic node with n inputs, 1 output +static onnx::NodeProto* addNode( + onnx::GraphProto* graph, + const string& opType, + const vector& inputs, + const string& outputName +) { + onnx::NodeProto* node = graph->add_node(); + node->set_op_type(opType); + for(const auto& inp : inputs) + node->add_input(inp); + node->add_output(outputName); + return node; +} + +// Add an attribute (int) to a node +static void setAttrInt(onnx::NodeProto* node, const string& attrName, int64_t value) { + onnx::AttributeProto* attr = node->add_attribute(); + attr->set_name(attrName); + attr->set_type(onnx::AttributeProto_AttributeType_INT); + attr->set_i(value); +} + +// Add an attribute (ints) to a node +static void setAttrInts(onnx::NodeProto* node, const string& attrName, const vector& values) { + onnx::AttributeProto* attr = node->add_attribute(); + attr->set_name(attrName); + attr->set_type(onnx::AttributeProto_AttributeType_INTS); + for(int64_t v : values) + attr->add_ints(v); +} + +// ===================================================================== +// Convolution: Conv with zero-padding +// ===================================================================== +static string addConvNode( + onnx::GraphProto* graph, + int& nameCounter, + const string& input, + const ConvLayerDesc& desc, + const string& prefix +) { + string weightsName = addInitializer( + graph, prefix + "/w", + {desc.outChannels, desc.inChannels, desc.convYSize, desc.convXSize}, + desc.weights + ); + + int padY = desc.convYSize / 2; + int padX = desc.convXSize / 2; + string output = uniqueName(nameCounter, prefix + "/out"); + + onnx::NodeProto* convNode = 
addNode(graph, "Conv", {input, weightsName}, output); + setAttrInts(convNode, "kernel_shape", {desc.convYSize, desc.convXSize}); + setAttrInts(convNode, "pads", {padY, padX, padY, padX}); + setAttrInts(convNode, "dilations", {desc.dilationY, desc.dilationX}); + setAttrInts(convNode, "strides", {1, 1}); + + return output; +} + +// ===================================================================== +// Merged Batch Norm: output = input * mergedScale + mergedBias +// Applied channel-wise, broadcasting over [N, C, H, W] +// ===================================================================== +static string addMergedBNNode( + onnx::GraphProto* graph, + int& nameCounter, + const string& input, + const BatchNormLayerDesc& desc, + const string& prefix +) { + int C = desc.numChannels; + string scaleName = addInitializer(graph, prefix + "/scale", {C, 1, 1}, desc.mergedScale); + string biasName = addInitializer(graph, prefix + "/bias", {C, 1, 1}, desc.mergedBias); + + string scaled = uniqueName(nameCounter, prefix + "/scaled"); + addNode(graph, "Mul", {input, scaleName}, scaled); + + string output = uniqueName(nameCounter, prefix + "/bn_out"); + addNode(graph, "Add", {scaled, biasName}, output); + + return output; +} + +// ===================================================================== +// Activation: ReLU, Mish (softplus->tanh->mul), or Identity +// ===================================================================== +static string addActivationNode( + onnx::GraphProto* graph, + int& nameCounter, + const string& input, + int activationType, + const string& prefix +) { + if(activationType == ACTIVATION_RELU) { + string output = uniqueName(nameCounter, prefix + "/relu"); + addNode(graph, "Relu", {input}, output); + return output; + } else if(activationType == ACTIVATION_MISH) { + // Mish = x * tanh(softplus(x)) = x * tanh(ln(1 + exp(x))) + string sp = uniqueName(nameCounter, prefix + "/softplus"); + addNode(graph, "Softplus", {input}, sp); + + string th = uniqueName(nameCounter, prefix + "/tanh"); + addNode(graph, "Tanh", {sp}, th); + + string output = uniqueName(nameCounter, prefix + "/mish"); + addNode(graph, "Mul", {input, th}, output); + return output; + } else { + // ACTIVATION_IDENTITY -- pass through + return input; + } +} + +// ===================================================================== +// BN + Activation + Mask multiply +// output = activation(input * scale + bias) * mask +// ===================================================================== +static string addBNActivationMask( + onnx::GraphProto* graph, + int& nameCounter, + const string& input, + const BatchNormLayerDesc& bnDesc, + const ActivationLayerDesc& actDesc, + const string& mask, + const string& prefix +) { + string bn = addMergedBNNode(graph, nameCounter, input, bnDesc, prefix + "/bn"); + string act = addActivationNode(graph, nameCounter, bn, actDesc.activation, prefix + "/act"); + string output = uniqueName(nameCounter, prefix + "/masked"); + addNode(graph, "Mul", {act, mask}, output); + return output; +} + +// ===================================================================== +// MatMul: output = input @ W +// W is [inC, outC] +// ===================================================================== +static string addMatMulNode( + onnx::GraphProto* graph, + int& nameCounter, + const string& input, + const MatMulLayerDesc& desc, + const string& prefix +) { + string weightsName = addInitializer(graph, prefix + "/w", {desc.inChannels, desc.outChannels}, desc.weights); + string output = 
uniqueName(nameCounter, prefix + "/matmul"); + addNode(graph, "MatMul", {input, weightsName}, output); + return output; +} + +// ===================================================================== +// Bias addition: output = input + bias +// bias is [C], broadcast over [N, C] or [N, C, H, W] +// ===================================================================== +static string addBiasNode( + onnx::GraphProto* graph, + int& nameCounter, + const string& input, + const MatBiasLayerDesc& desc, + const string& prefix +) { + string biasName = addInitializer(graph, prefix + "/b", {desc.numChannels}, desc.weights); + string output = uniqueName(nameCounter, prefix + "/biased"); + addNode(graph, "Add", {input, biasName}, output); + return output; +} + +// ===================================================================== +// KataGPool: Global pooling producing 3 values per channel +// Pool 1: mean = ReduceSum(x * mask, [2,3]) / maskSum +// Pool 2: mean * (sqrt(maskSum) - 14.0) * 0.1 +// Pool 3: ReduceMax(x + (mask - 1.0), [2,3]) +// Output: [N, 3*C] +// ===================================================================== +static string addGlobalPool( + onnx::GraphProto* graph, + int& nameCounter, + const string& input, + const string& mask, + const string& maskSumHW, + const string& prefix +) { + // x_masked = input * mask (already masked, but let's be safe) + string xMasked = uniqueName(nameCounter, prefix + "/gpool_xm"); + addNode(graph, "Mul", {input, mask}, xMasked); + + // sum = ReduceSum(xMasked, axes=[2,3]) + string axesName = addInt64Initializer(graph, uniqueName(nameCounter, prefix + "/axes23"), {2, 3}); + string sumOut = uniqueName(nameCounter, prefix + "/gpool_sum"); + onnx::NodeProto* sumNode = addNode(graph, "ReduceSum", {xMasked, axesName}, sumOut); + setAttrInt(sumNode, "keepdims", 0); + + // mean = sum / maskSumFlat + // maskSumHW is [N,1,1,1], we need [N,1] for division + string maskSumFlat = uniqueName(nameCounter, prefix + "/gpool_msf"); + string reshapeShape = addInt64Initializer(graph, uniqueName(nameCounter, prefix + "/shape_n1"), {0, 1}); + addNode(graph, "Reshape", {maskSumHW, reshapeShape}, maskSumFlat); + + string mean = uniqueName(nameCounter, prefix + "/gpool_mean"); + addNode(graph, "Div", {sumOut, maskSumFlat}, mean); + + // sqrtMaskSum = sqrt(maskSumFlat) + string sqrtMs = uniqueName(nameCounter, prefix + "/gpool_sqrt"); + addNode(graph, "Sqrt", {maskSumFlat}, sqrtMs); + + // sqrtMs - 14.0 + string const14 = addScalarInitializer(graph, uniqueName(nameCounter, prefix + "/c14"), 14.0f); + string sqrtMsSub = uniqueName(nameCounter, prefix + "/gpool_sqrtsub"); + addNode(graph, "Sub", {sqrtMs, const14}, sqrtMsSub); + + // * 0.1 + string const01 = addScalarInitializer(graph, uniqueName(nameCounter, prefix + "/c01"), 0.1f); + string scaledSqrt = uniqueName(nameCounter, prefix + "/gpool_ssm"); + addNode(graph, "Mul", {sqrtMsSub, const01}, scaledSqrt); + + // pool2 = mean * scaledSqrt + string pool2 = uniqueName(nameCounter, prefix + "/gpool_p2"); + addNode(graph, "Mul", {mean, scaledSqrt}, pool2); + + // Pool3: max over (x + mask - 1) + string constNeg1 = addScalarInitializer(graph, uniqueName(nameCounter, prefix + "/cn1"), -1.0f); + string maskBias = uniqueName(nameCounter, prefix + "/gpool_mb"); + addNode(graph, "Add", {mask, constNeg1}, maskBias); + + string xShifted = uniqueName(nameCounter, prefix + "/gpool_xs"); + addNode(graph, "Add", {input, maskBias}, xShifted); + + // ReduceMax over [2,3] + string axesName2 = addInt64Initializer(graph, uniqueName(nameCounter, 
prefix + "/axes23b"), {2, 3}); + string pool3 = uniqueName(nameCounter, prefix + "/gpool_max"); + onnx::NodeProto* maxNode = addNode(graph, "ReduceMax", {xShifted, axesName2}, pool3); + setAttrInt(maxNode, "keepdims", 0); + + // Concat [mean, pool2, pool3] along axis=1 + string output = uniqueName(nameCounter, prefix + "/gpool_out"); + onnx::NodeProto* concatNode = addNode(graph, "Concat", {mean, pool2, pool3}, output); + setAttrInt(concatNode, "axis", 1); + + return output; +} + +// ===================================================================== +// KataValueHeadGPool: Different third pool from KataGPool +// Pool 3: mean * ((sqrt(maskSum) - 14.0)^2 * 0.01 - 0.1) +// ===================================================================== +static string addValueHeadGPool( + onnx::GraphProto* graph, + int& nameCounter, + const string& input, + const string& mask, + const string& maskSumHW, + const string& prefix +) { + // x for value head already has activation applied + // sum = ReduceSum(input * mask, [2,3]) + string xMasked = uniqueName(nameCounter, prefix + "/vgpool_xm"); + addNode(graph, "Mul", {input, mask}, xMasked); + + string axesName = addInt64Initializer(graph, uniqueName(nameCounter, prefix + "/axes23"), {2, 3}); + string sumOut = uniqueName(nameCounter, prefix + "/vgpool_sum"); + onnx::NodeProto* sumNode = addNode(graph, "ReduceSum", {xMasked, axesName}, sumOut); + setAttrInt(sumNode, "keepdims", 0); + + // mean + string maskSumFlat = uniqueName(nameCounter, prefix + "/vgpool_msf"); + string reshapeShape = addInt64Initializer(graph, uniqueName(nameCounter, prefix + "/shape_n1"), {0, 1}); + addNode(graph, "Reshape", {maskSumHW, reshapeShape}, maskSumFlat); + + string mean = uniqueName(nameCounter, prefix + "/vgpool_mean"); + addNode(graph, "Div", {sumOut, maskSumFlat}, mean); + + // sqrt(maskSum) + string sqrtMs = uniqueName(nameCounter, prefix + "/vgpool_sqrt"); + addNode(graph, "Sqrt", {maskSumFlat}, sqrtMs); + + // (sqrt(maskSum) - 14.0) + string const14 = addScalarInitializer(graph, uniqueName(nameCounter, prefix + "/c14"), 14.0f); + string sqrtMsSub = uniqueName(nameCounter, prefix + "/vgpool_ss"); + addNode(graph, "Sub", {sqrtMs, const14}, sqrtMsSub); + + // pool2 = mean * (sqrtMsSub) * 0.1 + string const01 = addScalarInitializer(graph, uniqueName(nameCounter, prefix + "/c01"), 0.1f); + string scaledSqrt = uniqueName(nameCounter, prefix + "/vgpool_ssm"); + addNode(graph, "Mul", {sqrtMsSub, const01}, scaledSqrt); + string pool2 = uniqueName(nameCounter, prefix + "/vgpool_p2"); + addNode(graph, "Mul", {mean, scaledSqrt}, pool2); + + // pool3 = mean * ((sqrtMsSub)^2 * 0.01 - 0.1) + string sqrtMsSubSq = uniqueName(nameCounter, prefix + "/vgpool_sq"); + addNode(graph, "Mul", {sqrtMsSub, sqrtMsSub}, sqrtMsSubSq); + + string constP01 = addScalarInitializer(graph, uniqueName(nameCounter, prefix + "/cp01"), 0.01f); + string sqScaled = uniqueName(nameCounter, prefix + "/vgpool_sqs"); + addNode(graph, "Mul", {sqrtMsSubSq, constP01}, sqScaled); + + string constN01 = addScalarInitializer(graph, uniqueName(nameCounter, prefix + "/cn01"), -0.1f); + string sqShifted = uniqueName(nameCounter, prefix + "/vgpool_sqsh"); + addNode(graph, "Add", {sqScaled, constN01}, sqShifted); + + string pool3 = uniqueName(nameCounter, prefix + "/vgpool_p3"); + addNode(graph, "Mul", {mean, sqShifted}, pool3); + + // Concat [mean, pool2, pool3] along axis=1 + string output = uniqueName(nameCounter, prefix + "/vgpool_out"); + onnx::NodeProto* concatNode = addNode(graph, "Concat", {mean, pool2, pool3}, 
output); + setAttrInt(concatNode, "axis", 1); + + return output; +} + +// ===================================================================== +// Residual Block: BN->Act->Conv->BN->Act->Conv + skip +// ===================================================================== +static string addResidualBlock( + onnx::GraphProto* graph, + int& nameCounter, + const string& input, + const string& mask, + const ResidualBlockDesc& desc, + const string& prefix +) { + string pre = addBNActivationMask(graph, nameCounter, input, desc.preBN, desc.preActivation, mask, prefix + "/pre"); + string mid = addConvNode(graph, nameCounter, pre, desc.regularConv, prefix + "/conv1"); + string midAct = addBNActivationMask(graph, nameCounter, mid, desc.midBN, desc.midActivation, mask, prefix + "/mid"); + string final_ = addConvNode(graph, nameCounter, midAct, desc.finalConv, prefix + "/conv2"); + + // Residual add + string output = uniqueName(nameCounter, prefix + "/resadd"); + addNode(graph, "Add", {input, final_}, output); + return output; +} + +// ===================================================================== +// Global Pooling Residual Block +// ===================================================================== +static string addGPoolResidualBlock( + onnx::GraphProto* graph, + int& nameCounter, + const string& input, + const string& mask, + const string& maskSumHW, + const GlobalPoolingResidualBlockDesc& desc, + const string& prefix +) { + string pre = addBNActivationMask(graph, nameCounter, input, desc.preBN, desc.preActivation, mask, prefix + "/pre"); + + // Regular path + string regOut = addConvNode(graph, nameCounter, pre, desc.regularConv, prefix + "/reg"); + + // Global pooling path + string gpoolConvOut = addConvNode(graph, nameCounter, pre, desc.gpoolConv, prefix + "/gconv"); + string gpoolBNAct = addBNActivationMask(graph, nameCounter, gpoolConvOut, desc.gpoolBN, desc.gpoolActivation, mask, prefix + "/gbn"); + string gpoolResult = addGlobalPool(graph, nameCounter, gpoolBNAct, mask, maskSumHW, prefix + "/gpool"); + + // gpoolToBiasMul: [N, 3*gpoolC] -> [N, regC] + string gpoolBias = addMatMulNode(graph, nameCounter, gpoolResult, desc.gpoolToBiasMul, prefix + "/g2b"); + + // Reshape bias to [N, C, 1, 1] for broadcasting + string biasShape = addInt64Initializer(graph, uniqueName(nameCounter, prefix + "/shape_nc11"), {0, -1, 1, 1}); + string gpoolBiasReshaped = uniqueName(nameCounter, prefix + "/gbr"); + addNode(graph, "Reshape", {gpoolBias, biasShape}, gpoolBiasReshaped); + + // Add bias to regular conv output + string regPlusBias = uniqueName(nameCounter, prefix + "/rpb"); + addNode(graph, "Add", {regOut, gpoolBiasReshaped}, regPlusBias); + + // Second half: BN->Act->Conv + string midAct = addBNActivationMask(graph, nameCounter, regPlusBias, desc.midBN, desc.midActivation, mask, prefix + "/mid"); + string final_ = addConvNode(graph, nameCounter, midAct, desc.finalConv, prefix + "/conv2"); + + // Residual add + string output = uniqueName(nameCounter, prefix + "/resadd"); + addNode(graph, "Add", {input, final_}, output); + return output; +} + +// ===================================================================== +// Nested Bottleneck Residual Block +// Pre: BN->Act->Mask->1x1Conv (c_main->c_mid) +// Inner: sequence of ordinary/gpool/nested_bottleneck sub-blocks at c_mid +// Post: BN->Act->Mask->1x1Conv (c_mid->c_main) + residual add +// ===================================================================== +static string addNestedBottleneckResidualBlock( + onnx::GraphProto* graph, + int& 
nameCounter, + const string& input, + const string& mask, + const string& maskSumHW, + const NestedBottleneckResidualBlockDesc& desc, + const string& prefix +) { + // Pre: BN -> Act -> Mask -> 1x1 Conv (c_main -> c_mid) + string pre = addBNActivationMask(graph, nameCounter, input, desc.preBN, desc.preActivation, mask, prefix + "/pre"); + string midOut = addConvNode(graph, nameCounter, pre, desc.preConv, prefix + "/preconv"); + + // Inner sub-blocks at c_mid channels + for(int i = 0; i < desc.numBlocks; i++) { + int kind = desc.blocks[i].first; + string sub = prefix + "/sub" + to_string(i); + if(kind == ORDINARY_BLOCK_KIND) { + midOut = addResidualBlock(graph, nameCounter, midOut, mask, + *((const ResidualBlockDesc*)desc.blocks[i].second.get()), sub); + } else if(kind == GLOBAL_POOLING_BLOCK_KIND) { + midOut = addGPoolResidualBlock(graph, nameCounter, midOut, mask, maskSumHW, + *((const GlobalPoolingResidualBlockDesc*)desc.blocks[i].second.get()), sub); + } else if(kind == NESTED_BOTTLENECK_BLOCK_KIND) { + midOut = addNestedBottleneckResidualBlock(graph, nameCounter, midOut, mask, maskSumHW, + *((const NestedBottleneckResidualBlockDesc*)desc.blocks[i].second.get()), sub); + } else { + throw StringError("ONNX backend: unknown sub-block kind " + to_string(kind)); + } + } + + // Post: BN -> Act -> Mask -> 1x1 Conv (c_mid -> c_main) + string post = addBNActivationMask(graph, nameCounter, midOut, desc.postBN, desc.postActivation, mask, prefix + "/post"); + string postOut = addConvNode(graph, nameCounter, post, desc.postConv, prefix + "/postconv"); + + // Residual add: input + postOut + string output = uniqueName(nameCounter, prefix + "/resadd"); + addNode(graph, "Add", {input, postOut}, output); + return output; +} + +// ===================================================================== +// Add ValueInfo for graph input/output +// ===================================================================== +static void addGraphInput( + onnx::GraphProto* graph, + const string& name, + const vector& shape +) { + onnx::ValueInfoProto* input = graph->add_input(); + input->set_name(name); + onnx::TypeProto* type = input->mutable_type(); + onnx::TypeProto_Tensor* tensorType = type->mutable_tensor_type(); + tensorType->set_elem_type(onnx::TensorProto_DataType_FLOAT); + onnx::TensorShapeProto* shapeProto = tensorType->mutable_shape(); + for(int64_t d : shape) { + auto* dim = shapeProto->add_dim(); + if(d < 0) + dim->set_dim_param("N"); + else + dim->set_dim_value(d); + } +} + +static void addGraphOutput( + onnx::GraphProto* graph, + const string& name, + const vector& shape +) { + onnx::ValueInfoProto* output = graph->add_output(); + output->set_name(name); + onnx::TypeProto* type = output->mutable_type(); + onnx::TypeProto_Tensor* tensorType = type->mutable_tensor_type(); + tensorType->set_elem_type(onnx::TensorProto_DataType_FLOAT); + onnx::TensorShapeProto* shapeProto = tensorType->mutable_shape(); + for(int64_t d : shape) { + auto* dim = shapeProto->add_dim(); + if(d < 0) + dim->set_dim_param("N"); + else + dim->set_dim_value(d); + } +} + +// ===================================================================== +// Main: Build the full ONNX model from ModelDesc +// ===================================================================== +string OnnxModelBuilder::buildOnnxModel(const ModelDesc& modelDesc, int nnXLen, int nnYLen) { + int nameCounter = 0; + + const int modelVersion = modelDesc.modelVersion; + const int numInputChannels = modelDesc.numInputChannels; + const int numInputGlobalChannels = 
+  const int numInputGlobalChannels = modelDesc.numInputGlobalChannels;
+  const int numPolicyChannels = modelDesc.numPolicyChannels;
+  const int numValueChannels = modelDesc.numValueChannels;
+  const int numScoreValueChannels = modelDesc.numScoreValueChannels;
+  const int numOwnershipChannels = modelDesc.numOwnershipChannels;
+
+  const TrunkDesc& trunk = modelDesc.trunk;
+  const PolicyHeadDesc& policyHead = modelDesc.policyHead;
+  const ValueHeadDesc& valueHead = modelDesc.valueHead;
+
+  onnx::ModelProto model;
+  model.set_ir_version(8);
+  model.set_producer_name("KataGo");
+  model.set_domain("ai.katago");
+
+  auto* opset = model.add_opset_import();
+  opset->set_domain("");
+  opset->set_version(18);
+
+  onnx::GraphProto* graph = model.mutable_graph();
+  graph->set_name("katago");
+
+  // ------------------------------------------------------------------
+  // Graph Inputs
+  // ------------------------------------------------------------------
+  addGraphInput(graph, "input_spatial", {-1, numInputChannels, nnYLen, nnXLen});
+  addGraphInput(graph, "input_global", {-1, numInputGlobalChannels});
+  if(modelDesc.numInputMetaChannels > 0) {
+    addGraphInput(graph, "input_meta", {-1, modelDesc.numInputMetaChannels});
+  }
+
+  // ------------------------------------------------------------------
+  // Derive mask and maskSumHW from input_spatial.
+  // Channel 0 of the spatial input is the "on board" indicator: 1.0 for
+  // positions on the board, 0.0 for off-board padding. This is Feature 0
+  // set by fillRowV3/V4/V5/V6/V7 in nninputs.cpp and holds across all
+  // supported input versions (V3-V7).
+  //
+  // mask = input_spatial[:, 0:1, :, :] -> [N, 1, H, W]
+  // maskSumHW = ReduceSum(mask, [2, 3], keepdims=true) -> [N, 1, 1, 1]
+  // ------------------------------------------------------------------
+
+  // Slice channel 0 to get mask
+  string sliceStarts = addInt64Initializer(graph, "mask_starts", {0});
+  string sliceEnds = addInt64Initializer(graph, "mask_ends", {1});
+  string sliceAxes = addInt64Initializer(graph, "mask_axes", {1});
+  string mask = uniqueName(nameCounter, "mask");
+  addNode(graph, "Slice", {"input_spatial", sliceStarts, sliceEnds, sliceAxes}, mask);
+
+  // maskSumHW
+  string sumAxes = addInt64Initializer(graph, "mask_sum_axes", {2, 3});
+  string maskSumHW = uniqueName(nameCounter, "maskSumHW");
+  onnx::NodeProto* maskSumNode = addNode(graph, "ReduceSum", {mask, sumAxes}, maskSumHW);
+  setAttrInt(maskSumNode, "keepdims", 1);
+
+  // ------------------------------------------------------------------
+  // Trunk: Initial conv + matmul bias
+  // ------------------------------------------------------------------
+  string trunkOut = addConvNode(graph, nameCounter, "input_spatial", trunk.initialConv, "trunk/init_conv");
+
+  // initialMatMul: global features -> [N, trunkNumChannels]
+  string globalBias = addMatMulNode(graph, nameCounter, "input_global", trunk.initialMatMul, "trunk/init_matmul");
+
+  // Reshape to [N, C, 1, 1] for broadcasting
+  string biasShape = addInt64Initializer(graph, "trunk_bias_shape", {0, -1, 1, 1});
+  string globalBiasReshaped = uniqueName(nameCounter, "trunk/gbr");
+  addNode(graph, "Reshape", {globalBias, biasShape}, globalBiasReshaped);
+
+  // Add global bias to conv output
+  string trunkCombined = uniqueName(nameCounter, "trunk/combined");
+  addNode(graph, "Add", {trunkOut, globalBiasReshaped}, trunkCombined);
+  trunkOut = trunkCombined;
+
+  // ------------------------------------------------------------------
+  // Trunk: Metadata encoder (SGF metadata -> trunk bias)
+  // ------------------------------------------------------------------
+  if(trunk.metaEncoderVersion > 0) {
+    const SGFMetadataEncoderDesc& enc = trunk.sgfMetadataEncoder;
+    string metaOut = addMatMulNode(graph, nameCounter, "input_meta", enc.mul1, "trunk/meta_mul1");
+    metaOut = addBiasNode(graph, nameCounter, metaOut, enc.bias1, "trunk/meta_b1");
+    metaOut = addActivationNode(graph, nameCounter, metaOut, enc.act1.activation, "trunk/meta_a1");
+    metaOut = addMatMulNode(graph, nameCounter, metaOut, enc.mul2, "trunk/meta_mul2");
+    metaOut = addBiasNode(graph, nameCounter, metaOut, enc.bias2, "trunk/meta_b2");
+    metaOut = addActivationNode(graph, nameCounter, metaOut, enc.act2.activation, "trunk/meta_a2");
+    metaOut = addMatMulNode(graph, nameCounter, metaOut, enc.mul3, "trunk/meta_mul3");
+
+    // Reshape to [N, C, 1, 1] for spatial broadcasting
+    string metaBiasShape = addInt64Initializer(graph, "trunk_meta_bias_shape", {0, -1, 1, 1});
+    string metaBiasReshaped = uniqueName(nameCounter, "trunk/mbr");
+    addNode(graph, "Reshape", {metaOut, metaBiasShape}, metaBiasReshaped);
+
+    // Add to trunk
+    string trunkWithMeta = uniqueName(nameCounter, "trunk/with_meta");
+    addNode(graph, "Add", {trunkOut, metaBiasReshaped}, trunkWithMeta);
+    trunkOut = trunkWithMeta;
+  }
+
+  // ------------------------------------------------------------------
+  // Trunk: Residual blocks
+  // ------------------------------------------------------------------
+  for(int i = 0; i < trunk.numBlocks; i++) {
+    int blockKind = trunk.blocks[i].first;
+    string blockPrefix = "trunk/block" + to_string(i);
+
+    if(blockKind == ORDINARY_BLOCK_KIND) {
+      const ResidualBlockDesc& blockDesc = *((const ResidualBlockDesc*)trunk.blocks[i].second.get());
+      trunkOut = addResidualBlock(graph, nameCounter, trunkOut, mask, blockDesc, blockPrefix);
+    } else if(blockKind == GLOBAL_POOLING_BLOCK_KIND) {
+      const GlobalPoolingResidualBlockDesc& blockDesc = *((const GlobalPoolingResidualBlockDesc*)trunk.blocks[i].second.get());
+      trunkOut = addGPoolResidualBlock(graph, nameCounter, trunkOut, mask, maskSumHW, blockDesc, blockPrefix);
+    } else if(blockKind == NESTED_BOTTLENECK_BLOCK_KIND) {
+      const NestedBottleneckResidualBlockDesc& blockDesc = *((const NestedBottleneckResidualBlockDesc*)trunk.blocks[i].second.get());
+      trunkOut = addNestedBottleneckResidualBlock(graph, nameCounter, trunkOut, mask, maskSumHW, blockDesc, blockPrefix);
+    } else {
+      throw StringError("ONNX backend: unknown block kind " + to_string(blockKind));
+    }
+  }
+
+  // Trunk tip: BN + activation + mask
+  trunkOut = addBNActivationMask(graph, nameCounter, trunkOut, trunk.trunkTipBN, trunk.trunkTipActivation, mask, "trunk/tip");
+
+  // ------------------------------------------------------------------
+  // Policy Head
+  // ------------------------------------------------------------------
+
+  // p1Conv: spatial path
+  string p1Out = addConvNode(graph, nameCounter, trunkOut, policyHead.p1Conv, "policy/p1conv");
+
+  // g1Conv: global pooling path
+  string g1Out = addConvNode(graph, nameCounter, trunkOut, policyHead.g1Conv, "policy/g1conv");
+  string g1BNAct = addBNActivationMask(graph, nameCounter, g1Out, policyHead.g1BN, policyHead.g1Activation, mask, "policy/g1bn");
+  string g1Pool = addGlobalPool(graph, nameCounter, g1BNAct, mask, maskSumHW, "policy/g1pool");
+
+  // gpoolToBiasMul: [N, 3*g1C] -> [N, p1C]
+  string policyBias = addMatMulNode(graph, nameCounter, g1Pool, policyHead.gpoolToBiasMul, "policy/g2b");
+
+  // Reshape to [N, C, 1, 1]
+  string pBiasShape = addInt64Initializer(graph, uniqueName(nameCounter, "policy/bias_shape"), {0, -1, 1, 1});
+  string policyBiasReshaped = uniqueName(nameCounter, "policy/pbr");
+  addNode(graph, "Reshape", {policyBias, pBiasShape}, policyBiasReshaped);
+
+  // Add bias to p1
+  string p1PlusBias = uniqueName(nameCounter, "policy/p1pb");
+  addNode(graph, "Add", {p1Out, policyBiasReshaped}, p1PlusBias);
+
+  // p1BN + activation + mask
+  string p1BNAct = addBNActivationMask(graph, nameCounter, p1PlusBias, policyHead.p1BN, policyHead.p1Activation, mask, "policy/p1bn");
+
+  // p2Conv: [N, p1C, H, W] -> [N, policyChannels, H, W]
+  string p2Out = addConvNode(graph, nameCounter, p1BNAct, policyHead.p2Conv, "policy/p2conv");
+
+  // Reshape to [N, policyChannels, H*W]
+  string pSpatialShape = addInt64Initializer(graph, uniqueName(nameCounter, "policy/spat_shape"), {0, numPolicyChannels, -1});
+  string policySpatial = uniqueName(nameCounter, "policy/spatial");
+  addNode(graph, "Reshape", {p2Out, pSpatialShape}, policySpatial);
+
+  // Pass move: gpoolToPassMul
+  string passOut;
+  if(modelVersion >= 15) {
+    // gpoolToPassMul -> bias -> activation -> gpoolToPassMul2
+    string passMul1 = addMatMulNode(graph, nameCounter, g1Pool, policyHead.gpoolToPassMul, "policy/pass_mul1");
+    string passBiased = addBiasNode(graph, nameCounter, passMul1, policyHead.gpoolToPassBias, "policy/pass_bias");
+    string passAct = addActivationNode(graph, nameCounter, passBiased, policyHead.passActivation.activation, "policy/pass_act");
+    passOut = addMatMulNode(graph, nameCounter, passAct, policyHead.gpoolToPassMul2, "policy/pass_mul2");
+  } else {
+    passOut = addMatMulNode(graph, nameCounter, g1Pool, policyHead.gpoolToPassMul, "policy/pass_mul");
+  }
+
+  // Reshape pass to [N, policyChannels, 1]
+  string passShape = addInt64Initializer(graph, uniqueName(nameCounter, "policy/pass_shape"), {0, numPolicyChannels, 1});
+  string passReshaped = uniqueName(nameCounter, "policy/pass_r");
+  addNode(graph, "Reshape", {passOut, passShape}, passReshaped);
+
+  // Concat spatial + pass -> out_policy [N, policyChannels, H*W+1]
+  onnx::NodeProto* policyConcatNode = addNode(graph, "Concat", {policySpatial, passReshaped}, "out_policy");
+  setAttrInt(policyConcatNode, "axis", 2);
+
+  // ------------------------------------------------------------------
+  // Value Head
+  // ------------------------------------------------------------------
+
+  // v1Conv
+  string v1Out = addConvNode(graph, nameCounter, trunkOut, valueHead.v1Conv, "value/v1conv");
+
+  // v1BN + activation + mask
+  string v1BNAct = addBNActivationMask(graph, nameCounter, v1Out, valueHead.v1BN, valueHead.v1Activation, mask, "value/v1bn");
+
+  // Value head global pooling
+  string v1Pool = addValueHeadGPool(graph, nameCounter, v1BNAct, mask, maskSumHW, "value/vpool");
+
+  // v2Mul + v2Bias + v2Activation
+  string v2Out = addMatMulNode(graph, nameCounter, v1Pool, valueHead.v2Mul, "value/v2mul");
+  string v2Biased = addBiasNode(graph, nameCounter, v2Out, valueHead.v2Bias, "value/v2bias");
+  string v2Act = addActivationNode(graph, nameCounter, v2Biased, valueHead.v2Activation.activation, "value/v2act");
+
+  // v3Mul + v3Bias -> out_value [N, 3]
+  string v3Out = addMatMulNode(graph, nameCounter, v2Act, valueHead.v3Mul, "value/v3mul");
+  string v3Biased = addBiasNode(graph, nameCounter, v3Out, valueHead.v3Bias, "value/v3bias");
+  addNode(graph, "Identity", {v3Biased}, "out_value");
+
+  // sv3Mul + sv3Bias -> out_miscvalue [N, numScoreValueChannels]
"value/sv3mul"); + string sv3Biased = addBiasNode(graph, nameCounter, sv3Out, valueHead.sv3Bias, "value/sv3bias"); + addNode(graph, "Identity", {sv3Biased}, "out_miscvalue"); + + // vOwnershipConv -> out_ownership [N, 1, H, W] + string ownOut = addConvNode(graph, nameCounter, v1BNAct, valueHead.vOwnershipConv, "value/own_conv"); + addNode(graph, "Identity", {ownOut}, "out_ownership"); + + // ------------------------------------------------------------------ + // Graph Outputs + // ------------------------------------------------------------------ + int policyResultLen = nnXLen * nnYLen + 1; + addGraphOutput(graph, "out_policy", {-1, numPolicyChannels, policyResultLen}); + addGraphOutput(graph, "out_value", {-1, numValueChannels}); + addGraphOutput(graph, "out_miscvalue", {-1, numScoreValueChannels}); + addGraphOutput(graph, "out_ownership", {-1, numOwnershipChannels, nnYLen, nnXLen}); + + // ------------------------------------------------------------------ + // Serialize to string + // ------------------------------------------------------------------ + string serialized; + if(!model.SerializeToString(&serialized)) + throw StringError("ONNX backend: failed to serialize ONNX model to protobuf"); + + return serialized; +} diff --git a/cpp/neuralnet/onnxmodelbuilder.h b/cpp/neuralnet/onnxmodelbuilder.h new file mode 100644 index 000000000..96bc8e07a --- /dev/null +++ b/cpp/neuralnet/onnxmodelbuilder.h @@ -0,0 +1,14 @@ +#ifndef NEURALNET_ONNXMODELBUILDER_H_ +#define NEURALNET_ONNXMODELBUILDER_H_ + +#include +#include "../neuralnet/desc.h" + +namespace OnnxModelBuilder { + // Builds a serialized ONNX ModelProto from a KataGo ModelDesc. + // The model is constructed for a fixed spatial size of nnXLen x nnYLen. + // Returns the protobuf-serialized bytes, ready for Ort::Session creation. + std::string buildOnnxModel(const ModelDesc& modelDesc, int nnXLen, int nnYLen); +} + +#endif // NEURALNET_ONNXMODELBUILDER_H_ diff --git a/cpp/program/gtpconfig.cpp b/cpp/program/gtpconfig.cpp index 7a45c02de..8d1c6dc7d 100644 --- a/cpp/program/gtpconfig.cpp +++ b/cpp/program/gtpconfig.cpp @@ -280,6 +280,8 @@ nnCacheSizePowerOfTwo = $$NN_CACHE_SIZE_POWER_OF_TWO # Size of mutex pool for nnCache is (2 ** this). nnMutexPoolSizePowerOfTwo = $$NN_MUTEX_POOL_SIZE_POWER_OF_TWO +$$ONNX_PROVIDER + $$MULTIPLE_GPUS # =========================================================================== @@ -466,7 +468,8 @@ string GTPConfig::makeConfig( std::vector deviceIdxs, int nnCacheSizePowerOfTwo, int nnMutexPoolSizePowerOfTwo, - int numSearchThreads + int numSearchThreads, + const string& onnxProvider ) { string config = gtpBasePart1 + gtpBasePart2; auto replace = [&](const string& key, const string& replacement) { @@ -519,12 +522,27 @@ string GTPConfig::makeConfig( replace("$$NN_CACHE_SIZE_POWER_OF_TWO", Global::intToString(nnCacheSizePowerOfTwo)); replace("$$NN_MUTEX_POOL_SIZE_POWER_OF_TWO", Global::intToString(nnMutexPoolSizePowerOfTwo)); +#ifdef USE_ONNX_BACKEND + string onnxProviderLower = Global::toLower(Global::trim(onnxProvider)); + string onnxProviderConfigValue = onnxProviderLower.empty() ? 
"cpu" : onnxProviderLower; + replace("$$ONNX_PROVIDER", "onnxProvider = " + onnxProviderConfigValue); +#else + (void)onnxProvider; + replace("$$ONNX_PROVIDER", ""); +#endif + if(deviceIdxs.size() <= 0) { replace("$$MULTIPLE_GPUS", ""); } else { string replacement = ""; replacement += "numNNServerThreadsPerModel = " + Global::uint64ToString(deviceIdxs.size()) + "\n"; +#ifdef USE_ONNX_BACKEND + bool onnxProviderSupportsThreadDeviceMap = + onnxProviderConfigValue == "cuda" || + onnxProviderConfigValue == "tensorrt" || + onnxProviderConfigValue == "migraphx"; +#endif for(int i = 0; i deviceIdxs, int nnCacheSizePowerOfTwo, int nnMutexPoolSizePowerOfTwo, - int numSearchThreads + int numSearchThreads, + const std::string& onnxProvider = "cpu" ); } diff --git a/cpp/program/setup.cpp b/cpp/program/setup.cpp index 60baac228..1a9043bc3 100644 --- a/cpp/program/setup.cpp +++ b/cpp/program/setup.cpp @@ -20,6 +20,7 @@ std::vector Setup::getBackendPrefixes() { prefixes.push_back("metal"); prefixes.push_back("opencl"); prefixes.push_back("eigen"); + prefixes.push_back("onnx"); prefixes.push_back("dummybackend"); return prefixes; } @@ -86,12 +87,29 @@ vector Setup::initializeNNEvaluators( string backendPrefix = "metal"; #elif defined(USE_OPENCL_BACKEND) string backendPrefix = "opencl"; + #elif defined(USE_ONNX_BACKEND) + string backendPrefix = "onnx"; #elif defined(USE_EIGEN_BACKEND) string backendPrefix = "eigen"; #else string backendPrefix = "dummybackend"; #endif +#if !defined(USE_ONNX_BACKEND) + // In non-ONNX builds, fail fast on any ONNX-specific config instead of silently ignoring it. + { + const vector allKeys = cfg.unusedKeys(); + for(const string& key : allKeys) { + if(Global::isPrefix(Global::toLower(key),"onnx")) { + throw StringError( + "Config key '" + key + "' requires ONNX backend, but this executable is not built with USE_BACKEND=ONNX. " + "Remove onnx* settings or rebuild with -DUSE_BACKEND=ONNX." + ); + } + } + } +#endif + //Automatically flag keys that are for other backends as used so that we don't warn about unused keys //for those options for(const string& prefix: getBackendPrefixes()) { @@ -141,7 +159,7 @@ vector Setup::initializeNNEvaluators( requireExactNNLen = cfg.getBool("requireMaxBoardSize"); } - bool inputsUseNHWC = backendPrefix == "opencl" || backendPrefix == "trt" || backendPrefix == "metal" ? false : true; + bool inputsUseNHWC = backendPrefix == "opencl" || backendPrefix == "trt" || backendPrefix == "metal" || backendPrefix == "onnx" ? false : true; if(cfg.contains(backendPrefix+"InputsUseNHWC"+idxStr)) inputsUseNHWC = cfg.getBool(backendPrefix+"InputsUseNHWC"+idxStr); else if(cfg.contains("inputsUseNHWC"+idxStr)) @@ -220,9 +238,38 @@ vector Setup::initializeNNEvaluators( string homeDataDirOverride = loadHomeDataDirOverride(cfg); - string openCLTunerFile; + string backendExtraParam; +#if defined(USE_ONNX_BACKEND) + string onnxProvider = cfg.contains("onnxProvider") ? 
cfg.getString("onnxProvider") : "cpu"; + backendExtraParam = "provider=" + onnxProvider; + if(cfg.contains("onnxInputSpatial")) + backendExtraParam += ";inputSpatial=" + cfg.getString("onnxInputSpatial"); + if(cfg.contains("onnxInputGlobal")) + backendExtraParam += ";inputGlobal=" + cfg.getString("onnxInputGlobal"); + if(cfg.contains("onnxInputMeta")) + backendExtraParam += ";inputMeta=" + cfg.getString("onnxInputMeta"); + if(cfg.contains("onnxOutputPolicy")) + backendExtraParam += ";outputPolicy=" + cfg.getString("onnxOutputPolicy"); + if(cfg.contains("onnxOutputValue")) + backendExtraParam += ";outputValue=" + cfg.getString("onnxOutputValue"); + if(cfg.contains("onnxOutputMiscvalue")) + backendExtraParam += ";outputMiscvalue=" + cfg.getString("onnxOutputMiscvalue"); + if(cfg.contains("onnxOutputOwnership")) + backendExtraParam += ";outputOwnership=" + cfg.getString("onnxOutputOwnership"); + if(cfg.contains("onnxModelVersion")) + backendExtraParam += ";modelVersion=" + cfg.getString("onnxModelVersion"); + if(cfg.contains("onnxOpenVINODeviceType")) + backendExtraParam += ";openvinoDeviceType=" + cfg.getString("onnxOpenVINODeviceType"); + if(cfg.contains("onnxOpenVINODeviceId")) + backendExtraParam += ";openvinoDeviceId=" + cfg.getString("onnxOpenVINODeviceId"); + if(cfg.contains("onnxOpenVINOEnableNPUFastCompile")) + backendExtraParam += ";openvinoEnableNPUFastCompile=" + cfg.getString("onnxOpenVINOEnableNPUFastCompile"); + if(cfg.contains("onnxOpenVINOCacheDir")) + backendExtraParam += ";openvinoCacheDir=" + cfg.getString("onnxOpenVINOCacheDir"); +#else if(cfg.contains("openclTunerFile")) - openCLTunerFile = cfg.getString("openclTunerFile"); + backendExtraParam = cfg.getString("openclTunerFile"); +#endif bool openCLReTunePerBoardSize = false; if(cfg.contains("openclReTunePerBoardSize")) openCLReTunePerBoardSize = cfg.getBool("openclReTunePerBoardSize"); @@ -315,7 +362,7 @@ vector Setup::initializeNNEvaluators( nnCacheSizePowerOfTwo, nnMutexPoolSizePowerOfTwo, debugSkipNeuralNet, - openCLTunerFile, + backendExtraParam, homeDataDirOverride, openCLReTunePerBoardSize, useFP16Mode, diff --git a/cpp/runonnxtests.sh b/cpp/runonnxtests.sh new file mode 100644 index 000000000..2aff64733 --- /dev/null +++ b/cpp/runonnxtests.sh @@ -0,0 +1,43 @@ +#!/bin/bash -eux +set -o pipefail +{ +# --------------------------------------------------------------- +# ONNX backend integration tests +# +# Exercises three levels of the inference pipeline: +# 1. runtinynntests — tiny model, full pipeline (no external model) +# 2. testgpuerror -quick — FP32 unbatched vs batched comparison +# 3. runnnevalcanarytests — sanity checks on real game positions +# --------------------------------------------------------------- + +mkdir -p tests/scratch + +# 1. Tiny NN tests — self-contained, no external model needed +echo "=== runtinynntests ===" +./katago runtinynntests tests/scratch 1.0 \ + | grep -v ': nnRandSeed0 = ' \ + | grep -v 'finishing, processed' + +# 2. GPU error test (quick) — compares unbatched vs batched inference +# For CPU ONNX provider both paths are FP32, so errors should be near zero. +# Any ownership indexing bug would surface as large ownership error. +echo "=== testgpuerror -quick ===" +./katago testgpuerror \ + -config configs/gtp_example.cfg \ + -model tests/models/g170-b6c96-s175395328-d26788732.bin.gz \ + -quick \ + -override-config "nnRandSeed=forTesting,forDeterministicTesting=true" + +# 3. 
diff --git a/cpp/runonnxtests.sh b/cpp/runonnxtests.sh
new file mode 100644
index 000000000..2aff64733
--- /dev/null
+++ b/cpp/runonnxtests.sh
@@ -0,0 +1,43 @@
+#!/bin/bash -eux
+set -o pipefail
+{
+# ---------------------------------------------------------------
+# ONNX backend integration tests
+#
+# Exercises three levels of the inference pipeline:
+#   1. runtinynntests       — tiny model, full pipeline (no external model)
+#   2. testgpuerror -quick  — FP32 unbatched vs batched comparison
+#   3. runnnevalcanarytests — sanity checks on real game positions
+# ---------------------------------------------------------------
+
+mkdir -p tests/scratch
+
+# 1. Tiny NN tests — self-contained, no external model needed
+echo "=== runtinynntests ==="
+./katago runtinynntests tests/scratch 1.0 \
+  | grep -v ': nnRandSeed0 = ' \
+  | grep -v 'finishing, processed'
+
+# 2. GPU error test (quick) — compares unbatched vs batched inference
+#    For CPU ONNX provider both paths are FP32, so errors should be near zero.
+#    Any ownership indexing bug would surface as large ownership error.
+echo "=== testgpuerror -quick ==="
+./katago testgpuerror \
+  -config configs/gtp_example.cfg \
+  -model tests/models/g170-b6c96-s175395328-d26788732.bin.gz \
+  -quick \
+  -override-config "nnRandSeed=forTesting,forDeterministicTesting=true"
+
+# 3. NN eval canary tests — sanity checks on 5 real game positions
+#    Uses symmetries 0, 3, 6 (same as runsearchtests.sh)
+echo "=== runnnevalcanarytests ==="
+./katago runnnevalcanarytests configs/gtp_example.cfg tests/models/g170e-b10c128-s1141046784-d204142634.bin.gz 0 \
+  | grep -v ': nnRandSeed0 = '
+./katago runnnevalcanarytests configs/gtp_example.cfg tests/models/g170e-b10c128-s1141046784-d204142634.bin.gz 3 \
+  | grep -v ': nnRandSeed0 = '
+./katago runnnevalcanarytests configs/gtp_example.cfg tests/models/g170e-b10c128-s1141046784-d204142634.bin.gz 6 \
+  | grep -v ': nnRandSeed0 = '
+
+echo "=== All ONNX tests passed ==="
+exit 0
+}
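Note that the script assumes it is invoked from the directory containing the compiled `katago` binary, so that `./katago`, `configs/gtp_example.cfg`, and `tests/scratch` all resolve, and it expects the two g170 test networks referenced above to already be present under `tests/models/`.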