Docker version: https://github.com/MagicBOTAlex/DockeredMLEyeTrack
Depending on the number of users, I'll consider making a UI.
If it's only me using this, then no UI is needed.
Recommendation: You should know a bit of Python to use this. If you want a slightly easier, non-Python-based option, go to Ryan's instead (it is JavaScript based 💀)
⚠️ The .exe has not been fully tested yet, and I need testers to finish it. Currently, only the Python and Docker versions are confirmed to work.
The Python and Docker versions are confirmed working because I regularly use them. (If you're lucky, you'll find me at The Great Pug once a week)
Otherwise, you can find me on Ryan's Discord: https://discord.gg/QTyU4eNKrv
This is what is included in the .zip
If you have DIY'ed eyetracking, then you definitely know how to use this software.
If not, then you just need to drag and drop your unconverted models (.h5) into the models folder.
These models are only V1 of Ryan's models. You still have to use Ryan's software to train them; my software only provides a new engine to run them.
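For the curious, this is roughly what the .h5 → ONNX conversion looks like, assuming the models are plain Keras models and using tf2onnx. It's only a sketch of the idea; the engine handles the conversion for you, and the file names below are placeholders:

```python
# Rough sketch of an .h5 -> ONNX conversion using tf2onnx.
# This is only an illustration; the engine does its own conversion,
# and the file names below are placeholders.
import tensorflow as tf
import tf2onnx

# Load the unconverted Keras model (.h5) that was dropped into the models folder
model = tf.keras.models.load_model("models/left_eye.h5")

# Convert and write the ONNX graph next to it
tf2onnx.convert.from_keras(model, output_path="models/left_eye.onnx")
```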
Change the settings in Settings.json, then run the .exe and we're gucci.
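If you'd rather script the config step, here is a minimal sketch of editing Settings.json from Python. The key names below are made up for illustration only; check your own Settings.json for the real keys:

```python
# Minimal sketch of tweaking Settings.json before launching the .exe.
# NOTE: "camera_url" and "model_path" are placeholder keys for illustration;
# use the key names that are actually in your Settings.json.
import json
from pathlib import Path

settings_path = Path("Settings.json")
settings = json.loads(settings_path.read_text())

settings["camera_url"] = "http://192.168.1.50/"   # hypothetical key
settings["model_path"] = "models/left_eye.onnx"   # hypothetical key

settings_path.write_text(json.dumps(settings, indent=4))
```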
- eyetrackapp_4rh8wafiXV.mp4
- eye-tracking_QOVvprovUQ.mp4
- WindowsTerminal_CnQtP4GHSa.mp4
Pros:
- Lower latency
- ONNX based (less GPU/CPU per inference; see the sketch after this list)
- Not JavaScript based

Cons:
- Currently licensed under Babble's restrictive license
- Uses Python
- .exe + Python + CUDA + dependencies = BIG .EXE
- No UI
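This is roughly what running one of the converted models through ONNX Runtime looks like. It's a sketch only; the model file name, input shape, and provider list are assumptions, not the engine's actual code:

```python
# Rough sketch of ONNX Runtime inference, illustrating the "ONNX based" point.
# The file name and input shape are assumptions, not taken from the engine.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "models/left_eye.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

input_name = session.get_inputs()[0].name
frame = np.zeros((1, 240, 240, 1), dtype=np.float32)  # placeholder eye frame

outputs = session.run(None, {input_name: frame})
print(outputs[0])
```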
You need conda, but then it's as easy as running build.bat on Windows. Linux is slightly different; you can refer to the Docker version.
Two scripts are unfortunately licensed under Project Babble's restrictive license because of their MJPG Streamer.
If somebody could make a replacement, then please do. If not, then this project will remain under their control/license.
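For anyone looking at building a replacement: one possible starting point (just a sketch, not part of this project) is reading the MJPEG stream directly with OpenCV. The URL is a placeholder for your camera's stream address:

```python
# Sketch of reading an MJPEG HTTP stream with OpenCV, as one possible
# starting point for replacing the MJPG Streamer dependency.
# The URL below is a placeholder for your camera's stream address.
import cv2

cap = cv2.VideoCapture("http://192.168.1.50:8080/?action=stream")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("eye", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```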
For the rest, idk; ask me on Ryan's Discord.