When @shoaibkamil gave a talk at the FPBench Community Meetings, he and I briefly discussed merging the benchmarks here into FPBench. Naturally, some are already in FPBench, but there are quite a few extra ones here. However, I have a few questions / blockers on this:
- I don't see a license anywhere on the benchmarks. Is it possible to license this repository under something permissive? FPBench uses the MIT license, so that's ideal. If you can't license all of the code, perhaps you could license just the `methods.hpp` file?
- Is there anything you can say about where the benchmarks originally came from? Shoaib mentioned that some were extracted from some internal tools. Is that correct? Does that refer to the `exprN` expressions? What level of detail could you give about these tools? (Is it at least safe to say they are doing "graphics"?) Are the `extra_functionN` benchmarks randomly generated by the generator in this repo? We could add this to the metadata.
- Is anything published about these benchmarks, or is the website the only reference? If there is a publication, we typically include a citation in the metadata.
- Any thoughts about which benchmarks are worth including in FPBench? My guess is: include the `exprN` benchmarks, don't include the ones that are just one function, and maybe include the `extra_functionN` benchmarks. I'm willing to include randomly generated benchmarks if there's some sense that they were filtered & selected, or that they are important for reproducibility, or something like that.