A neural alignment benchmark for vision models based on the THINGS ventral stream spiking dataset (TVSD).

TVSD Benchmark

This repo contains tools for loading and benchmarking models on TVSD (the THINGS Ventral Stream Spiking Dataset) from Papale et al. (2025).

Setup

Begin by cloning the repository.

git clone git@github.com:serre-lab/tvsd-benchmark.git
cd tvsd-benchmark

Option 1: Docker + Make (recommended)

Build the container image and run the unit tests:

make build
make test

Open a shell in the container:

make shell

Option 2: Local Python environment

Create a conda environment and install the requirements.

conda create -n tvsd-benchmark python
conda activate tvsd-benchmark
pip install -r requirements.txt

Alternatively, you can use a venv.

python -m venv env
source env/bin/activate
pip install -r requirements.txt

To obtain the TVSD dataset, run

chmod +x scripts/download_tvsd.sh
./scripts/download_tvsd.sh

This will download the normalized MUA and metadata .mat files into a new data directory. To obtain the THINGS dataset, analogously run the following snippet. osfclient will prompt you for a password to unzip the dataset; you can obtain this password here.

chmod +x scripts/download_things.sh
./scripts/download_things.sh

Benchmarking a Model

Ensure that you have your environment activated, and run

sbatch scripts/generate_activations.sh [MODEL_CONFIG_PATH]

When this completes, run

sbatch scripts/benchmark.sh [MODEL_CONFIG_PATH]

(We separate the two jobs, as only the former requires a GPU.) The results will populate outputs/results/[model].
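To give a sense of what the benchmarking stage does with the cached activations, here is an illustrative sketch of a linear-probe alignment score: fit a linear map from model activations to neural (MUA) responses on training images, then report the mean per-site correlation on held-out images. The array shapes, split, and metric here are assumptions for illustration, not necessarily what scripts/benchmark.sh computes; the data below is synthetic stand-in data.

```python
import numpy as np

# Synthetic stand-in data: 200 images, 64 model features, 32 recording sites.
rng = np.random.default_rng(0)
acts = rng.standard_normal((200, 64))                    # model activations
mua = acts @ rng.standard_normal((64, 32)) \
      + 0.1 * rng.standard_normal((200, 32))             # simulated MUA responses

# Fit a linear probe on a train split, predict on a held-out split.
train, test = slice(0, 150), slice(150, 200)
W, *_ = np.linalg.lstsq(acts[train], mua[train], rcond=None)
pred = acts[test] @ W

# Score: per-site Pearson r on held-out images, averaged across sites.
r = [np.corrcoef(pred[:, i], mua[test, i])[0, 1] for i in range(mua.shape[1])]
score = float(np.mean(r))
```

With the low noise level used here, the probe recovers the linear mapping and the score is close to 1; on real neural data the attainable score is far lower.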

To run unit tests locally without Docker:

make test-local

Benchmarking a Suite of Models

Fill configs/models.csv with the names of the models you want to benchmark. Then run

sbatch scripts/all_models.sh

This will generate and evaluate activations for each model.
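As a hypothetical example, configs/models.csv might look like the following; the one-name-per-line layout and these model names are assumptions (the names should match config files in configs):

```csv
resnet50
vit_base_patch16_224
clip_vit_b32
```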

Adding Your Own Model

In the current configuration, each model is specified by a corresponding config file in configs. To make a config for your model, follow the outline of the existing ones. You will also need to extend utils/load_model.py to handle your added model. In the future, direct integration with timm will be provided.
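One lightweight way to structure that extension is a registry that maps the model name in a config to a builder function. This is a sketch under assumed names: `MODEL_REGISTRY`, `register_model`, and this `load_model` signature are illustrative, not the repo's actual utils/load_model.py API.

```python
# Hypothetical registry-based dispatch for model loading; names here are
# illustrative, not the actual utils/load_model.py interface.
MODEL_REGISTRY = {}

def register_model(name):
    """Decorator that records a model-builder under its config name."""
    def wrap(builder):
        MODEL_REGISTRY[name] = builder
        return builder
    return wrap

@register_model("my_custom_model")
def build_my_custom_model():
    # Construct and return your model here (e.g. a torch.nn.Module with
    # weights loaded and eval mode set). A placeholder keeps this runnable.
    return object()

def load_model(name):
    """Look up and build the model named in a config file."""
    try:
        return MODEL_REGISTRY[name]()
    except KeyError:
        raise ValueError(f"Unknown model: {name}") from None
```

With this pattern, adding a model is one decorated builder function, and unknown names fail with a clear error instead of a KeyError deep in the pipeline.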
