Voxelwise Encoding Model tutorials

Welcome to the Voxelwise Encoding Model tutorials, brought to you by the Gallant Lab.

Paper

If you use these tutorials for your work, consider citing the corresponding paper:

Dupré la Tour, T., Visconti di Oleggio Castello, M., & Gallant, J. L. (2024). The Voxelwise Encoding Model framework: a tutorial introduction to fitting encoding models to fMRI data. https://doi.org/10.31234/osf.io/t975e

You can find a copy of the paper here.

Tutorials

This repository contains tutorials describing how to use the Voxelwise Encoding Model (VEM) framework. VEM is a framework for analyzing functional magnetic resonance imaging (fMRI) data, in which encoding models are fit separately for each voxel.

To explore these tutorials, you can:

  • Read the rendered examples in the tutorials website (recommended).
  • Run the merged notebook in Colab.
  • Run the Jupyter notebooks (tutorials/notebooks directory) locally.

The tutorials are best explored in order, starting with the "shortclips" tutorial. The "vim2" tutorial is optional, as it is largely redundant with the "shortclips" tutorial.

Dockerfiles

This repository contains Dockerfiles to run the tutorials locally. Please see the instructions in the docker directory.

Helper Python package

To support the tutorials, this repository includes a small Python package called voxelwise_tutorials, with helper functions to download the data sets, load the files, process the data, and visualize the results.
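
After installing the package (see below), the helpers can be used directly in Python. The following is a minimal sketch based on how the tutorials load data; the file path is only an example and must point to a data file you have downloaded:

# Minimal usage sketch (example file path; adapt it to your local copy of the data).
from voxelwise_tutorials.io import load_hdf5_array

# Load the array stored under a given key in an HDF5 file, as done in the tutorials.
X_train = load_hdf5_array("features/motion_energy.hdf", key="X_train")
print(X_train.shape)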

Installation

To install the voxelwise_tutorials package, run:

pip install voxelwise_tutorials
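
To check that the installation worked, you can try importing the package from the command line (a quick sanity check):

python -c "import voxelwise_tutorials"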

To also download the tutorial scripts and notebooks, clone the repository via:

git clone https://github.com/gallantlab/voxelwise_tutorials.git
cd voxelwise_tutorials
pip install .

Developers can also install the package in editable mode via:

pip install --editable .

Requirements

The tutorials are not compatible with Windows. If you are using Windows, we recommend running the tutorials on Google Colab or in the provided Docker containers.

git-annex is required to download the data sets. Please follow the instructions in the git-annex documentation to install it on your system.
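
To check that git-annex is available on your system, you can run:

git annex version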

The tutorials and the package voxelwise_tutorials require Python 3.9 or higher.

The package voxelwise_tutorials has the following Python dependencies: numpy, scipy, h5py, scikit-learn, matplotlib, networkx, nltk, pycortex, himalaya, pymoten, datalad.
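
Note that some import names differ from the PyPI package names: pycortex is imported as cortex, pymoten as moten, and scikit-learn as sklearn. An optional sanity check of the dependencies could look like this:

# Optional sanity check: import the dependencies used by the tutorials.
import numpy, scipy, h5py, sklearn, matplotlib, networkx, nltk
import himalaya, datalad
import cortex  # pycortex
import moten   # pymoten
print("All dependencies imported successfully.")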

Cite as

If you use one of our packages in your work (voxelwise_tutorials [1], himalaya [2], pycortex [3], or pymoten [4]), please cite the corresponding publications:

[1] Dupré la Tour, T., Visconti di Oleggio Castello, M., & Gallant, J. L. (2024). The Voxelwise Encoding Model framework: a tutorial introduction to fitting encoding models to fMRI data. https://doi.org/10.31234/osf.io/t975e
[2] Dupré la Tour, T., Eickenberg, M., Nunez-Elizalde, A. O., & Gallant, J. L. (2022). Feature-space selection with banded ridge regression. NeuroImage. https://doi.org/10.1016/j.neuroimage.2022.119728
[3] Gao, J. S., Huth, A. G., Lescroart, M. D., & Gallant, J. L. (2015). Pycortex: an interactive surface visualizer for fMRI. Frontiers in Neuroinformatics, 9, 23. https://doi.org/10.3389/fninf.2015.00023
[4] Nunez-Elizalde, A. O., Deniz, F., Dupré la Tour, T., Visconti di Oleggio Castello, M., & Gallant, J. L. (2021). pymoten: scientific python package for computing motion energy features from video. Zenodo. https://doi.org/10.5281/zenodo.6349625