This is the official PyTorch implementation of the paper MedFuncta: Modality-Agnostic Representations Based on Efficient Neural Fields by Paul Friedrich, Florentin Bieder and Philippe C. Cattin.
If you find our work useful, please consider ⭐ starring this repository and 📝 citing our paper:
```bibtex
@article{friedrich2025medfuncta,
    title={MedFuncta: Modality-Agnostic Representations Based on Efficient Neural Fields},
    author={Friedrich, Paul and Bieder, Florentin and Cattin, Philippe C},
    journal={arXiv preprint arXiv:2502.14401},
    year={2025}
}
```
Recent research in medical image analysis with deep learning has almost exclusively focused on grid- or voxel-based data representations in combination with convolutional neural networks.
We challenge this common choice by introducing MedFuncta, a modality-agnostic continuous data representation based on neural fields.
We demonstrate how to obtain these neural fields at reasonable cost in time and compute by exploiting redundancy in medical signals and by applying an efficient meta-learning approach with a context reduction scheme.
We further address the spectral bias in commonly used SIREN activations by introducing an ω0-schedule, improving reconstruction quality and convergence speed.
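To illustrate the idea of frequency-scaled sine activations, here is a minimal pure-Python sketch of a SIREN-style layer with a per-layer ω0. The weights, layer sizes, and the decaying ω0 values below are purely illustrative, not the schedule or initialization used in the paper:

```python
import math

def siren_layer(x, weights, bias, omega0):
    """One SIREN-style layer: sin(omega0 * (Wx + b)).
    Toy sketch -- weights, bias, and omega0 here are illustrative only."""
    out = []
    for w_row, b in zip(weights, bias):
        pre = sum(w * xi for w, xi in zip(w_row, x)) + b
        out.append(math.sin(omega0 * pre))
    return out

# Hypothetical omega0-schedule: a higher frequency in the first layer,
# decaying in deeper layers (the actual schedule is defined by the paper/configs).
omega0_schedule = [30.0, 15.0, 7.5]

h = [0.5, -0.25]  # e.g. a normalized 2D coordinate
for omega0 in omega0_schedule:
    in_dim = len(h)
    weights = [[0.1] * in_dim for _ in range(4)]  # toy 4-unit layer
    bias = [0.0] * 4
    h = siren_layer(h, weights, bias, omega0)
print(h)
```

Since each layer output passes through a sine, activations stay bounded in [-1, 1] regardless of ω0; the schedule only controls how fast the represented signal can oscillate per layer.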
We recommend using a conda environment to install the required dependencies. You can create and activate such an environment called medfuncta by running the following commands:

```bash
mamba env create -f environment.yaml
mamba activate medfuncta
```
To obtain meta-learned shared model parameters, run the following command with the correct config.yaml:

```bash
CUDA_VISIBLE_DEVICES=0 python train.py --config ./configs/experiments/DATATYPE/DATASET_RESOLUTION.yaml
```
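As a rough orientation, such a config.yaml bundles the dataset, resolution, and meta-learning settings for one experiment. The keys below are purely hypothetical; the authoritative keys are those in the files under ./configs/experiments/:

```yaml
# Illustrative sketch only -- consult ./configs/experiments/ for the real keys.
dataset: mit-bih        # hypothetical dataset identifier
resolution: 128         # hypothetical signal length / image resolution
model:
  hidden_dim: 256       # hypothetical network width
  num_layers: 5
meta_learning:
  inner_steps: 3        # hypothetical inner-loop adaptation steps
  outer_lr: 1.0e-5
```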
To perform reconstruction experiments (evaluate the representation quality), run the following command with the correct config.yaml:

```bash
CUDA_VISIBLE_DEVICES=0 python eval.py --config ./configs/eval/DATATYPE/DATASET_RESOLUTION.yaml
```
To convert a dataset into our MedFuncta representation, run the following command with the correct config.yaml:

```bash
CUDA_VISIBLE_DEVICES=0 python fit_NFset.py --config ./configs/fit/DATATYPE/DATASET_RESOLUTION.yaml
```
To ensure reproducibility, we trained and evaluated our network on publicly available datasets:
- MedMNIST, a large-scale MNIST-like collection of standardized biomedical images. More information is available here.
- MIT-BIH Arrhythmia, a heartbeat classification dataset. We use a preprocessed version that is available here.
- BRATS 2023: Adult Glioma, a dataset containing routine clinically acquired, multi-site, multiparametric magnetic resonance imaging (MRI) scans of brain tumor patients. We only used the T1-weighted images for training. The data is available here.
- LIDC-IDRI, a dataset containing multi-site thoracic computed tomography (CT) scans of lung cancer patients. The data is available here.
The provided code works for the following data structure (you might need to adapt the directories in data/dataset.py):
```
data
└───BRATS
    └───BraTS-GLI-00000-000
        └───BraTS-GLI-00000-000-seg.nii.gz
        └───BraTS-GLI-00000-000-t1c.nii.gz
        └───BraTS-GLI-00000-000-t1n.nii.gz
        └───BraTS-GLI-00000-000-t2f.nii.gz
        └───BraTS-GLI-00000-000-t2w.nii.gz
    └───BraTS-GLI-00001-000
    └───BraTS-GLI-00002-000
    ...
└───LIDC-IDRI
    └───LIDC-IDRI-0001
        └───preprocessed.nii.gz
    └───LIDC-IDRI-0002
    └───LIDC-IDRI-0003
    ...
└───MIT-BIH
    └───mitbih_test.csv
    └───mitbih_train.csv
    ...
```
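To show how such a layout can be traversed, here is a small self-contained sketch that builds a miniature copy of the BRATS directory structure in a temporary folder and collects the t1n volumes with pathlib. The helper name and glob pattern are ours for illustration; the repository's actual loading logic lives in data/dataset.py:

```python
import tempfile
from pathlib import Path

def collect_brats_t1n(data_root):
    """Collect T1-weighted (t1n) volume paths from the BRATS layout above.
    Illustrative helper -- data/dataset.py defines the real data loading."""
    return sorted(str(p) for p in Path(data_root, "BRATS").glob("*/*-t1n.nii.gz"))

# Build a miniature copy of the expected layout in a temp dir to demonstrate.
root = Path(tempfile.mkdtemp())
for case in ["BraTS-GLI-00000-000", "BraTS-GLI-00001-000"]:
    case_dir = root / "BRATS" / case
    case_dir.mkdir(parents=True)
    for suffix in ["seg", "t1c", "t1n", "t2f", "t2w"]:
        (case_dir / f"{case}-{suffix}.nii.gz").touch()

paths = collect_brats_t1n(root)
print(len(paths))  # one t1n file per case -> 2
```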
We provide a script for preprocessing LIDC-IDRI. Run the following command with the correct path to the downloaded DICOM files (DICOM_PATH) and the directory in which you want to store the processed NIfTI files (NIFTI_PATH):

```bash
python data/preproc_lidc-idri.py --dicom_dir DICOM_PATH --nifti_dir NIFTI_PATH
```
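For intuition, CT preprocessing pipelines like this one typically clip intensities to a Hounsfield-unit window and rescale them to a fixed range. The sketch below shows that common step in pure Python; the window bounds and function name are our illustration, not necessarily what data/preproc_lidc-idri.py does:

```python
def normalize_ct(hu_values, hu_min=-1000.0, hu_max=400.0):
    """Clip CT intensities to a Hounsfield-unit window and rescale to [0, 1].
    Common CT preprocessing sketch; the window bounds are illustrative,
    not necessarily those used in data/preproc_lidc-idri.py."""
    out = []
    for v in hu_values:
        v = max(hu_min, min(hu_max, v))               # clip to the HU window
        out.append((v - hu_min) / (hu_max - hu_min))  # rescale to [0, 1]
    return out

norm = normalize_ct([-2000.0, -1000.0, 0.0, 400.0, 3000.0])
print(norm)
```

Values below the window (air) map to 0 and values above it (bone, metal) saturate at 1, which keeps the soft-tissue range where lung nodules live well resolved.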
Our code is based on / inspired by the following repositories: