The methodology behind this implementation is described in TAGIL.pdf.
Gameplay and gaze data can be downloaded from the Atari-HEAD dataset:
Zenodo - Atari-HEAD Dataset
Before running the code, ensure you either:
- Update the Weights & Biases login key to your personal key
- Comment out the disable line to prevent logging to Weights & Biases
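As a hedged illustration of these two options (the exact login/disable lines in the repository's scripts may be written differently), the standard wandb API and environment variable look like this:

```python
# Illustration only; the actual lines in the training scripts may differ.
import os
import wandb

# Option 1: log runs under your own account with your personal API key.
wandb.login(key="YOUR_WANDB_API_KEY")

# Option 2: disable Weights & Biases logging for this process entirely.
os.environ["WANDB_MODE"] = "disabled"
```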
If you want to train on multiple gameplay sequences simultaneously, run concat.py and enter the parent directory that contains all the pairs of image and .txt files. This concatenates the gameplays into a single .tar.bz2 file.
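For intuition, here is a minimal sketch of that kind of concatenation (not the actual concat.py); it assumes each gameplay folder under the parent directory contains a .txt label file plus extracted .png frames, which is an assumption about the layout:

```python
# Hypothetical sketch of concatenating several gameplays into one archive.
# Assumes <parent_dir>/<gameplay>/*.txt and <parent_dir>/<gameplay>/*.png;
# the real concat.py may organise its inputs and outputs differently.
import tarfile
from pathlib import Path

def concat_gameplays(parent_dir: str, out_stem: str = "combined") -> None:
    parent = Path(parent_dir)
    label_lines = []

    with tarfile.open(parent / f"{out_stem}.tar.bz2", "w:bz2") as tar:
        for game_dir in sorted(p for p in parent.iterdir() if p.is_dir()):
            # Collect the label lines of this gameplay.
            for txt in sorted(game_dir.glob("*.txt")):
                label_lines.extend(txt.read_text().splitlines())
            # Add every frame image, prefixed with the gameplay name
            # so frame IDs from different runs do not collide.
            for img in sorted(game_dir.glob("*.png")):
                tar.add(img, arcname=f"{game_dir.name}/{img.name}")

    (parent / f"{out_stem}.txt").write_text("\n".join(label_lines) + "\n")

if __name__ == "__main__":
    concat_gameplays(input("Parent directory containing the gameplay folders: "))
```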
Both the Gaze and T-Gaze models require 4 input files:
- Original dataset files:
  - .txt file
  - .tar.bz2 file
- Preprocessed files:
  - Optical flow file
  - Saliency file
To generate the preprocessed files:
- Edit the file paths in optical_flow.py and run it
- Edit the file paths in saliency.py and run it

Note: This preprocessing must be performed separately for the train, validation, and test files.
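For intuition only, the sketch below shows one way dense optical flow and static saliency maps can be computed per frame with OpenCV (the contrib saliency module is required). The paths, output names, and storage format are assumptions; the repository's optical_flow.py and saliency.py remain the authoritative implementations.

```python
# Rough sketch of the preprocessing idea, not the repository's exact scripts.
# Assumes frames have already been extracted from the .tar.bz2 archive into
# frame_dir; requires opencv-contrib-python for the cv2.saliency module.
import glob
import cv2
import numpy as np

def preprocess(frame_dir: str, out_prefix: str) -> None:
    paths = sorted(glob.glob(f"{frame_dir}/*.png"))
    saliency_model = cv2.saliency.StaticSaliencySpectralResidual_create()

    flows, saliencies = [], []
    prev_gray = None
    for path in paths:
        frame = cv2.imread(path)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # Dense optical flow between consecutive frames (zeros for the first one).
        if prev_gray is None:
            flow = np.zeros((*gray.shape, 2), dtype=np.float32)
        else:
            flow = cv2.calcOpticalFlowFarneback(
                prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        flows.append(flow)
        prev_gray = gray

        # Static (spectral residual) saliency map for the current frame.
        ok, sal = saliency_model.computeSaliency(frame)
        saliencies.append(sal.astype(np.float32) if ok
                          else np.zeros_like(gray, dtype=np.float32))

    # Output names/format are assumptions for this sketch.
    np.savez_compressed(f"{out_prefix}_optical_flow.npz", flow=np.stack(flows))
    np.savez_compressed(f"{out_prefix}_saliency.npz", saliency=np.stack(saliencies))
```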
- Configure file paths in either:
  - Gaze_baseline.py
  - t_gaze.py
- Run the chosen script to train the model
- Use inference.py with your trained model to generate .npz files
- Visualize predictions using show_pred.py
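The exact arrays written by inference.py are defined in that script; a generated .npz file can be inspected with NumPy as sketched below (the file name is a placeholder, and no particular key names are assumed):

```python
# Inspect a predictions file produced by inference.py (file name is a placeholder).
import numpy as np

data = np.load("predictions.npz")
print(data.files)               # names of the arrays stored in the file
preds = data[data.files[0]]     # e.g. one predicted gaze map per frame
print(preds.shape, preds.dtype)
```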
Use gaze_replay.py to watch the game replay with a gaze prediction overlay:
- Set the boolean variables at the top of the file to choose the visualization options:
  - Baseline predictions
  - T-gaze predictions
  - Or both simultaneously
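gaze_replay.py handles the replay itself; the snippet below is only a sketch of the underlying overlay idea, blending a predicted gaze heatmap onto a single game frame with OpenCV. The file names and the "heatmap" key are placeholders, not the repository's conventions.

```python
# Sketch of overlaying a predicted gaze heatmap on one frame with OpenCV.
# File names and the array key are placeholders.
import cv2
import numpy as np

frame = cv2.imread("frame_0001.png")                   # one game frame
heatmap = np.load("predictions.npz")["heatmap"][0]     # matching prediction

heatmap = cv2.resize(heatmap, (frame.shape[1], frame.shape[0]))
heatmap = cv2.normalize(heatmap, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
colored = cv2.applyColorMap(heatmap, cv2.COLORMAP_JET)

overlay = cv2.addWeighted(frame, 0.6, colored, 0.4, 0)  # blend frame and heatmap
cv2.imwrite("overlay_0001.png", overlay)
```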
To train the action prediction models:
- Prepare the following files:
  - .txt file
  - .tar.bz2 file
  - .npz file (generated from gaze prediction)
- Edit the relevant file paths in your chosen script
- Run either:
  - agil.py
  - tagil.py

The script will train and produce an action prediction model.
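agil.py and tagil.py define the real data pipeline; the sketch below only illustrates how the three inputs could be paired into (frame, gaze map, action) samples. It assumes the .txt file follows the Atari-HEAD column order and that the heatmaps in the .npz file are stored in frame order, both of which should be verified against the dataset and scripts.

```python
# Rough sketch of pairing frames, predicted gaze maps and actions.
# Column order of the .txt file and the ordering of the .npz arrays are
# assumptions, not guarantees.
import tarfile
import cv2
import numpy as np

def load_triples(txt_path: str, tar_path: str, npz_path: str):
    npz = np.load(npz_path)
    heatmaps = npz[npz.files[0]]   # assumed: one gaze map per frame, in frame order

    # Assumed label layout (Atari-HEAD style):
    # frame_id,episode,score,duration,unclipped_reward,action,gaze_positions...
    actions = {}
    with open(txt_path) as f:
        for line in f:
            fields = line.strip().split(",")
            if len(fields) > 5 and fields[0] != "frame_id":
                actions[fields[0]] = fields[5]   # action label, kept as a string

    triples = []
    with tarfile.open(tar_path, "r:bz2") as tar:
        frame_names = sorted(n for n in tar.getnames() if n.endswith(".png"))
        for i, name in enumerate(frame_names):
            frame_id = name.rsplit("/", 1)[-1].rsplit(".", 1)[0]
            if frame_id in actions and i < len(heatmaps):
                raw = tar.extractfile(name).read()
                img = cv2.imdecode(np.frombuffer(raw, np.uint8), cv2.IMREAD_COLOR)
                triples.append((img, heatmaps[i], actions[frame_id]))
    return triples
```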