Official Implementation for the intelligibility protocol, PXP.
We have very minimal dependencies, which you can install with:

```shell
pip install -r requirements.txt
```

You might want to create a virtual environment (or use conda) to avoid conflicts with your system packages. We use Python 3.9.18 for all experiments.
You will also have to create a results folder:

```shell
mkdir results
```
Finally, place your API keys in a `.env` file in the root directory; a template is provided in `.env.template`.
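As a hedged sketch, a filled-in `.env` might look like the following. The key names here are assumptions based on the models used (litellm-style provider keys); check `.env.template` for the actual names expected by the code.

```
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...
```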
Our data is provided as a zip file, `data.zip`; run

```shell
unzip data.zip
```

to extract the data.
You can then use `src/preprocess.py` to generate the data in the correct format for the experiments. This will also summarize the data, using the `summarize` function from `src/utils.py`.
To reproduce our RAD results, you can run the following command:

```shell
python src/interact.py --num_iter=5 --machine="claude-3-5-sonnet-20240620" --human_type static
```
To reproduce our DRUG results, you can run the following command:

```shell
python src/interact.py --num_iter=5 --machine="claude-3-5-sonnet-20240620" --task=DRUG --human_type=static --eval_at_start
```
This will output the counts of one-way and two-way intelligible sessions, create a `tags.txt` file of the actual tags exchanged between the two agents, and save D (`data.pkl`), M (`messages.pkl`), and C (`context.pkl`) (from Procedure 1 in the paper) to the `results/` folder.
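As a quick sketch, the saved artefacts can be inspected with the standard `pickle` module. The file names come from above, but the structure of their contents is not specified here, so treat this as a hedged example rather than the repository's own API:

```python
import pickle
from pathlib import Path

def load_artifacts(results_dir="results"):
    """Load whichever of the D, M, C pickles exist under results_dir."""
    artifacts = {}
    for name in ("data.pkl", "messages.pkl", "context.pkl"):
        path = Path(results_dir) / name
        if path.exists():
            with path.open("rb") as f:
                artifacts[name] = pickle.load(f)
    return artifacts
```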
To reproduce the trend in Figure 3 from the paper, we ran the above command 5 times and manually extracted the number of strong and ultra-strong intelligible sessions generated per agent (up to an interaction limit).
Reproducing the DRUG-Human results requires an expert, so the outcome may be stochastic, but an experiment can be launched using:

```shell
python src/interact.py --num_iter=5 --machine="claude-3-sonnet-20240229" --task=DRUG --no_learn
```
Please run `python src/interact.py --help` to see all the parameters that can be customized. We support several LLMs, and the implementation should ideally run with any LLM supported by litellm.

If the experiment crashes midway (due to API limits, wrong input, etc.), you can resume it by passing the `--resume` flag. The `-h` output describes the group of arguments that need to be passed when this flag is set (paths to the ongoing context, messages, metrics, etc. files, all of which are saved automatically by the code).
In general, the code allows for interaction between an LLM (interfaced by the `XMachine` class) and either static or real-time human feedback.
To use the approach with custom data:
- you can use some form of static human feedback stored in the data as a CSV, or
- as with the human experiments, you can use the command line and a real human expert for feedback.
- DRUG can also be run in static mode by passing `--human_type=static`.
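As a hedged illustration of the static-feedback option, such data could be stored as a CSV of predictions and explanations. The column names below are hypothetical, not the schema the code expects; check the provided `data/` files for the real format:

```python
import csv
import io

# Hypothetical static-feedback rows: each pairs a model prediction with
# an expert explanation. Column names are illustrative only.
rows = [
    {"id": "1", "prediction": "malignant", "explanation": "irregular margins"},
    {"id": "2", "prediction": "benign", "explanation": "smooth, well-defined mass"},
]

# Write the rows out as CSV text...
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "prediction", "explanation"])
writer.writeheader()
writer.writerows(rows)
csv_text = buf.getvalue()

# ...and read them back the way a static human agent might.
parsed = list(csv.DictReader(io.StringIO(csv_text)))
```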
Here, we precisely describe how to use the code for a different task, say MATS (Materials Science).
- Decide the type of feedback you have access to: static (a CSV with some predictions and explanations) or real-time (a human expert).
- If it is static, you need to add the data to the `data/` folder.
- Now, depending on the type of feedback, implement a `MATSAgent` class in `src/agents.py`, which should inherit from `Agent`.
- Following this, implement `MATSMachine` and `MATSHuman` classes in the same file; to see the differences between real-time and static feedback, you can look at `DRUGHuman` and `DRUGHumanStatic`.
- With this, you need to change `create_agent` in `src/agent.py` to also be compatible with the new task.
- Finally, implement the `MATS` class in `src/tasks.py`, which should inherit from `Task` and borrow code from `RAD` and `DRUG` as appropriate.
- Now, you can run the code using the following command (after adding this task to the choices for the `--task` argument):

```shell
python src/interact.py --num_iter=5 --machine="claude-3-sonnet-20240229" --task=MATS
```
- We also provide a `--debug` flag, which clips the train data to 5 examples and the val and test data to 2 examples each; this is useful for debugging.
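The steps above can be sketched as the following class skeletons. This is a hedged, self-contained sketch: the real `Agent` base class lives in `src/agents.py`, so the stub below only stands in for it, and the method names are illustrative rather than the repository's actual interface:

```python
# Minimal stand-in for the Agent base class from src/agents.py,
# included only so this sketch runs on its own.
class Agent:
    def __init__(self, name):
        self.name = name

    def respond(self, message):
        raise NotImplementedError


class MATSAgent(Agent):
    """Shared behaviour for the hypothetical MATS task."""


class MATSMachine(MATSAgent):
    def respond(self, message):
        # In the real code this would call the LLM (e.g. via litellm).
        return f"[machine:{self.name}] reply to: {message}"


class MATSHuman(MATSAgent):
    def respond(self, message):
        # Real-time feedback would read from the command line here;
        # a static variant would look rows up in a CSV instead.
        return f"[human:{self.name}] feedback on: {message}"
```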
This is an example interaction (from the RAD task) generated by using the PXP protocol and our implementation (as explained in the paper, this is a special case of the protocol in which the human agent can never revise its internal model).
Please raise an issue if you have any questions or need help with the code!
```bibtex
@misc{srinivasan2024implementationapplicationintelligibilityprotocol,
      title={Implementation and Application of an Intelligibility Protocol for Interaction with an LLM},
      author={Ashwin Srinivasan and Karan Bania and Shreyas V and Harshvardhan Mestha and Sidong Liu},
      year={2024},
      eprint={2410.20600},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2410.20600},
}
```