Harnessing Analogical Reasoning for Empathy: A Unified Framework for Enhancing Large Language Models
Wenjia Tan, Junan Wang, Yumu Xie
CISC7021 Group 50 Project Implementation Code
The oral presentation is available on YouTube.
Empathy generation in Large Language Models (LLMs) is a pivotal challenge in advancing human-centered artificial intelligence (HAI). While recent techniques such as Chain-of-Thought (CoT) reasoning have demonstrated promise, they often fail to generalize, as they do not leverage prior successes in novel contexts. Our research systematically investigates the potential of analogical reasoning to enhance empathetic generation in LLMs, focusing on three key dimensions: intrinsic analogical ability, analogy-based prompting, and the integration of external knowledge. The results show that models with stronger analogical reasoning capabilities, such as Qwen-2.5-3B-Instruct, significantly outperform their peers on empathy-driven tasks, highlighting analogy as a cornerstone of cognitive sophistication in LLMs. Additionally, we uncover a bell-shaped relationship between the number of self-generated analogical examples and empathetic performance, offering practical guidance for prompt engineering. We propose the Emotional Buffer-of-Thought (EBoT) framework, which leverages external analogical knowledge to let LLMs "stand on the shoulders of giants" in their reasoning and empathetic capabilities. Our work establishes analogy as a transformative paradigm for developing scalable, explainable, and emotionally intelligent AI systems. By bridging internal cognition and external knowledge, our findings pave the way for LLMs that are not only technically proficient but also capable of genuine human connection.
Wenjia Tan (MC451431) [email protected]
Junan Wang (MC450642) [email protected]
Yumu Xie (MC451742) [email protected]
Department of Computer and Information Science, Faculty of Science and Technology, University of Macau