FinTruthQA: A Benchmark Dataset for Evaluating the Quality of Financial Information Disclosure

This repository includes the dataset and benchmark of the paper:

FinTruthQA: A Benchmark Dataset for Evaluating the Quality of Financial Information Disclosure

Abstract

Accurate and transparent financial information disclosure is crucial in the fields of accounting and finance, ensuring market efficiency and investor confidence. Among many information disclosure platforms, the Chinese stock exchanges' investor interactive platform provides a novel and interactive way for listed firms to disclose information of interest to investors through an online question-and-answer (Q&A) format. However, it is common for listed firms to respond to questions with limited or no substantive information, and automatically evaluating the quality of financial information disclosure on large amounts of Q&A pairs is challenging. This paper builds a benchmark, FinTruthQA, that can evaluate advanced natural language processing (NLP) techniques for the automatic quality assessment of information disclosure in financial Q&A data. FinTruthQA comprises 6,000 real-world financial Q&A entries, each manually annotated based on four conceptual dimensions of accounting: question identification, question relevance, answer readability, and answer relevance. We benchmarked various NLP techniques on FinTruthQA, including statistical machine learning models, pre-trained language models and their fine-tuned versions, as well as the large language model (LLM) GPT-4. Experiments showed that existing NLP models have strong predictive ability for the question identification and question relevance tasks, but are suboptimal for the answer readability and answer relevance tasks. By establishing this benchmark, we provide a robust foundation for the automatic evaluation of information disclosure, significantly enhancing the transparency and quality of financial reporting. FinTruthQA can be used by auditors, regulators, and financial analysts for real-time monitoring and data-driven decision-making, as well as by researchers for advanced studies in accounting and finance, ultimately fostering greater trust and efficiency in the financial markets.

Dataset

We collected Q&A entries from the interactive platforms for communication between investors and listed companies established by the Shanghai Stock Exchange (SSE) and the Shenzhen Stock Exchange (SZSE). The data from the SSE cover the timeframe from January 4, 2016, to December 31, 2021, while the data from the SZSE span from September 1, 2021, to May 31, 2022. Each Q&A entry was annotated based on four key information disclosure quality evaluation criteria: question identification, question relevance, answer relevance, and answer readability.

Data Statistics

The figure below shows the distribution of question and answer lengths in FinTruthQA (in characters).

[Figure: question and answer length distributions]

Data Access

The data are available in the dataset/ folder, saved in CSV format.
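
The CSVs can be loaded with pandas. The snippet below is a minimal sketch that builds a tiny in-memory sample with a hypothetical schema (the column names here are assumptions; check the actual files in dataset/ for the real ones):

```python
import io
import pandas as pd

# Hypothetical schema -- the real column names in dataset/*.csv may differ.
sample_csv = io.StringIO(
    "question,answer,question_identification,question_relevance,"
    "answer_relevance,answer_readability\n"
    "What is the dividend plan?,We will announce it in the annual report.,1,1,0,1\n"
)

# Loading the real data would be: pd.read_csv("dataset/<file>.csv")
df = pd.read_csv(sample_csv)
print(df.shape)
print(list(df.columns))
```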

Annotation Guidelines

FinTruthQA focuses on four key information disclosure quality evaluation criteria: question identification, question relevance, answer relevance, and answer readability. These criteria are widely recognized as crucial indicators of information quality and are important for investors to consider when evaluating Q&A information.
More details can be found in our guidelines (see annotation guidelines/). Both Chinese and English versions are released.

Model Benchmarking

To reproduce the results of the ML-based models, use the scikit-learn library.
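
As an illustration, a classical baseline for a task such as question identification can be set up as a TF-IDF + logistic regression pipeline. This is a generic sketch on toy stand-in data, not the paper's exact configuration; character n-grams are used here so no Chinese word segmenter is required:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in data; the real task uses the annotated FinTruthQA CSVs.
texts = ["公司分红计划是什么？", "感谢管理层的辛勤工作", "请问年报何时披露？", "祝公司越来越好"]
labels = [1, 0, 1, 0]  # 1 = genuine question, 0 = not a question

# Character n-grams sidestep Chinese word segmentation.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)
preds = clf.predict(texts)
```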

All experiments with PLM-based models were conducted using PyTorch and Hugging Face's Transformers framework.
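
The general fine-tuning setup looks like the sketch below. To keep it self-contained it instantiates a tiny randomly initialized BERT from a config; in practice a pretrained Chinese checkpoint would be loaded with `from_pretrained`, and the hyperparameters here are illustrative, not the paper's:

```python
import torch
from transformers import BertConfig, BertForSequenceClassification

# Tiny random config for illustration; real runs would load a pretrained
# Chinese BERT checkpoint via BertForSequenceClassification.from_pretrained(...).
config = BertConfig(
    vocab_size=100, hidden_size=32, num_hidden_layers=2,
    num_attention_heads=2, intermediate_size=64, num_labels=2,
)
model = BertForSequenceClassification(config)

input_ids = torch.randint(0, 100, (4, 16))  # batch of 4 toy token sequences
labels = torch.tensor([0, 1, 0, 1])
out = model(input_ids=input_ids, labels=labels)

# One fine-tuning step: backpropagate the classification loss.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
out.loss.backward()
optimizer.step()
```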

The links to the PLM models are here:

The continued pre-training was performed based on the UER framework. Below is the link to the library:

Leaderboard

We report results as mean ± std over experiments with 3 different random seeds: 12, 42, and 123. The mean is calculated by averaging the performance across seeds. Std denotes the standard error, calculated by dividing the standard deviation across seeds by the square root of the number of seeds.
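
The mean ± std computation above can be sketched as follows (the three scores are illustrative placeholders, not actual leaderboard values):

```python
import statistics

# Scores from three seeds (12, 42, 123) -- illustrative numbers only.
scores = [0.912, 0.905, 0.919]

mean = statistics.mean(scores)
# Standard error = sample standard deviation / sqrt(number of seeds).
std_err = statistics.stdev(scores) / len(scores) ** 0.5

print(f"{mean:.3f} ± {std_err:.3f}")
```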

Task 1 & 2: Question identification and Question relevance

[Results table: question identification and question relevance]

Task 3: Answer readability

[Results table: answer readability]

Task 4: Answer relevance

[Results table: answer relevance]
