Automated novelty evaluation of academic paper: A collaborative approach integrating human and large language model knowledge

Wu, W., Zhang, C., & Zhao, Y. (2025). Automated novelty evaluation of academic paper: A collaborative approach integrating human and large language model knowledge. Journal of the Association for Information Science and Technology. https://doi.org/10.1002/asi.70005

Authors
Wenqing Wu, Chengzhi Zhang, Yi Zhao
Journal
Journal of the Association for Information Science and Technology
First published
2025
Type
Journal Article
DOI
10.1002/asi.70005

Abstract

Novelty is a crucial criterion in the peer-review process for evaluating academic papers. Traditionally, it is judged by experts or measured by unique reference combinations. Both methods have limitations: experts have limited knowledge, and the effectiveness of the combination method is uncertain. Moreover, it is unclear if unique citations truly measure novelty. Large language models (LLMs) possess a wealth of knowledge, while human experts have judgment abilities that LLMs lack. Therefore, our research integrates the knowledge and abilities of LLMs and human experts to address the limitations of novelty assessment. The most common form of novelty in academic papers is the introduction of new methods. In this paper, we propose leveraging human knowledge and an LLM to assist pre-trained language models (PLMs, e.g., BERT) in predicting the method novelty of papers. Specifically, we extract sentences related to the novelty of the academic paper from peer-review reports and use an LLM to summarize the methodology section of the paper, both of which are then used to fine-tune PLMs. In addition, we design a text-guided fusion module with a novel Sparse-Attention mechanism to better integrate human and LLM knowledge. We compared our method with a large number of baselines, and extensive experiments demonstrate that it achieves superior performance.
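To make the described pipeline more concrete, the sketch below shows one plausible way to wire it up with PyTorch and Hugging Face Transformers: a PLM encodes the LLM-generated method summary and the reviewer-derived novelty sentences, a fusion module combines the two, and a linear head predicts the novelty label. This is a minimal illustration based only on the abstract; it substitutes standard multi-head cross-attention for the paper's Sparse-Attention (whose details are not given here), and the class names, model choice (bert-base-uncased), and hyperparameters are assumptions rather than the authors' implementation.

```python
# Hypothetical sketch of the pipeline described in the abstract:
# encode the LLM-generated method summary and the reviewer novelty
# sentences with a PLM, fuse them via cross-attention, and classify.
# NOTE: standard nn.MultiheadAttention stands in for the paper's
# Sparse-Attention; all names and settings are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel


class TextGuidedFusionClassifier(nn.Module):
    def __init__(self, plm_name="bert-base-uncased", num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(plm_name)
        hidden = self.encoder.config.hidden_size
        # Cross-attention: method-summary tokens attend to reviewer novelty sentences.
        self.fusion = nn.MultiheadAttention(hidden, num_heads=8, batch_first=True)
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, method_inputs, review_inputs):
        method_h = self.encoder(**method_inputs).last_hidden_state   # (B, Lm, H)
        review_h = self.encoder(**review_inputs).last_hidden_state   # (B, Lr, H)
        fused, _ = self.fusion(query=method_h, key=review_h, value=review_h)
        # Use the fused representation at the [CLS] position for classification.
        return self.classifier(fused[:, 0])


tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TextGuidedFusionClassifier()

# Placeholder inputs: an LLM summary of the methodology section and
# novelty-related sentences extracted from peer-review reports.
method_summary = ["LLM summary of the paper's methodology section ..."]
novelty_sentences = ["Reviewer sentences discussing the paper's novelty ..."]

method_inputs = tokenizer(method_summary, return_tensors="pt", truncation=True, padding=True)
review_inputs = tokenizer(novelty_sentences, return_tensors="pt", truncation=True, padding=True)

logits = model(method_inputs, review_inputs)
# Fine-tuning would minimize cross-entropy between logits and novelty labels.
```

In practice the two text sources could also be encoded by separate PLMs, and the fusion module is where the paper's Sparse-Attention design would replace the standard attention used above.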

Reviews

Informative Title: Appropriate (100%); other options: Slightly Misleading, Exaggerated
Methods: Sound (100%); other options: Questionable, Inadequate
Statistical Analysis: Appropriate (100%); other options: Some Issues, Major Concerns
Data Presentation: Complete and Transparent (100%); other options: Minor Omissions, Misrepresented
Discussion: Appropriate (100%); other options: Slightly Misleading, Exaggerated
Limitations: Appropriately Acknowledged (100%); other options: Minor Omissions, Inadequate
Data Available: Completely Available (100%); other options: Partial Data Available, Not Open Access
