
Symposium on Science and Technology Ethics Focusing on "Research Integrity in AI Era" Held in Beijing

Oct 23, 2025

Source: CASAD

Artificial Intelligence (AI) empowers scientific research but also introduces new ethical concerns, such as "AI-driven content spinning," "data decoration," and "ghostwriting." Addressing these challenges is crucial to healthy scientific development.

On September 26, the Symposium on Science and Technology Ethics of the Academic Divisions of the Chinese Academy of Sciences (CASAD) was held in Beijing, with the theme "Research Integrity in the Age of Artificial Intelligence". The symposium aimed to provide suggestions for addressing research integrity challenges in the AI era and build consensus for improving China's research integrity system and promoting the responsible application of AI.

Since the emergence of large language models (LLMs) like ChatGPT, AI has been widely applied in fields such as knowledge acquisition, data analysis, and academic writing, significantly enhancing the efficiency of learning and research. However, the increasing integration of AI has also brought about fundamental changes in the roles of individuals involved in academic activities.

The "algorithm-assisted" research model has led to a series of new problems, including unclear definitions of contributions, lack of transparency in research processes, and reduced interpretability of results. Even more concerning, AI systems can harbor algorithmic biases and even generate false or misleading content, posing serious threats to the reliability of research findings.

In his speech, CAS member Prof. HU Haiyan, Director of the Scientific Ethical Construction Committee of CASAD, pointed out that the development of AI detection technology currently lags behind that of AI generation technology. Traditional plagiarism detection methods are becoming less effective, making research misconduct more hidden and complex. Numerous retractions resulting from AI-generated content and fabricated citations have raised widespread concern within the international academic community.

"The use of AI has exacerbated traditional research misconduct in at least two aspects," analyzed CAS member Prof. MEI Hong, who is also a professor at Peking University. "First, generative AI is fast, which dramatically lowers the cost of fraud. Second, the content produced by current large language models is structurally sound, well-formatted, and grammatically correct, making detection and judgment much more difficult."

The application of AI has also brought about many new problems. "For example, the involvement of AI blurs the boundaries of responsibility. Should the errors or 'hallucinations' generated by large models be considered 'honest mistakes' or attributed to users? Can large models be listed as authors or co-authors?" Prof. MEI gave examples, adding that "new tactics like manipulating AI peer review have surfaced, such as embedding instructions for 'positive review only' in papers to deceive AI review systems."

What is particularly alarming is the potential for AI-generated content to contaminate the human knowledge base. Recent cases have shown self-styled media outlets publishing fabricated "scientific and technological news" and invented research results attributed to top universities. These AI-generated articles, written in fluent language but full of factual errors, have been widely disseminated, misleading the public and eroding the credibility of knowledge.

In response to this, CAS member Prof. TAN Tieniu, who is also the Secretary of the Party Committee of Nanjing University, emphasized that if false or inaccurate content generated by AI enters the academic communication chain without effective detection, its long-term consequences cannot be underestimated.

Faced with these challenges, Prof. MEI proposed a systematic and comprehensive approach to upholding research integrity while embracing technological innovation. He gave the following suggestions:

Continuously deepen theoretical discussions and case studies to clarify the mechanism of AI's impact on research integrity;

Comprehensively implement science and technology ethics education and AI skills training to guide researchers in the responsible use of AI tools;

Actively build global consensus and establish a science and technology ethics governance system adapted to the AI era.

"For scientific research, AI is both an 'Aladdin's Lamp' that empowers innovation and a 'Pandora's Box' that may unlock risks," Prof. TAN argued. "Only by adhering to a comprehensive strategy that emphasizes both management and technology, aligns institutional norms with educational culture, and integrates domestic management with international cooperation, can we effectively address research integrity problems in the age of AI."

The Symposium on Science and Technology Ethics is an important annual academic event hosted by the Scientific Ethical Construction Committee of CASAD. Since 2011, it has been successfully held 14 times, with more than 70 participants attending this year's event.

(This article is translated and edited based on public information.)