AI for Science Sparks Paradigm Shift, Draws Focus at Two Sessions
Hailed as humanity’s “second brain”, AI for Science is driving a profound shift in global scientific research paradigms and has emerged as a key topic of discussion among deputies and members at the national Two Sessions.
On March 7, at the second plenary meeting of the 4th Session of the 14th National Committee of the Chinese People's Political Consultative Conference (CPPCC), a CPPCC member and academician of the Chinese Academy of Sciences proposed that artificial intelligence (AI) should lead the reform of scientific research paradigms.
A day later, another CPPCC member, a former vice minister of the Ministry of Industry and Information Technology, said in an interview that AI has evolved from an auxiliary research tool into a core driver of scientific and technological progress, and will profoundly reshape both research paradigms and industrial landscapes.

Deputies and members, both inside and outside the conference venues, agreed that AI has injected new vitality into the global technological revolution, but cautioned that the difficulties of applying it to scientific research cannot be ignored.
Landmark achievements have demonstrated AI’s potential to advance science: the AlphaFold model helped two researchers from outside the life sciences win the 2024 Nobel Prize in Chemistry, while the “Large Atomic Model Project” can be applied widely to research and development across material systems such as semiconductors, alloys and organic molecules.
Globally, the early results of AI-enabled scientific research have amply demonstrated that AI can not only accelerate solutions to long-standing major scientific problems but is also poised to restructure the fundamental path of scientific discovery, according to one CPPCC member.
However, the surge in research output driven by AI for Science has brought new problems. The lack of authoritative, standardized, large-scale scientific datasets, combined with high data-acquisition costs, inconsistent standards and insufficient data sharing, has made AI model training inefficient and unreliable, with widespread duplicated effort and wasted resources.
Another CPPCC member highlighted the waste of computing power in his own field of research. He noted that of the 380,000 stable materials predicted by the Graph Networks for Materials Exploration (GNoME) model, only 736 have been experimentally verified, a verification rate of less than 0.2%.
“This means the predictions AI generates in a single day can take humans a decade or longer to fully verify, tying up scientific and computing resources,” he explained.
More worrying, he added, is that a large number of predictions never move beyond academic papers to industrialization. This not only locks away their industrial value but also creates the twin problems of wasted resources and blocked translation, a situation he likened to a “barrier lake”: AI’s powerful predictive capacity is the surging water upstream, while the downstream channels of verification and industrialization remain narrow and clogged.
He attributed the blockage in AI-enabled scientific research to three key factors: the limitations of the predictive models themselves, the lack of unified standards and evaluation systems, and severely inadequate experimental verification capacity, and said these call for systematic solutions.
To clear these bottlenecks and strengthen China’s position in frontier AI for Science R&D, deputies and members put forward targeted suggestions.
One member suggested strengthening policy guidance to build basic innovation capacity: optimizing the overall layout of scientific research in the AI field, increasing support for basic research on AI algorithms, and encouraging original research through diversified investment mechanisms that draw in enterprises and private capital.
To tackle the shortage of interdisciplinary talent proficient in both scientific research and AI, he proposed building a training pipeline for such innovators from the ground up: supporting top research universities to pilot “doctoral + master’s” dual-degree programs that allow doctoral students in AI to also pursue a master’s degree in a scientific discipline, exploring a new model of interdisciplinary postgraduate training.
Another member recommended building three core pillars: high-quality datasets, high-value knowledge centers, and a standard system for evaluating AI predictions. He advised establishing public high-value data-sharing centers in key industries to cut duplicated work, and creating an authoritative evaluation system for AI predictions to make it easier to screen results and put them into circulation.
He also called for the coordinated development of AI for Science and AI for R&D: the former focused on basic scientific breakthroughs that resolve “bottleneck” problems, the latter on the engineering challenges of bringing technology into application. He suggested a “task-based bidding” mechanism, in which enterprises pose technical demands and research institutions compete to meet them, and proposed exploring government regulatory sandboxes that grant autonomy over the R&D process and stimulate innovation.
