2025-09-09
Establishing a Multi-Party Co-Governance Pattern to Promote the Safe and Orderly Development of AI
Source: Science and Technology Daily

  At present, with the global sci-tech revolution and industrial transformation accelerating, artificial intelligence (AI) is evolving from a “supporting tool” into a core engine driving social transformation. Recently, the State Council issued the Opinions on Deepening the Implementation of the “AI Plus” Initiative (hereinafter referred to as the Opinions), promoting “two-way empowerment” between technology and applications through six key actions and eight foundational supports. The Opinions aims to integrate AI deeply into scientific research, industrial development, the improvement of people’s well-being, and other fields, while actively planning for AI applications in security governance, charting China’s approach to modern governance in the age of intelligence.

  Strategic Leap: From “Empowering Variables” to “Development Drivers”

  The Opinions aims to position AI as a “key growth driver” for high-quality development, highlighting its core value across four dimensions: empowerment, burden reduction, quality enhancement, and efficiency improvement.

  AI-Empowered New Frontiers: Expanding the “Cognitive Boundaries” of Scientific Research—AI is emerging as an “accelerator” of fundamental research. For instance, AlphaFold resolved the protein-folding challenge that had confounded the scientific community for half a century. The Opinions focuses on domestic priorities, supporting paradigm shifts in scientific research, speeding up the commercialization of innovation achievements, and advancing “AI Plus” to empower industrial development. New business forms, new models, and new competitive advantages are reaching into countless sectors and bringing tangible benefits to households across the country.

  New Approaches to Reducing Work Burdens: Unlocking the “Efficiency Dividend”—Through automation, AI is freeing people from hazardous and labor-intensive tasks while creating new, higher-quality forms of intelligent employment and new ways of working. The Opinions focuses on improving people’s well-being and stimulating consumption, vividly illustrating the principle of “people-centered development”: the ultimate goal of technological progress is to improve people’s quality of life and bring greater convenience to daily living.

  New Paths to Quality Improvement: Strengthening “Growth Drivers” in Key Sectors—In intelligent manufacturing, automakers are leveraging AI-driven maintenance systems to cut equipment failure rates by 20%. In education, AI learning systems customize learning paths based on students’ individual circumstances. In healthcare, AI is increasingly serving as doctors’ “intelligent assistant.” The Opinions offers targeted support not only for industries but also for areas essential to people’s well-being, such as education, healthcare, and consumption, underscoring the inclusiveness and practicality of technological applications.

  New Ways to Enhance Effectiveness: Driving a “Precision Upgrade” in Governance Capacity—In social governance, AI is being used to handle complex tasks, reducing burdens on citizens and addressing their concerns. In security governance, AI has significantly enhanced smart policing, dynamic prevention and control, and the efficiency of emergency response. In ecological governance, real-time monitoring and scenario simulations support the harmonious coexistence of humans and nature. The Opinions provides a new AI-powered engine for advancing the modernization of the national governance system and governance capacity.

  In summary, the Opinions closely tracks global trends in science and technology: it sets the direction through strategic planning, drives integration through scientific design, clarifies development paths through targeted measures, and seizes opportunities through timely initiatives, thereby laying a solid foundation for the high-quality development and application of AI.

  Systematic Consideration: From “Technology Benefits” to “Security and Controllability”

  The Opinions lists “upholding security and controllability” as one of the four fundamental principles, highlighting the need to actively prevent security risks arising from AI.

  Inherent Risks of Models: “Uncontrollability” Arising from Technical Limitations—Current large models display pronounced “black box” characteristics: their lack of explainability makes decision-making logic hard to interpret, their limited robustness exposes them to adversarial attacks, and hallucinations can produce content that is false yet appears convincing. Without appropriate constraints, these limitations could turn AI from a valuable tool into a potential source of risk.

  Emerging Ethical Challenges: “Cognitive Crises” Arising from Value Conflicts—The values of an AI system are essentially a reflection of its training data, and any biases present in that data can be amplified by the model. More subtly, AI may unintentionally propagate content that conflicts with mainstream values, acting as an “amplifier” of negative social sentiments and potentially undermining social harmony and stability.

  Derivative Application Risks: “Emerging Threats” in the Digital Space—Deepfake technologies can generate highly realistic audio and video content that is difficult to distinguish from reality; AI’s heavy reliance on vast amounts of data can lead to privacy breaches; and AI-driven cyberattacks are becoming increasingly frequent. These risks pose direct threats to both personal security and social stability.

  It is important to emphasize that these security risks are not the end point of AI development, but stage-specific challenges along the way. With the rapid advancement of technologies such as explainability, safety alignment, and risk assessment, these risks will be addressed systematically.

  Governance Innovation: From “Single-Point Control” to “Multi-Party Coordination”

  To balance development with security, the Opinions explicitly requires “building a new system for security governance and establishing a new pattern of multi-party co-governance.” By enhancing institutional frameworks, leveraging technological capabilities, and promoting ecosystem-wide collaboration, it lays a solid security foundation for the healthy development of AI.

  Build a “four-in-one” collaborative governance system. First, enhance the legal and regulatory framework, explore policy mechanisms that balance innovation with risk prevention and control, and adopt inclusive and prudent management standards to promote healthy development. Second, establish a multi-stakeholder public safety system covering natural persons, robots, digital humans, and other entities, ensure safe and reliable human-machine interactions, and foster a dynamic and agile governance pattern. Third, develop an integrated cyberspace governance system for “humans, machines, and things,” refine regulatory rules, and leverage intelligent platforms to strengthen content review and risk management capabilities. Fourth, create an intelligent human-machine collaborative emergency response system, and use AI-assisted decision-making to improve disaster prevention and rescue efficiency.

  Strengthen security governance capabilities across four key areas.

  In terms of technological security, build in native security capabilities to lay a solid foundation: develop explainable AI technologies to ensure transparency in decision-making; enhance models’ robustness and resistance to attacks through adversarial training and multi-scenario testing; and address hallucinations by developing content authenticity detection algorithms to minimize the risk of false information.

  In terms of ethical security, reinforce alignment technologies to establish safeguards. For intrinsic value alignment, embed China’s outstanding traditional culture and core socialist values into models through reinforcement learning, guiding generated content to reflect Chinese heritage. For external safeguards, optimize content review and behavioral constraint technologies to ensure outputs adhere to mainstream moral standards.

  In terms of application security, provide guidance for key sectors and reinforce protective measures: establish security management measures covering data confidentiality, cybersecurity, supply chain security, and other key links; guide the security industry in accelerating digital and intelligent transformation, cultivating a reliable and trustworthy workforce for intelligent security services; and enhance public risk awareness through security awareness training, “AI Security Week,” and other science popularization initiatives.

  In terms of national security, coordinate efforts at both the domestic and international levels to address critical challenges. Domestically, accelerate the establishment of a comprehensive security assessment system to support technological autonomy and controllability. Internationally, participate in formulating global governance rules and technical standards, promoting the thorough and practical implementation of the principle of “AI for good” and helping to build a community with a shared future for mankind.
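
  To make the adversarial training mentioned above concrete, the sketch below shows a minimal FGSM-style training step. It is purely illustrative and assumes PyTorch; the model, data, and hyperparameters are hypothetical placeholders rather than anything prescribed by the Opinions.

```python
# Minimal sketch of FGSM-style adversarial training (hypothetical model and data).
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, loss_fn, epsilon=0.03):
    """Craft an adversarial example with one gradient-sign step on the input."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    # Nudge the input in the direction that increases the loss, then clip to [0, 1].
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, x, y, optimizer, loss_fn, epsilon=0.03):
    """One update on a 50/50 mix of clean and adversarially perturbed inputs."""
    model.train()
    x_adv = fgsm_perturb(model, x, y, loss_fn, epsilon)
    optimizer.zero_grad()
    loss = 0.5 * loss_fn(model(x), y) + 0.5 * loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Toy classifier trained on random data, purely to show the mechanics.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    x = torch.rand(32, 1, 28, 28)       # fake "images"
    y = torch.randint(0, 10, (32,))     # fake labels
    print(adversarial_training_step(model, x, y, optimizer, loss_fn))
```

  In practice, a robustness step like this would be combined with the multi-scenario testing that the measures above also call for.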

  The Opinions coordinates both “empowerment” and “security,” representing China’s strategic initiative to seize the opportunities of the sci-tech revolution and drive high-quality development. We are confident that, through both technological and governance innovation, risks can be effectively managed and mitigated, helping to build an intelligent, inclusive, and secure future.