The Triple Dilemmas in the Technical Governance of Generative AI and the Responses to Them

    • Abstract: With the advent of the era of digital intelligence, the worldwide surge in the use of generative AI applications has drawn attention to the associated ethical dilemmas and risks. Although fairness, accuracy, reliability, safety, and human oversight have been established as the core principles of responsible AI, the technical governance of generative AI faces a triple dilemma: the value tensions arising from its ontological status, representational biases in its training data, and the merely passive protection offered by its safety systems. These dilemmas pose practical challenges to how generative AI can put the principle of being "responsible" into practice. Amid the historic changes brought by globalized risks and intensified international competition, and confronted with the potential ethical and social risks of generative AI, human society should accelerate the building of cooperative mechanisms for global governance, practice governance wisdom grounded in cultural inclusiveness, and return to an epistemology oriented toward the root causes of social problems; these are the keys to realizing the "responsible" principles of generative AI.

       
