Singapore's Minister for Communications and Information Josephine Teo: Policies Already in Place to Address AI Risks



On 8 May 2024, Singapore's Minister for Communications and Information Josephine Teo gave a written reply in Parliament to a question from Nominated Member of Parliament Assoc Prof Razwana Begum Abdul Rahim on risk assessments by providers of artificial intelligence (AI) technologies.


The following was translated and compiled by 新加坡眼 from the official English parliamentary records:


Assoc Prof Razwana Begum Abdul Rahim (Nominated Member of Parliament) asked the Minister for Communications and Information:

(a) whether Singapore's National Artificial Intelligence (AI) Strategy 2.0 includes a requirement for providers of AI technologies to complete a risk assessment before making the technology publicly available;

(b) if so, who undertakes the assessment; and

(c) what risks are assessed.


Mrs Josephine Teo (Minister for Communications and Information): Singapore's National AI Strategy 2.0 identifies a trusted ecosystem as a key enabler for AI to flourish.


In fact, Singapore was a first mover, launching our Model AI Governance Framework back in 2019; it recommends best practices for addressing governance issues in AI deployment. We continue to update the framework to address emerging risks, including by launching a Framework for Generative AI this year.


Meanwhile, the Government also provides practical support for organisations seeking to manage risks in developing and deploying AI, including by launching open-source testing toolkits such as AI Verify, which helps them validate their AI systems' performance against internationally recognised governance principles such as robustness and explainability.


These frameworks provide a useful baseline for the Government to partner industry in managing and assessing AI risks across the ecosystem. In the finance sector, financial institutions are guided by sector-specific AI governance guidelines, such as the Monetary Authority of Singapore's (MAS) Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT), which align closely with the AI governance frameworks mentioned earlier. Many companies have supplemented these with additional internal guidelines to oversee AI development; examples can be found in the Personal Data Protection Commission's (PDPC) Compendium of Use Cases for the Model AI Governance Framework. For example, DBS has implemented its own Responsible Data Use framework for its AI models to comply with legal, security and quality standards, and it uses risk assessment tools such as the probability-severity matrix.
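The probability-severity matrix mentioned above is a standard risk-assessment tool: each risk is rated on how likely it is and how severe its impact would be, and the two ratings combine into an overall risk level. A minimal sketch in Python follows; the category names, scoring rule, and cut-off thresholds are illustrative assumptions, not DBS's actual framework.

```python
# Illustrative probability-severity risk matrix.
# All categories and thresholds are hypothetical examples.

PROBABILITY = ["rare", "unlikely", "possible", "likely"]   # ascending likelihood
SEVERITY = ["negligible", "minor", "major", "critical"]    # ascending impact

def risk_score(probability: str, severity: str) -> int:
    """Combine the two ratings multiplicatively: (prob rank + 1) * (sev rank + 1)."""
    return (PROBABILITY.index(probability) + 1) * (SEVERITY.index(severity) + 1)

def risk_level(score: int) -> str:
    """Bucket a numeric score into a qualitative level (illustrative cut-offs)."""
    if score <= 3:
        return "low"
    if score <= 8:
        return "medium"
    return "high"
```

A risk rated "likely" and "critical" scores 4 × 4 = 16 and lands in the "high" bucket, signalling that mitigation is needed before deployment.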


Besides strengthening our governance approach domestically, we also work with international partners to build a trusted environment for AI worldwide. For instance, we have conducted a joint mapping exercise between AI Verify and the US AI Risk Management Framework to harmonise approaches and streamline the compliance burden for organisations deploying AI across jurisdictions. We will continue to seek out such opportunities and adapt our approach as the technology develops.




The original English text of the question and reply follows:

Assoc Prof Razwana Begum Abdul Rahim asked the Minister for Communications and Information (a) whether Singapore's National Artificial Intelligence (AI) Strategy 2.0 includes a requirement for providers of AI technologies to complete a risk assessment prior to making the technology publicly available; (b) if so, who undertakes the assessment; and (c) what risks are assessed.

Mrs Josephine Teo: The Singapore National AI Strategy 2.0 identifies the presence of a trusted ecosystem as a key enabler for robust AI development.

In fact, Singapore was a first-mover in launching our AI Model Governance Framework back in 2019, which recommends best practices to address governance issues in AI deployment. We continue to update it to address emerging risks, including by launching a Framework for Generative AI this year. Meanwhile, the Government also provides practical support for organisations seeking to manage risks in the development and deployment of AI, including through launching open-source testing toolkits, such as AI Verify to help them validate their AI systems' performance on internationally-recognised governance principles like robustness and explainability. 

These frameworks provide a useful baseline for the Government to partner industry on managing and assessing AI risks across the ecosystem. In the finance sector, financial institutions are guided by sector-specific AI governance guidelines, such as the Monetary Authority of Singapore's (MAS') Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT), which aligns closely to the earlier-mentioned AI governance frameworks. Many companies have supplemented these with additional internal guidelines to oversee AI development, examples of which can be found in the Personal Data Protection Commission's Compendium of Use Cases for the Model AI Governance Framework. For example, DBS has implemented its own Responsible Data Use framework for its AI models to comply with legal, security and quality standards and utilises risk assessment tools, such as the probability-severity matrix. 

Besides enhancing our governance approach domestically, we collaborate with international partners to build a trusted environment for AI worldwide. For instance, we have conducted a joint mapping exercise between AI Verify and the US' AI Risk Management Framework, to harmonise approaches and streamline the compliance burdens on organisations deploying AI across different jurisdictions. We will continue to seek out such opportunities and adapt our methods in tandem with the technology development.


Editor: HQ

Reviewer: HQ

Source: Parliament of Singapore