CGTN Interview | Liang Zheng: China's Exploration and Practice in AI Regulation



Liang Zheng

Vice Dean of the Institute for AI International Governance, Tsinghua University; Director of the Center for AI Governance; Deputy Director of the China Institute for Science and Technology Policy; Professor at the School of Public Policy and Management

Editor's note: In recent years, the field of artificial intelligence has developed rapidly, with technology iterating at a fast pace. How can we ensure that this technology develops in a healthy direction, and what has China achieved in AI regulation? Recently, CGTN interviewed Liang Zheng, professor at Tsinghua University's School of Public Policy and Management and vice dean of the university's Institute for AI International Governance, for an in-depth discussion of AI regulation and related issues. The following is a transcript of the interview video.

Editor’s note: Generative AI tools like ChatGPT, Midjourney and Sora have been fueling our daily discussions recently for their jaw-dropping capabilities. However, this initial excitement also comes with anxiety: What exactly can we expect from AI? Are fears of human labor being replaced by AI warranted? Can we find ways to coexist? In this special series of Tech Talk, we invite scholars and industry experts to explore these questions and see if we can find any common ground. The first episode will focus on effects of AI regulation on its development, as well as China’s role in global AI governance.
As we are stepping into a world shaped by artificial intelligence (AI), the global quest for responsible governance in the field has unfolded.
In a recent interview with CGTN, Liang Zheng, vice dean of the Institute for AI International Governance at Tsinghua University, said that the regulation of AI use will not slow its development but actually help the industry grow better.
“Speaking of the regulation of AI, people might think it’s about restricting its development,” said the expert. “(Actually) for the industry, if there is no such red line, the risk of its development is greater.”
Liang said that the industry has had some complaints about this issue, arguing that development is already not fast enough and that imposing norms would slow it down further.
“You must find a better way to solve the risks,” he said. “I think it’s good for the industry to have regulations or standards. They are not restricting the development but helping the industry develop better.”

01

Difficulties in AI regulation

Liang refers to the internal causal mechanism of large AI models as a “black box.”
“It may act against human beings,” said the expert, adding that this is called an “uncontrollable risk” because AI’s emergent ability cannot yet be explained scientifically.
“How can you tell an intelligent form that is different from you about what is good for you? It’s like talking to a monkey about what it should do,” he told CGTN. “You can’t communicate with it, so what should you do? You can only put it in the zoo or impose certain restrictions.”
Another difficulty Liang thinks the world is facing concerns global coordination.
“We can see that the U.S. and some Western countries are excluding other countries from participating in the development of AI in the name of so-called security and values, due to which there is a high risk of fragmentation of the international governance system,” the expert noted, adding that such views should be abandoned.
“Like during the Cold War, we could all reach an agreement on some nuclear arms control issues; why can’t we do the same with possible threats of AI?” Liang said.

02

‘You can’t solve this problem without China’

China has carried out valuable exploration in this field. Liang said that China might rank at the top in terms of the number of users and the popularity of digital tools.
“There are many problems that we may encounter first,” he said, adding that the country takes a balanced view in the development of AI.
“We think there must be some rules for some common risks. That’s why the AI Safety Summit in the UK and the special agency of AI regulation must involve China,” said the expert.
“This is the same as climate change. You can’t solve this problem without China,” he said.
China has been putting efforts into regulating the rapidly advancing AI technology.
A regulation effective since January 2023 on the deep synthesis of internet-based information services clearly specifies that content providers need to mark machine-generated pictures, videos and other content.
Additionally, the interim regulation on the management of generative AI, which took effect in August last year, states that content generated by AI shall not infringe on the portrait rights of others, and AI-generated images, videos and other content should be marked.
The country’s action aligns with the global trend of addressing the risks and ethics associated with AI.

Last June, the European Parliament approved the EU AI Act, marking the bloc’s inaugural set of regulations for the technology. Similarly, the U.S. Justice Department recently named its first official focused on AI to keep the country safe and protect civil rights.

Videographer: Hu Rui
Video editor: Guo Meiping
Cover image: Zhu Shangfan


About Us

The Institute for AI International Governance, Tsinghua University (THU I-AIIG) is a university-level research institute established by Tsinghua University in April 2020. Drawing on Tsinghua's existing strengths and interdisciplinary advantages in artificial intelligence and international governance, the institute conducts research on major theoretical questions and policy needs in the international governance of AI. It aims to enhance Tsinghua's global academic influence and policy leadership in this field and to provide intellectual support for China's active participation in the international governance of AI.


Sina Weibo: @清华大学人工智能国际治理研究院

WeChat Channels: THU-AIIG

Bilibili: 清华大学AIIG

Source | This article is reposted from CGTN.

 
