What concerns does AI's rapid development in China raise?

Awareness of the risks tied to AI development is rising in China, but the field still needs more funding.


The challenge of ensuring human control over AGI has made AI safety a mainstream topic, most notably at the AI Safety Summit held in the United Kingdom last November. As one of the world’s leaders in AI development, China’s perspectives on these issues are hugely important, yet they remain poorly understood, owing to a belief outside China that the country is uninterested in AI ethics and risks.

In fact, leading Chinese experts and AI bodies have not only been active in promoting AI safety internationally, including by signing the safety-focused Bletchley Declaration at the UK summit, but have also taken concrete steps to address AI risks domestically.

Major moves

While Chinese policymakers have introduced numerous regulations for recommendation algorithms and “deepfakes” — fake videos or recordings of people manipulated through AI — there has been a clear rise in interest in AI safety over the past year.

Room for improvement

Yet funding for safety research is a weak spot. According to Concordia AI, China has yet to make a major state investment in safety research, whether in the form of National Natural Science Foundation grants or government plans and pilots. It remains to be seen whether a new grant program for generative AI safety and evaluation, announced last December, signals a shift in this approach.

An international dialogue

As AI models become more powerful, international cooperation becomes even more important. With China and the U.S. launching a landmark intergovernmental dialogue on AI at November’s APEC Summit, there is a “great window of opportunity” for communication between leading Chinese and American AI developers and AI safety experts, says Concordia AI’s Ng.

“These dialogues could discuss and strive for agreement on more technical issues, such as watermarking standards for generative AI, or encourage mutual learning on best practices, such as third-party red-teaming and auditing of large models,” Ng says, the former referring to the simulation of real-world cybersecurity attacks to identify system vulnerabilities.

Source: Sixth Tone

