Summary: Global politicians and AI bosses will next week attend a summit to debate the disruptive technology.
The summit is being held at the Grand Palais building in the centre of Paris
There's one more theme running through the AI Summit that will be worth keeping an eye on.
The first summit had the word "safety" in its title. Some felt the event pushed that narrative too hard and terrified people with dark talk of existential threats.
But it hasn't fallen off the agenda entirely.
As a subject, AI safety is a rather broad church. It can relate to any number of risks: the generation and spread of misinformation, displays of bias and discrimination against individuals or groups, the ongoing development by multiple countries of AI-controlled weapons, and the potential for AI to create unstoppable computer viruses.
Prof Geoffrey Hinton, often described as one of the "Godfathers of AI", sees these as short-term risks. They might be up for discussion in Paris, but he argued on BBC Radio 4's Today programme last week that they are unlikely to garner strong international collaboration in the long term.
The big scenario which he believes will really pull everyone together is the prospect of AI becoming more intelligent than humans – and wanting to seize control.
"Nobody wants AI to take over from people," he says. "The Chinese would much rather the Chinese Communist Party ran the show than AI."
Prof Hinton compared this eventuality to the height of the Cold War, when the US and Russia just about succeeded in collaborating in order to prevent global nuclear war.
"There's no hope of stopping [AI development]," he said. "What we've got to do is to try to develop it safely."
Prof Max Tegmark, founder of the Future of Life Institute, also shares a stark warning. "Either we develop amazing Artificial General Intelligence [AGI] that helps humans, or uncontrollable AI that replaces humans," he says.
"We are unfortunately closer to building AGI than to figuring out how to control it."
Prof Tegmark hopes the summit will push for "binding safety standards like we have in every other critical industry".