Summary: Deepfake artist Hao Li said that soon we will get to the point where there is no way to actually detect deepfakes.
Deepfake artist Hao Li, who created a Putin deepfake for an MIT conference this week, told CNBC on Friday that “perfectly real” manipulated videos are just six to 12 months away.
Li had previously said that he expected “virtually undetectable” deepfakes to be “a few years” away.
When asked for clarification on his timeline, Li told CNBC that recent developments, including the emergence of the wildly popular Chinese app Zao, had led him to “recalibrate” his timeline.
A deepfake pioneer said in an interview with CNBC on Friday that “perfectly real” digitally manipulated videos are just six to 12 months away from being accessible to everyday people.
“It's still very easy you can tell from the naked eye most of the deepfakes,” Hao Li, an associate professor of computer science at the University of Southern California, said on CNBC's Power Lunch. “But there also are examples that are really, really convincing.”
He continued: “Soon, it's going to get to the point where there is no way that we can actually detect [deepfakes] anymore, so we have to look at other types of solutions.”
Li created a deepfake of Russian President Vladimir Putin, which was showcased at an MIT tech conference this week. Li said the video was intended to show the current state of deepfake technology, which is developing more rapidly than he had expected. He told the MIT Technology Review at the time that "perfect and virtually undetectable" deepfakes were "a few years" away.
When CNBC asked for clarification on his timeline in an email after his interview this week, Li said that recent developments, including the emergence of the wildly popular Chinese app Zao, had led him to “recalibrate” his timeline.
“In some ways, we already know how to do it,” he said in an email to CNBC. “[It's] only a matter of training with more data and implementing it.”
Advancements in artificial intelligence are making deepfakes more believable, and it is now harder to distinguish real videos from doctored ones. This has raised alarms about the spread of misinformation, especially heading into the 2020 presidential election.