聽力課堂TED音頻欄目主要包括TED演講的音頻MP3及中英雙語文稿，供各位英語愛好者學習使用。本文主要內容為演講MP3+雙語文稿：換臉黑科技來了，我們應如何面對這樣的挑戰？希望你會喜歡！
【演講人及介紹】Danielle Citron
丹妮爾·希特倫，法律教授、律師和民權倡導者。
【演講主題】“后真相時(shí)代”,我們應(yīng)如何面對(duì)Deepfake 換臉黑科技帶來(lái)的挑戰(zhàn)?
【演講文稿-中英文】
翻譯者Nan Yang 校對(duì) Yolanda Zhang
00:13
Rana Ayyub is a journalist in India whose work has exposed government corruption and human rights violations. And over the years, she's gotten used to vitriol and controversy around her work. But none of it could have prepared her for what she faced in April 2018.
Rana Ayyub是一位印度記者，她的工作揭露了政府腐敗和侵犯人權的行為。這些年來，她已經習慣了圍繞其工作的惡毒攻擊和爭議。但這些都不足以讓她準備好面對2018年4月所發生的事情。
00:38
She was sitting in a café with a friend when she first saw it: a two-minute, 20-second video of her engaged in a sex act. And she couldn't believe her eyes. She had never made a sex video. But unfortunately, thousands upon thousands of people would believe it was her.
當(dāng)時(shí)她和一個(gè)朋友坐在 咖啡廳,第一次看見(jiàn)了 她自己出現(xiàn)在一個(gè) 2分20秒的性愛(ài)視頻里。 她不能相信自己的眼睛。 她從來(lái)沒(méi)有拍攝過(guò)這樣的視頻。 但是很不幸的是,成千上萬(wàn)的人 選擇相信就是她。
00:58
I interviewed Ms. Ayyub about three months ago, in connection with my book on sexual privacy. I'm a law professor, lawyer and civil rights advocate. So it's incredibly frustrating knowing that right now, law could do very little to help her. And as we talked, she explained that she should have seen the fake sex video coming. She said, "After all, sex is so often used to demean and to shame women, especially minority women, and especially minority women who dare to challenge powerful men," as she had in her work. The fake sex video went viral in 48 hours. All of her online accounts were flooded with screenshots of the video, with graphic rape and death threats and with slurs about her Muslim faith. Online posts suggested that she was "available" for sex. And she was doxed, which means that her home address and her cell phone number were spread across the internet. The video was shared more than 40,000 times.
我在大約三個(gè)月前 采訪了Ayyub女士, 為了我的關(guān)于性隱私的書(shū)籍。 我是一個(gè)法律教授, 律師和民權(quán)倡導(dǎo)者。 所以我非常沮喪,因?yàn)槲抑?現(xiàn)在的法律幾乎幫不到她。 當(dāng)我們?cè)谡勗挼臅r(shí)候, 她解釋道她應(yīng)該更早意識(shí)到 虛假性愛(ài)視頻的到來(lái)。 她說(shuō):‘’畢竟,性愛(ài)經(jīng)常被用來(lái)貶低和羞辱女性, 特別是少數(shù)民族的婦女, 尤其是敢于挑戰(zhàn)權(quán)勢(shì) 的少數(shù)民族婦女。” 就像她在工作所做的那樣。 那個(gè)偽造的性愛(ài)視頻 在48小時(shí)之內(nèi)像病毒一樣傳播。 她所有的線上賬戶(hù) 都被這些視頻截屏淹沒(méi), 同時(shí)還有圖片形式的強(qiáng)奸和死亡威脅和對(duì)她穆斯林信仰的誹謗。 網(wǎng)上的帖子說(shuō)她可以隨意跟其他人進(jìn)行性行為。 并且她已經(jīng)被“人肉”, 也就是說(shuō)她的家庭住址和手機(jī)號(hào) 已經(jīng)在互聯(lián)網(wǎng)上隨處可見(jiàn)。 那個(gè)視頻已經(jīng)被分享了超過(guò)4萬(wàn)次。
02:09
Now, when someone is targeted with this kind of cybermob attack, the harm is profound. Rana Ayyub's life was turned upside down. For weeks, she could hardly eat or speak. She stopped writing and closed all of her social media accounts, which is, you know, a tough thing to do when you're a journalist. And she was afraid to go outside her family's home. What if the posters made good on their threats? The UN Council on Human Rights confirmed that she wasn't being crazy. It issued a public statement saying that they were worried about her safety.
如今,這樣的網(wǎng)絡(luò)暴力, 其傷害是非常深遠(yuǎn)的。 Rana Ayyub的生活已經(jīng)徹底改變。 幾周時(shí)間里, 她幾乎不吃飯也不說(shuō)話。 她不再寫(xiě)文章, 關(guān)閉了所有社交媒體賬戶(hù), 這是作為一個(gè)記者很難應(yīng)付的事情。 她不敢走出家門(mén)。 如果那些發(fā)帖的人 真的進(jìn)行了威脅呢? 聯(lián)合國(guó)人權(quán)理事會(huì)確認(rèn)她沒(méi)有精神問(wèn)題。 他們發(fā)布了一個(gè)聲明, 表示擔(dān)心她的人身安全。
02:48
What Rana Ayyub faced was a deepfake: machine-learning technology that manipulates or fabricates audio and video recordings to show people doing and saying things that they never did or said. Deepfakes appear authentic and realistic, but they're not; they're total falsehoods. Although the technology is still developing in its sophistication, it is widely available.
Rana Ayyub面對的是deepfake（"深度學習"和"偽造"的合成詞）：一種機器學習技術，它能夠操縱或偽造音頻和視頻，讓人們看起來做了或說了他們從未做過、說過的事情。Deepfakes看上去真實可信，但實際上並不是，它們完全是偽造的。儘管這項技術的精密程度仍在不斷提高，但它已經可以被廣泛獲取。
03:17
Now, the most recent attention to deepfakes arose, as so many things do online, with pornography.
大家最近對(duì)deepfakes最關(guān)注的, 就像很多網(wǎng)上的事情一樣, 是色情媒介。
03:24
In early 2018, someone posted a tool on Reddit to allow users to insert faces into porn videos. And what followed was a cascade of fake porn videos featuring people's favorite female celebrities. And today, you can go on YouTube and pull up countless tutorials with step-by-step instructions on how to make a deepfake on your desktop application. And soon we may be even able to make them on our cell phones. Now, it's the interaction of some of our most basic human frailties and network tools that can turn deepfakes into weapons. So let me explain.
2018年初，有人在Reddit上發布了一個工具，可以讓用戶把人臉插入色情視頻中。隨之而來的是一大批偽造的色情視頻，主角都是人們最喜愛的女明星。如今，你可以在YouTube上找到不計其數的教程，一步一步教你如何用桌面應用程序製作deepfake。很快，我們也許就能在手機上製作它們了。正是我們一些最基本的人性弱點與網絡工具的相互作用，使deepfakes可以變成武器。讓我來解釋一下。
04:06
As human beings, we have a visceral reaction to audio and video. We believe they're true, on the notion that of course you can believe what your eyes and ears are telling you. And it's that mechanism that might undermine our shared sense of reality. Although we believe deepfakes to be true, they're not. And we're attracted to the salacious, the provocative. We tend to believe and to share information that's negative and novel. And researchers have found that online hoaxes spread 10 times faster than accurate stories. Now, we're also drawn to information that aligns with our viewpoints. Psychologists call that tendency "confirmation bias." And social media platforms supercharge that tendency, by allowing us to instantly and widely share information that accords with our viewpoints.
作為人類(lèi),我們對(duì)音頻 和視頻有直觀的反應(yīng)。 我們相信反應(yīng)是真實(shí)的, 理論上當(dāng)然你應(yīng)該相信 你的眼睛和耳朵告訴你的事情。 并且正是那種機(jī)制 削弱了我們共有的對(duì)現(xiàn)實(shí)的感覺(jué)。 盡管我們相信deepfakes 是真的,但它不是。 我們被那些淫穢和挑釁吸引了。 我們傾向于去相信和共享 消極和新奇的信息。 研究者們發(fā)現(xiàn)網(wǎng)上的騙局比真實(shí)故事 的傳播速度快10倍。 我們也容易被迎合 我們觀點(diǎn)的信息吸引。 心理學(xué)家稱(chēng)這種傾向?yàn)?“驗(yàn)證性偏見(jiàn)”。 同時(shí)社交媒體平臺(tái) 極大鼓勵(lì)了這種傾向, 允許我們即刻并廣泛地分享 符合我們自我觀點(diǎn)的信息。
05:08
Now, deepfakes have the potential to cause grave individual and societal harm. So, imagine a deepfake that shows American soldiers in Afghanistan burning a Koran. You can imagine that that deepfake would provoke violence against those soldiers. And what if the very next day there's another deepfake that drops, that shows a well-known imam based in London praising the attack on those soldiers? We might see violence and civil unrest, not only in Afghanistan and the United Kingdom, but across the globe.
Deepfakes有可能對個人和社會造成嚴重危害。想象一個deepfake視頻，顯示美國士兵在阿富汗焚燒可蘭經。你可以想象，這個視頻會激起針對這些士兵的暴力行為。再想象第二天又出現另一個deepfake視頻，顯示一位倫敦知名的伊瑪目（伊斯蘭教領袖的稱號）在讚美針對這些士兵的襲擊，那會怎樣？我們可能會看到暴力和內亂，不僅發生在阿富汗和英國，而是遍及全球。
05:48
And you might say to me, "Come on, Danielle, that's far-fetched." But it's not. We've seen falsehoods spread on WhatsApp and other online message services lead to violence against ethnic minorities. And that was just text -- imagine if it were video.
你可能會(huì)對(duì)我說(shuō), “拜托, Danielle,你說(shuō)的太牽強(qiáng)了。 ” 但并不是。 我們已經(jīng)看見(jiàn)了許多虛假信息 在WhatsApp和其它 在線聊天服務(wù)里傳播, 導(dǎo)致了對(duì)少數(shù)民族的暴力。 而那還僅僅是文字—— 如果是視頻呢?
06:06
Now, deepfakes have the potential to corrode the trust that we have in democratic institutions. So, imagine the night before an election. There's a deepfake showing one of the major party candidates gravely sick. The deepfake could tip the election and shake our sense that elections are legitimate. Imagine if the night before an initial public offering of a major global bank, there was a deepfake showing the bank's CEO drunkenly spouting conspiracy theories. The deepfake could tank the IPO, and worse, shake our sense that financial markets are stable.
Deepfakes還有可能侵蝕我們對民主制度的信任。想象一下選舉前夜，一個deepfake視頻顯示某個主要政黨的候選人病重。這個視頻可能左右選舉結果，並動搖我們對選舉合法性的信心。再想象一下，在一家大型跨國銀行首次公開募股的前夜，一個deepfake視頻顯示該銀行的CEO醉醺醺地大談陰謀論。這個視頻可能讓IPO徹底失敗，更糟的是，動搖我們對金融市場穩定的信心。
06:51
So deepfakes can exploit and magnify the deep distrust that we already have in politicians, business leaders and other influential leaders. They find an audience primed to believe them. And the pursuit of truth is on the line as well. Technologists expect that with advances in AI, soon it may be difficult if not impossible to tell the difference between a real video and a fake one.
所以，deepfakes可以利用並放大我們對政客、商業領袖和其他有影響力的人士本已存在的深深的不信任。它們會找到早已準備好相信它們的受眾。對真相的追尋也岌岌可危。技術人員預計，隨著人工智能的進步，很快，分辨真視頻和假視頻即便並非不可能，也會變得非常困難。
07:23
So how can the truth emerge in a deepfake-ridden marketplace of ideas? Will we just proceed along the path of least resistance and believe what we want to believe, truth be damned? And not only might we believe the fakery, we might start disbelieving the truth. We've already seen people invoke the phenomenon of deepfakes to cast doubt on real evidence of their wrongdoing. We've heard politicians say of audio of their disturbing comments, "Come on, that's fake news. You can't believe what your eyes and ears are telling you." And it's that risk that professor Robert Chesney and I call the "liar's dividend": the risk that liars will invoke deepfakes to escape accountability for their wrongdoing.
那么真相如何才能在一個(gè)deepfake 驅(qū)使的思想市場(chǎng)中產(chǎn)生? 我們僅僅沿著阻力最小的路前進(jìn) 并相信我們想去相信的, 而真相已經(jīng)被詛咒了嗎? 而且不僅是我們相信虛假, 可能是我們也開(kāi)始不相信真相。 我們已經(jīng)見(jiàn)過(guò)人們 利用deepfakes的現(xiàn)象 來(lái)質(zhì)疑能證明他們 錯(cuò)誤行為的真實(shí)證據(jù)。 我們也聽(tīng)過(guò)政客回應(yīng) 他們令人不安的評(píng)論, “拜托,那是假新聞。 你不能相信你的眼睛 和耳朵告訴你的事情?!?這是一種風(fēng)險(xiǎn), Robert Chesney教授和我 把它叫做“撒謊者的福利”: 撒謊者利用deepfakes來(lái) 逃避他們?yōu)殄e(cuò)誤行為負(fù)責(zé)的風(fēng)險(xiǎn)。
08:18
So we've got our work cut out for us, there's no doubt about it. And we're going to need a proactive solution from tech companies, from lawmakers, law enforcers and the media. And we're going to need a healthy dose of societal resilience. So now, we're right now engaged in a very public conversation about the responsibility of tech companies. And my advice to social media platforms has been to change their terms of service and community guidelines to ban deepfakes that cause harm. That determination, that's going to require human judgment, and it's expensive. But we need human beings to look at the content and context of a deepfake to figure out if it is a harmful impersonation or instead, if it's valuable satire, art or education.
毫無疑問，我們面前的工作非常艱巨。我們需要來自科技公司、立法者、執法者和媒體的積極主動的解決方案，也需要社會具備足夠的韌性。眼下，我們正在就科技公司的責任展開一場非常公開的討論。我對社交媒體平臺的建議一直是：修改它們的服務條款和社區準則，禁止會造成危害的deepfakes。做出這種判定需要人的判斷，而且代價高昂。但我們需要有人去查看deepfake的內容和背景，來弄清楚它究竟是有害的冒充，還是有價值的諷刺、藝術或教育。
09:16
So now, what about the law? Law is our educator. It teaches us about what's harmful and what's wrong. And it shapes behavior it deters by punishing perpetrators and securing remedies for victims. Right now, law is not up to the challenge of deepfakes. Across the globe, we lack well-tailored laws that would be designed to tackle digital impersonations that invade sexual privacy, that damage reputations and that cause emotional distress. What happened to Rana Ayyub is increasingly commonplace. Yet, when she went to law enforcement in Delhi, she was told nothing could be done. And the sad truth is that the same would be true in the United States and in Europe.
接下來(lái),法律呢? 法律是我們的教育家。 它教育我們什么是有害的, 什么是錯(cuò)誤的。 法律通過(guò)懲罰肇事作案者 和保護(hù)對(duì)受害者的補(bǔ)救 來(lái)規(guī)范行為。 現(xiàn)在,法律還無(wú)法應(yīng)對(duì) deepfakes帶來(lái)的挑戰(zhàn)。 在全球范圍內(nèi), 我們?nèi)鄙倬闹贫ǖ姆桑?解決數(shù)字偽造問(wèn)題的法律, 針對(duì)會(huì)侵犯性隱私, 毀壞名譽(yù) 并造成情緒困擾的事件的法律。 發(fā)生在Rana Ayyub身上的 事情越來(lái)越普遍。 但是,當(dāng)她去德里 的執(zhí)法部門(mén)舉報(bào)時(shí), 她被告知他們無(wú)能為力。 并且令人悲傷的事實(shí)是, 在美國(guó)和歐洲 可能是同樣的情況。
10:07
So we have a legal vacuum that needs to be filled. My colleague Dr. Mary Anne Franks and I are working with US lawmakers to devise legislation that would ban harmful digital impersonations that are tantamount to identity theft. And we've seen similar moves in Iceland, the UK and Australia. But of course, that's just a small piece of the regulatory puzzle.
所以，我們有一個需要填補的法律空白。我和同事Mary Anne Franks博士正在與美國的立法者合作，起草禁止那些等同於身份盜竊的有害數字冒充行為的法律。我們也在冰島、英國和澳大利亞看到了類似的立法行動。當然，這只是整個監管難題中的一小塊拼圖。
10:34
Now, I know law is not a cure-all. Right? It's a blunt instrument. And we've got to use it wisely. It also has some practical impediments. You can't leverage law against people you can't identify and find. And if a perpetrator lives outside the country where a victim lives, then you may not be able to insist that the perpetrator come into local courts to face justice. And so we're going to need a coordinated international response. Education has to be part of our response as well. Law enforcers are not going to enforce laws they don't know about and proffer problems they don't understand. In my research on cyberstalking, I found that law enforcement lacked the training to understand the laws available to them and the problem of online abuse. And so often they told victims, "Just turn your computer off. Ignore it. It'll go away." And we saw that in Rana Ayyub's case. She was told, "Come on, you're making such a big deal about this. It's boys being boys." And so we need to pair new legislation with efforts at training.
我知道，法律並不是萬靈藥，對吧？它是一種生硬的工具，我們必須明智地使用它。法律也存在一些現實的障礙：你無法用法律對付那些你無法確認身份、也找不到的人。如果作案者和受害者不在同一個國家，你也許就無法要求作案者到當地法庭接受審判。因此，我們需要一個協調一致的國際響應。教育也必須是我們應對方案的一部分。執法者不會去執行他們不了解的法律，也不會去處理他們不理解的問題。在我關於網絡跟蹤騷擾的研究中，我發現執法部門缺乏相關培訓，既不了解可供使用的法律，也不了解網絡暴力問題。所以他們經常告訴受害者："把電腦關掉就行了，別理它，它自然會消失。"我們在Rana Ayyub的案例中也看到了這一點。她被告知："拜托，你太小題大做了，男孩子嘛，都這樣。"所以，我們需要在推出新法律的同時開展相應的培訓。
11:54
And education has to be aimed at the media as well. Journalists need educating about the phenomenon of deepfakes so they don't amplify and spread them. And this is the part where we're all involved. Each and every one of us needs educating. We click, we share, we like, and we don't even think about it. We need to do better. We need far better radar for fakery.
教育也必須面向媒體。記者們需要了解deepfakes這一現象，這樣他們才不會去放大和傳播這些內容。而這正是我們每個人都要參與的部分：我們每個人都需要這方面的教育。我們點擊、分享、點讚，卻根本不加思考。我們需要做得更好，我們需要對虛假內容有更敏銳的雷達。
12:25
So as we're working through these solutions, there's going to be a lot of suffering to go around. Rana Ayyub is still wrestling with the fallout. She still doesn't feel free to express herself on- and offline. And as she told me, she still feels like there are thousands of eyes on her naked body, even though, intellectually, she knows it wasn't her body. And she has frequent panic attacks, especially when someone she doesn't know tries to take her picture. "What if they're going to make another deepfake?" she thinks to herself. And so for the sake of individuals like Rana Ayyub and the sake of our democracy, we need to do something right now.
所以，在我們逐步落實這些解決方案的同時，還會有許多人承受痛苦。Rana Ayyub仍在與這件事的後果搏鬥。她在線上和線下仍然無法自如地表達自己。正如她告訴我的，她仍然感覺有上千雙眼睛盯著她的裸體，儘管理智上她知道那並不是她的身體。她還經常恐慌發作，尤其是當不認識的人試圖給她拍照時。"萬一他們要拿去做另一個deepfake呢？"她會這樣想。所以，為了像Rana Ayyub這樣的個人，也為了我們的民主，我們現在就必須有所行動。
13:11
Thank you.
謝謝大家。