
How should driverless cars handle dangerous situations?


28 September 2016


Today I have been both murderous and merciful. I have deliberately mown down pensioners and a pack of dogs. I have ploughed into the homeless, slain a couple of athletes and run over the obese. But I have always tried to save the children.


As I finish my session on the Moral Machine — a public experiment being run by the Massachusetts Institute of Technology — I learn that my moral outlook is not universally shared. Some argue that aggregating public opinions on ethical dilemmas is an effective way to endow intelligent machines, such as driverless cars, with limited moral reasoning capacity. Yet after my experience, I am not convinced that crowdsourcing is the best way to develop what is essentially the ethics of killing people. The question is not purely academic: Tesla is being sued in China over the death of a driver of a car equipped with its “semi-autonomous” autopilot. Tesla denies the technology was at fault.


Anyone with a computer and a coffee break can contribute to MIT’s mass experiment, which imagines the brakes failing on a fully autonomous vehicle. The vehicle is packed with passengers, and heading towards pedestrians. The experiment depicts 13 variations of the “trolley problem” — a classic dilemma in ethics that involves deciding who will die under the wheels of a runaway tram.


In MIT’s reformulation, the runaway is a self-driving car that can keep to its path or swerve; both mean death and destruction. The choice can be between passengers and pedestrians, or two sets of pedestrians. Calculating who should perish involves pitting more lives against fewer, young against old, professionals against the homeless, pregnant women against athletes, humans against pets.

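To make the structure of these dilemmas concrete, here is a minimal Python sketch of how one such scenario might be represented, together with the simple “save the greatest number” rule discussed below. The class and field names are my own invention for illustration, not MIT’s actual data model.

from dataclasses import dataclass

@dataclass
class Character:
    kind: str                  # e.g. "child", "pensioner", "athlete", "dog"
    is_passenger: bool = False
    jaywalking: bool = False

# Staying on course kills the car's occupants.
stay_victims = [Character("child", is_passenger=True),
                Character("athlete", is_passenger=True)]

# Swerving kills two pedestrians who are crossing legally.
swerve_victims = [Character("pensioner"), Character("homeless person")]

def utilitarian_choice(stay_victims, swerve_victims):
    """Pick whichever action kills fewer - the 'greatest number' rule."""
    return "stay" if len(stay_victims) <= len(swerve_victims) else "swerve"

print(utilitarian_choice(stay_victims, swerve_victims))  # a tie defaults to staying on course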

At heart, the trolley problem is about deciding who lives, who dies — the kind of judgment that truly autonomous vehicles may eventually make. My “preferences” are revealed afterwards: I mostly save children and sacrifice pets. Pedestrians who are not jaywalking are spared and passengers expended. It is obvious: by choosing to climb into a driverless car, they should shoulder the burden of risk. As for my aversion to swerving, should caution not dictate that driverless cars are generally programmed to follow the road?

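One way to read these preferences is as an ordered list of tie-breakers rather than a single principle. Continuing the hypothetical Character sketch above, the fragment below encodes roughly the priorities described here (save children first, spare pedestrians who are not jaywalking at the expense of passengers, and otherwise keep to the road), purely as an illustration of how one person’s norm might be written down, not as anyone’s actual policy.

def count(victims, predicate):
    # Number of would-be victims matching a condition.
    return sum(1 for c in victims if predicate(c))

def my_choice(stay_victims, swerve_victims):
    # 1. Save as many children as possible.
    kids_stay = count(stay_victims, lambda c: c.kind == "child")
    kids_swerve = count(swerve_victims, lambda c: c.kind == "child")
    if kids_stay != kids_swerve:
        return "stay" if kids_stay < kids_swerve else "swerve"
    # 2. Spare law-abiding pedestrians; passengers shoulder the risk.
    lawful_stay = count(stay_victims, lambda c: not c.is_passenger and not c.jaywalking)
    lawful_swerve = count(swerve_victims, lambda c: not c.is_passenger and not c.jaywalking)
    if lawful_stay != lawful_swerve:
        return "stay" if lawful_stay < lawful_swerve else "swerve"
    # 3. Otherwise, stay on the road rather than swerve.
    return "stay"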

It is illuminating — until you see how your preferences stack up against everyone else’s. In the business of life-saving, I fall short — especially when it comes to protecting car occupants. Upholding the law and not swerving seem more important to me than to others; the social status of my intended victims much less so.


We could argue over the technical aspects of dishing out death judiciously. For example, if we are to condemn car occupants, would we go ahead regardless of whether the passengers are children or criminals?


But to fret over such details would be pointless. If anything, this experiment demonstrates the extreme difficulty of reaching a consensus on the ethics of driverless cars. Similar surveys show that the utilitarian ideal of saving the greatest number of lives works pretty well for most people as long as they are not the roadkill.


I am pessimistic that we can simply pool our morality and subscribe to a norm — because, at least for me, the norm is not normal. This is the hurdle faced by makers of self-driving cars, which promise safer roads overall by reducing human error: who will buy a vehicle run on murderous algorithms they do not agree with, let alone a car programmed to sacrifice its occupants?


It is the idea of premeditated killing that is most troubling. That sensibility renders the death penalty widely unpalatable, and ensures abortion and euthanasia remain contentious areas of regulation. Most of us, though, grudgingly accept that accidents happen. Even with autonomous cars, there may be room for leaving some things to chance.

 

