We humans are fallible, imperfect beings prone to making mistakes and basic errors of judgment. So we create technologies to help ourselves out.
Take plans to launch fleets of self-driving cars. The belief here is that the technology will improve safety. But this is a very bold assumption to make at this stage in its development.
For one thing, the evidence to support the view simply is not there yet. The best data we have come from tests conducted over the course of 2016 in California, a state which, it must be remembered, boasts a mild climate hardly representative of global driving conditions.
Google’s Waymo scored best, with one human intervention every 5,127 miles driven. This was an improvement on the year before, but nowhere near perfect. In all, each of Waymo’s 60 testing vehicles drove about 10,597 miles in 2016, roughly 3,000 miles less than the annual US average per vehicle, and required at least two interventions over that period.
Tesla fared much worse. The electric vehicle maker’s four test cars drove an average of just 137 miles each that year, encountering 45 disengagement events per vehicle, or roughly one every three miles. Each intervention represents an accident that was potentially avoided.
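To make the comparison concrete, here is a minimal back-of-envelope sketch, in Python and added purely for illustration, that reproduces the disengagement arithmetic behind these figures; the inputs are the per-vehicle averages quoted above, not raw test logs.

```python
# Back-of-envelope check of the disengagement figures quoted above.
# All inputs are the article's averages; individual vehicles will vary.

def miles_per_disengagement(miles: float, disengagements: int) -> float:
    """Average miles driven between human interventions."""
    return miles / disengagements

# Waymo: roughly 10,597 miles per vehicle at one intervention every
# 5,127 miles implies about two interventions per car over the year.
print(10_597 / 5_127)                    # ~2.07 interventions per vehicle

# Tesla: 137 miles per vehicle with 45 disengagements per vehicle.
print(miles_per_disengagement(137, 45))  # ~3.0 miles between interventions
```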
Given that most industry watchers believe the public will not tolerate any faults at all, none of this is encouraging. It is certainly true that the technology is improving. But it is also the case that self-driving cars have been around since the mid-1990s, with one vehicle achieving a 98.2 per cent “autonomous driving percentage” even back then.
But even if the technical challenges can be surmounted, unexpected negative externalities probably cannot.
A case in point is Uber’s latest autonomous vehicle accident in Arizona, where the fault lay with the human driver of another car, not Uber’s vehicle. In this case the human failed to yield, drawing attention to one of the biggest challenges for the forthcoming autonomous transition: a world in which humans and autonomous vehicles will have to interact with each other safely.
What motivates humans to act, however, is very different to what motivates algorithms. At the basic level, most drivers — save those who are drunk, suicidal or intent on sowing fear or terror — have an interest in their own self-preservation or the preservation of others. That cannot be guaranteed of complex algorithms.
There are exceptions, such as last week’s terror attack in London. But removing humans from behind the wheel will not necessarily reduce the risks. Cars that drive themselves could easily be weaponised, since they need only be hacked, not driven by a martyr.
Alcoholism, meanwhile, killed three times as many Americans as fatal crashes did in 2014. So if driving encourages sobriety, another unintended consequence could be a rise in alcohol and drug abuse once humans are freed of that responsibility.
Then there is the trust we have to put in the coders. Normally in the corporate world employers devise elaborate reward and penalty programmes to ensure that workers are incentivised to do the best possible job, even when it’s in their interests to take short-cuts. They are held accountable. In sectors where human sloppiness can have a disproportionately bad effect on others — banking, say, or air traffic control — these sorts of incentives matter even more.
Self-driving cars, however, will be programmed and maintained by coder armies benefiting from safety in numbers when it comes to accountability. Can we be sure that they will always be properly motivated?
Finally, while there is a good case for self-driving technology to augment human driving skills, there is a risk this could lead to a degradation of those abilities. And we surely would not want to risk discovering that they were no longer there when we really needed them.