Robots are not just taking jobs away from humans; they have also begun hiring them, because they can screen applicants quickly. But this is dangerous.
Vocabulary and expressions you may encounter in the test:
the air is thick with: (the air is) filled with; pervaded by
seductive: tempting; alluring [sɪˈdʌktɪv]
inherently: intrinsically; by nature [ɪnˈhɪərəntli]
bias: prejudice; partiality; (sewing) bias; deviation rate [ˈbaɪəs]
ethnicity: ethnic grouping [eθˈnɪsɪti]
proxy: agent; power of attorney [ˈprɒksi]
scenario: script; imagined situation [səˈnɑːriəʊ]
murky: dark; hazy; gloomy [ˈmɜːki]
By Sarah O’Connor
Robots are not just taking people’s jobs away, they are beginning to hand them out, too. Go to any recruitment industry event and you will find the air is thick with terms like “machine learning”, “big data” and “predictive analytics”.
The argument for using these tools in recruitment is simple. Robo-recruiters can sift through thousands of job candidates far more efficiently than humans. They can also do it more fairly. Since they do not harbour conscious or unconscious human biases, they will recruit a more diverse and meritocratic workforce.
This is a seductive idea but it is also dangerous. Algorithms are not inherently neutral just because they see the world in zeros and ones.
For a start, any machine learning algorithm is only as good as the training data from which it learns. Take the PhD thesis of academic researcher Colin Lee, released to the press this year. He analysed data on the success or failure of 441,769 job applications and built a model that could predict with 70 to 80 per cent accuracy which candidates would be invited to interview. The press release plugged this algorithm as a potential tool to screen a large number of CVs while avoiding “human error and unconscious bias”.
But a model like this would absorb any human biases at work in the original recruitment decisions. For example, the research found that age was the biggest predictor of being invited to interview, with the youngest and the oldest applicants least likely to be successful. You might think it fair enough that inexperienced youngsters do badly, but the routine rejection of older candidates seems like something to investigate rather than codify and perpetuate.
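The mechanism described above — a model trained on past decisions codifying the bias in those decisions — can be sketched in a few lines of Python. All names and numbers here are invented for illustration; the actual thesis model was far more sophisticated.

```python
# Toy sketch: a "model" that simply learns the historical interview-invite
# rate per age band, then invites new candidates from favoured bands only.
# The hypothetical data below encode the bias described in the article:
# the youngest and oldest bands were rarely invited.

historical = {
    # age band: (applications, invitations) -- made-up numbers
    "18-25": (1000, 300),
    "26-35": (1000, 700),
    "36-50": (1000, 600),
    "51+":   (1000, 200),
}

def train(data):
    """Learn each band's invite rate from the historical decisions."""
    return {band: invited / total for band, (total, invited) in data.items()}

def predict(model, band, threshold=0.5):
    """Invite a new candidate iff their band was historically favoured."""
    return model[band] >= threshold

model = train(historical)
print(predict(model, "26-35"))  # True  -- the favoured band stays favoured
print(predict(model, "51+"))    # False -- the past rejection is perpetuated
```

The point is that nothing in the code "intends" to discriminate: the age bias enters entirely through the training data and is then applied mechanically to every future candidate.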
Mr Lee acknowledges these problems and suggests it would be better to strip the CVs of attributes such as gender, age and ethnicity before using them. Even then, algorithms can wind up discriminating. In a paper published this year, academics Solon Barocas and Andrew Selbst use the example of an employer who wants to select those candidates most likely to stay for the long term. If the historical data show women tend to stay in jobs for a significantly shorter time than men (possibly because they leave when they have children), the algorithm will probably discriminate against them on the basis of attributes that are a reliable proxy for gender.
Or how about the distance a candidate lives from the office? That might well be a good predictor of attendance or longevity at the company; but it could also inadvertently discriminate against some groups, since neighbourhoods can have different ethnic or age profiles.
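A hypothetical sketch of this proxy effect: an ostensibly neutral distance cut-off produces very different pass rates for two groups whose neighbourhoods differ. Every candidate and distance below is made up for the illustration.

```python
# A facially neutral screen ("live within 10 km of the office") can have
# disparate impact when group membership correlates with neighbourhood.

candidates = [
    # (group, distance_km) -- group A clusters near the office, group B far
    ("A", 3), ("A", 5), ("A", 8), ("A", 4),
    ("B", 12), ("B", 15), ("B", 6), ("B", 18),
]

def pass_rate(group, cutoff_km=10):
    """Fraction of a group's candidates who survive the distance screen."""
    distances = [d for g, d in candidates if g == group]
    return sum(d <= cutoff_km for d in distances) / len(distances)

print(pass_rate("A"))  # 1.0  -- every group-A candidate passes
print(pass_rate("B"))  # 0.25 -- most group-B candidates are screened out
```

Distance was never a protected attribute, yet because it correlates with group membership it acts as a proxy — exactly the problem the article raises next.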
These scenarios raise the tricky question of whether it is wrong to discriminate even when it is rational and unintended. This is murky legal territory. In the US, the doctrine of “disparate impact” outlaws ostensibly neutral employment practices that disproportionately harm “protected classes”, even if the employer does not intend to discriminate. But employers can successfully defend themselves if they can prove there is a strong business case for what they are doing. If the intention of the algorithm is simply to recruit the best people for the job, that may be a good enough defence.
Still, it is clear that employers who want a more diverse workforce cannot assume that all they need to do is turn over recruitment to a computer. If that is what they want, they will need to use data more imaginatively.
Instead of taking their own company culture as a given and looking for the candidates statistically most likely to prosper within it, for example, they could seek out data about where (and in which circumstances) a more diverse set of workers thrive.
Machine learning will not propel your workforce into the future if the only thing it learns from is your past.
1. Which of the following is not a reason for using these robots in recruitment?
A. sift job candidates more efficiently
B. take a more procedural approach to save time
C. sift job candidates more fairly
D. recruit a more diverse and meritocratic workforce
Answer: (1)
2. Which statement about relying on robots for fairer staff recruitment, as mentioned in the passage, is not correct?
A. algorithms are inherently neutral
B. it is seductive but dangerous
C. robots see the world in zeros and ones
D. machine learning algorithm is only as good as the training data from which it learns
Answer: (2)
3. According to Colin Lee’s research, what was the biggest predictor of being invited to interview?
A. gender
B. ethnicity
C. age
D. education
Answer: (3)
4. What should employers do if they want a more diverse workforce from computer-based recruitment?
A. use data more accurately
B. use data more imaginatively
C. gather more data
D. strip the CVs of attributes such as gender, age and ethnicity
Answer: (4)
(1) Answer: B. take a more procedural approach to save time
Explanation: Robo-recruiters can sift through thousands of job candidates far more efficiently than humans. They can also do it more fairly: since they do not harbour conscious or unconscious human biases, they will recruit a more diverse and meritocratic workforce. A more procedural approach to save time is not mentioned in the passage.
(2) Answer: A. algorithms are inherently neutral
Explanation: The idea is seductive but also dangerous. Algorithms are not inherently neutral just because they see the world in zeros and ones, and any machine learning algorithm is only as good as the training data from which it learns.
(3) Answer: C. age
Explanation: The research found that age was the biggest predictor of being invited to interview, with the youngest and the oldest applicants least likely to be successful.
(4) Answer: B. use data more imaginatively
Explanation: Employers who want to hand recruitment over to a computer and still get a more diverse workforce will need to use data more imaginatively.