Robots and Humans
——Source: CET4-2016-06
As Artificial Intelligence (AI) becomes increasingly sophisticated, there are growing concerns that robots could become a threat.
This danger can be avoided, according to computer science professor Stuart Russell, if we figure out how to turn human values into programmable code.
Russell argues that if a robot does chores around the house, you wouldn't want it to put the pet cat in the oven to make dinner for the hungry children. "You would want the robot preloaded with a good set of values," said Russell.
Some robots are already programmed with basic human values.
For example, mobile robots have been programmed to keep a comfortable distance from humans.
Obviously there are cultural differences, but if you were talking to another person and they came up close in your personal space, you wouldn't think that's the kind of thing a properly brought-up person would do.
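As a rough illustration of what such a personal-space rule could look like in code, here is a minimal sketch. The `Position` class, the 1.2 m threshold, and the action names are all hypothetical, not taken from any real robot platform.

```python
# Hypothetical sketch: a mobile robot backing off when it enters a person's
# personal space. The 1.2 m threshold is an illustrative value, not a standard.
from dataclasses import dataclass
import math


@dataclass
class Position:
    x: float
    y: float


COMFORTABLE_DISTANCE_M = 1.2  # assumed personal-space radius, culture-dependent


def distance(a: Position, b: Position) -> float:
    return math.hypot(a.x - b.x, a.y - b.y)


def next_move(robot: Position, person: Position) -> str:
    """Decide whether the robot should keep approaching or hold back."""
    if distance(robot, person) < COMFORTABLE_DISTANCE_M:
        return "stop_and_back_away"   # too close: respect personal space
    return "continue_task"            # far enough: carry on


if __name__ == "__main__":
    print(next_move(Position(0.0, 0.0), Position(0.5, 0.5)))  # stop_and_back_away
    print(next_move(Position(0.0, 0.0), Position(3.0, 0.0)))  # continue_task
```

A real system would of course treat the threshold as context-dependent, which is exactly the cultural-difference point above.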
It will be possible to create more sophisticated moral machines, if only we can find a way to set out human values as clear rules.
Robots could also learn values by drawing patterns from large sets of data on human behavior.
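One very simplified way to read "drawing patterns from data": score a candidate action by how often humans chose it in similar recorded situations. The sketch below uses made-up situations, actions, and logs purely for illustration; real value-learning systems are far more involved.

```python
# Illustrative sketch: estimate how acceptable an action is from logged
# human behaviour, using frequency counts per situation (data is invented).
from collections import Counter, defaultdict

observations = [
    ("hungry_children", "cook_pasta"),
    ("hungry_children", "cook_pasta"),
    ("hungry_children", "order_pizza"),
    ("pet_in_kitchen", "move_pet_outside"),
    ("pet_in_kitchen", "move_pet_outside"),
]

counts: dict[str, Counter] = defaultdict(Counter)
for situation, action in observations:
    counts[situation][action] += 1


def acceptability(situation: str, action: str) -> float:
    """Fraction of the time humans chose this action in this situation."""
    seen = counts[situation]
    total = sum(seen.values())
    return seen[action] / total if total else 0.0


print(acceptability("hungry_children", "cook_pasta"))    # ~0.67: commonly chosen
print(acceptability("hungry_children", "cook_the_cat"))  # 0.0: never observed
```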
They are dangerous only if programmers are careless.
One simple check would be to program a robot to check the correct course of action with a human when presented with an unusual situation.
If the robot is unsure whether an animal is suitable for the microwave, it has the opportunity to stop, send out beeps, and ask for directions from a human.
If we humans aren't quite sure about a decision, we go and ask somebody else.
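A minimal sketch of that "stop and ask a human" check, assuming a hypothetical confidence score and a console prompt standing in for the beep-and-ask step; none of these names come from a real robotics API.

```python
# Hypothetical sketch: before acting, the robot checks how confident it is
# that the action is appropriate; below a threshold it stops and asks a human.

CONFIDENCE_THRESHOLD = 0.9  # assumed cut-off for acting without confirmation


def appropriateness_confidence(action: str, target: str) -> float:
    """Stand-in for whatever model scores an action; the values are made up."""
    known_cases = {("microwave", "frozen_dinner"): 0.99,
                   ("microwave", "pet_cat"): 0.01}
    return known_cases.get((action, target), 0.5)  # unusual case: low confidence


def decide(action: str, target: str) -> str:
    confidence = appropriateness_confidence(action, target)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"proceed: {action} {target}"
    # Unusual or doubtful situation: stop, signal, and defer to a human.
    print("BEEP: unsure whether this is okay, asking a human...")
    answer = input(f"Should I {action} the {target}? (yes/no) ")
    return f"proceed: {action} {target}" if answer.strip().lower() == "yes" else "abort"


if __name__ == "__main__":
    print(decide("microwave", "frozen_dinner"))  # confident, proceeds
    print(decide("microwave", "pet_cat"))        # stops and asks a human first
```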
The most difficult step in programming values will be deciding exactly what we believe is moral, and how to create a set of ethical rules.
But if we come up with an answer, robots could be good for humanity.
——2018-11-23 Edited by @Theo