On Machine Intelligence 2 - 懂你英语 流利说 Level7 Unit2 Part1
Machine intelligence is here.
We're now using computation to make all sorts of decisions, but also new kinds of decisions.
We're asking questions to computation that have no single right answers, that are subjective and open-ended and value-laden.
We're asking questions like, "Who should the company hire?"
"Which update from which friend should you be shown?"
"Which convict is more likely to reoffend?"
"Which news item or movie should be recommended to people?"
Look, yes, we've been using computers for a while, but this is different.
This is a historical twist, because we cannot anchor computation for such subjective decisions the way we can anchor computation for flying airplanes, building bridges, going to the moon.
You know: did the airplane fly safely?
Did the bridge sway and fall?
There, we have agreed-upon, fairly clear benchmarks, and we have laws of nature to guide us.
We have no such anchors and benchmarks for decisions in messy human affairs.
*
What does Tufekci mean by "historical twist"? Computers are being used to solve subjective problems for the first time in history.
Why is using machine intelligence to solve subjective problems an issue? There are no guidelines for subjective issues.
With the development of machine intelligence, algorithms are now being used to answer subjective questions.
Computation is more reliable for objective questions because…there are clear standards.
If something reflects your personal values, it is value-laden.
*
This is a historical twist, because we cannot anchor computation for such subjective decisions the way we can anchor computation for flying airplanes, building bridges, going to the moon.
To make things more complicated, our software is getting more powerful, but it's also getting less transparent and more complex.
Recently, in the past decade, complex algorithms have made great strides.
They can recognize human faces.
They can decipher handwriting.
They can detect credit card fraud and block spam and they can translate between languages. They can detect tumors in medical imaging.
They can beat humans in chess and Go.
Much of this progress comes from a method called "machine learning."
Machine learning is different than traditional programming, where you give the computer detailed, exact, painstaking instructions.
It's more like you take the system and you feed it lots of data, including unstructured data, like the kind we generate in our digital lives.
And the system learns by churning through this data.
And also, crucially, these systems don't operate under a single-answer logic.
They don't produce a simple answer; it's more probabilistic: "This one is probably more like what you're looking for."
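The contrast above can be sketched in code. This is a toy illustration of my own, not anything from the talk: a hand-written spam rule (traditional programming, explicit instructions) next to a miniature learner that churns through labeled examples and returns "probably spam" as a probability rather than a single yes/no answer. All the function names and example messages are invented for illustration.

```python
def rule_based_spam_check(message: str) -> bool:
    """Traditional programming: detailed, explicit instructions."""
    banned = {"winner", "free", "prize"}
    return any(word in banned for word in message.lower().split())

def train_word_frequencies(examples):
    """'Machine learning' in miniature: churn through labeled data,
    counting how often each word appears in spam vs. non-spam ('ham')."""
    counts = {"spam": {}, "ham": {}}
    totals = {"spam": 0, "ham": 0}
    for text, label in examples:
        for word in text.lower().split():
            counts[label][word] = counts[label].get(word, 0) + 1
            totals[label] += 1
    return counts, totals

def spam_probability(message, counts, totals):
    """No single-answer logic: the output is a probability, i.e.
    'this one is probably more like what you're looking for.'"""
    spam_score, ham_score = 1.0, 1.0
    for word in message.lower().split():
        # add-one smoothing so unseen words don't zero out a score
        spam_score *= (counts["spam"].get(word, 0) + 1) / (totals["spam"] + 1)
        ham_score *= (counts["ham"].get(word, 0) + 1) / (totals["ham"] + 1)
    return spam_score / (spam_score + ham_score)

examples = [
    ("claim your free prize now", "spam"),
    ("you are a winner click here", "spam"),
    ("lunch at noon tomorrow", "ham"),
    ("see you at the meeting", "ham"),
]
counts, totals = train_word_frequencies(examples)
print(spam_probability("free prize inside", counts, totals))   # high: probably spam
print(spam_probability("see you at lunch", counts, totals))    # low: probably not
```

The rule-based version is fully inspectable; the learned version gives only a score, which hints at why, at scale and with far richer models, we stop understanding what the system actually learned.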
Now, the upside is: this method is really powerful.
The head of Google's AI systems called it, "the unreasonable effectiveness of data."
The downside is, we don't really understand what the system learned. In fact, that's its power.
This is less like giving instructions to a computer; it's more like training a puppy-machine-creature we don't really understand or control.
So this is our problem.
It's a problem when this artificial intelligence system gets things wrong.
It's also a problem when it gets things right, because we don't even know which is which when it's a subjective problem.
We don't know what this thing is thinking.
*
How is machine learning different from traditional programming? It leads to probabilistic answers.
If a method or argument is probabilistic, it is...based on what is most likely to be true.
What is one characteristic of traditional programming? It requires explicit instructions.
Which of the following best describes machine learning? It enables computers to process complex data and learn from it.
Why is that a problem when machine intelligence gets things right? People can't examine how the system reaches its conclusion.
To make great strides means...to achieve significant progress.
*
Much of this progress comes from a method called "machine learning."
Machine learning is different than traditional programming, where you give the computer detailed, exact, painstaking instructions.
*
This is less like giving instructions to a computer; it's more like training a puppy-machine-creature we don't really understand or control.
*
For machine learning, you take the system and you feed it lots of data, including unstructured data, like the kind we generate in our digital lives.
And the system learns by churning through this data.
*
Recently, in the past decade, complex algorithms have made great strides.
So, consider a hiring algorithm -- a system used to hire people -- using machine-learning systems.
Such a system would have been trained on previous employees' data and instructed to find and hire people like the existing high performers in the company.
Sounds good.
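As a hypothetical sketch (my illustration, not the actual systems Tufekci describes), such a hiring model might score each applicant by how closely they resemble the company's existing high performers. The feature vectors and names below are made up for illustration.

```python
from math import sqrt

# Invented feature vectors for past high performers: (years_experience, test_score)
high_performers = [(5.0, 88.0), (7.0, 92.0), (6.0, 90.0)]

def similarity_score(applicant):
    """Higher when the applicant resembles past high performers.
    The catch Tufekci raises: whatever bias shaped who was hired and
    promoted before is baked straight into this training data."""
    def distance(a, b):
        return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = min(distance(applicant, hp) for hp in high_performers)
    return 1.0 / (1.0 + nearest)  # distance 0 -> score 1.0

def rank_applicants(applicants):
    """Rank candidates, most similar to past high performers first."""
    return sorted(applicants, key=similarity_score, reverse=True)

print(rank_applicants([(1.0, 95.0), (6.0, 89.0)]))
```

Nothing in the code mentions gender or race, which is why the systems sound objective; the problem, as the talk goes on to show, is what the model can infer and absorb from the data anyway.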
I once attended a conference that brought together human resources managers and executives, high-level people, using such systems in hiring.
They were super excited.
They thought that this would make hiring more objective, less biased, and give women and minorities a better shot against biased human managers.
And look -- human hiring is biased.
I know. I mean, in one of my early jobs as a programmer,
my immediate manager would sometimes come down to where I was really early in the morning or really late in the afternoon, and she'd say, "Zeynep, let's go to lunch!"
I'd be puzzled by the weird timing. It's 4pm. Lunch?
I was broke, so free lunch. I always went.
I later realized what was happening.
My immediate managers had not confessed to their higher-ups that the programmer they hired for a serious job was a teen girl who wore jeans and sneakers to work.
I was doing a good job, I just looked wrong and was the wrong age and gender.
So hiring in a gender- and race-blind way certainly sounds good to me.
But with these systems, it is more complicated, and here's why:
Currently, computational systems can infer all sorts of things about you from your digital crumbs, even if you have not disclosed those things.
They can infer your sexual orientation, your personality traits, your political leanings.
They have predictive power with high levels of accuracy.
Remember -- for things you haven't even disclosed. This is inference.
*
What does Tufekci's personal experience with her immediate manager suggest? Human bias is a problem in the workplace.
Why did Tufekci’s immediate manager want to hide her from the higher-ups? She didn’t appear qualified for the job due to her age and gender.
Why were people in the conference excited about the hiring algorithm? It could remove bias from the hiring process.
*
A hiring algorithm would find and hire strong candidates by basing its criteria on existing employees.
To make an inference means…to form an opinion based on the available information.
To provide a benchmark for something means…to set a standard for it.
*
Such a system would have been trained on previous employees' data and instructed to find and hire people like the existing high performers in the company.
*
My immediate managers had not confessed to their higher-ups that the programmer they hired for a serious job was a teen girl who wore jeans and sneakers to work.
*
Hiring in a gender- and race-blind way certainly sounds good to me.
I was doing a good job, but I was the wrong age and gender.
Recently, in the past decade, complex algorithms have made great strides.
The downside is, we don't really understand what the system learned.
They can detect credit card fraud and block spam and they can translate between languages.
We have no such anchors and benchmarks for decisions in messy human affairs.
*
Currently, computational systems can infer all sorts of things about you from your digital crumbs, even if you have not disclosed those things.
It's more like you take the system and you feed it lots of data, including unstructured data, like the kind we generate in our digital lives.
This is less like giving instructions to a computer; it's more like training a puppy-machine-creature we don't really understand or control.
We cannot anchor computation for such subjective decisions the way we can anchor computation for flying airplanes, building bridges, going to the moon.
We're asking questions to computation which are subjective, open-ended, value-laden and have no single right answer.