Video 4: On Machine Intelligence IV
Audits are great and important, but they don't solve all our problems.
Take Facebook's powerful news feed algorithm -- you know, the one that ranks everything and decides what to show you from all the friends and pages you follow.
Should you be shown another baby picture?
(Laughter)
A sullen note from an acquaintance?
An important but difficult news item?
There's no right answer.
Facebook optimizes for engagement on the site: likes, shares, comments.
In August of 2014, protests broke out in Ferguson, Missouri,
after the killing of an African-American teenager by a white police officer, under murky circumstances.
The news of the protests was all over my algorithmically unfiltered Twitter feed, but nowhere on my Facebook.
Was it my Facebook friends?
I disabled Facebook's algorithm, which is hard because Facebook keeps wanting to make you come under the algorithm's control,
and saw that my friends were talking about it. It's just that the algorithm wasn't showing it to me.
I researched this and found it was a widespread problem.
The story of Ferguson wasn't algorithm-friendly.
It's not "likable." Who's going to click on "like?"
It's not even easy to comment on.
Without likes and comments, the algorithm was likely showing it to even fewer people, so we didn't get to see this.
Instead, that week, Facebook's algorithm highlighted this, which is the ALS Ice Bucket Challenge.
Worthy cause; dump ice water, donate to charity, fine.
But it was super algorithm-friendly.
The machine made this decision for us.
A very important but difficult conversation might have been smothered, had Facebook been the only channel.
Questions
What is a possible danger of using algorithms to filter news?
>Important social issues could be ignored.
How is news ranked by Facebook's news feed algorithm?
>according to the likelihood of user engagement.
When you protest something,...
>you strongly object to it.
Why did Tufekci share the example about Facebook's algorithm?
>to show how algorithms limit people's access to information
Now, finally, these systems can also be wrong in ways that don't resemble human systems.
Do you guys remember Watson, IBM's machine-intelligence system that wiped the floor with human contestants on Jeopardy?
It was a great player.
But then, for Final Jeopardy, Watson was asked this question:
"Its largest airport is named for a World War II hero, its second-largest for a World War II battle."
(Hums Final Jeopardy music)
Chicago. The two humans got it right.
Watson, on the other hand, answered "Toronto" -- for a US city category!
The impressive system also made an error that a human would never make, that a second-grader wouldn't make.
Our machine intelligence can fail in ways that don't fit error patterns of humans, in ways we won't expect and be prepared for.
It'd be lousy not to get a job one is qualified for, but it would triple suck if it was because of stack overflow in some subroutine.
(Laughter)
In May of 2010, a flash crash on Wall Street fueled by a feedback loop in Wall Street's "sell" algorithm wiped a trillion dollars of value in 36 minutes.
I don't even want to think what "error" means in the context of lethal autonomous weapons.
So yes, humans have always had biases.
Decision makers and gatekeepers, in courts, in news, in war ... they make mistakes; but that's exactly my point.
We cannot escape these difficult questions.
We cannot outsource our responsibilities to machines.
(Applause)
Artificial intelligence does not give us a "Get out of ethics free" card.
Questions
What does "Watson's error on Jeopardy" indicate ?
>Machine intelligence makes errors that humans wouldn't.
Why does Tufekci admit that humans have baises and make mistakes ?
> to emphasize that people shouldn't expect machines to solve all of their problems.
To wipe the floor with someone is...
>to defeat them easily.
Data scientist Fred Benenson calls this math-washing.
We need the opposite.
We need to cultivate algorithm suspicion, scrutiny and investigation.
We need to make sure we have algorithmic accountability, auditing and meaningful transparency.
We need to accept that bringing math and computation to messy, value-laden human affairs does not bring objectivity; rather, the complexity of human affairs invades the algorithms.
Yes, we can and we should use computation to help us make better decisions.
But we have to own up to our moral responsibility to judgment,
and use algorithms within that framework,
not as a means to abdicate and outsource our responsibilities to one another as human to human.
Machine intelligence is here.
That means we must hold on ever tighter to human values and human ethics.
Thank you.
(Applause)
Questions
How does Tufekci think algorithms should be used?
>as a tool for making better decisions
To abdicate responsibility means to...
>fail or refuse to perform a duty.
How does Tufekci end her presentation?
>by emphasizing the importance of human values and ethics
To cultivate something means to...
>try to develop and improve it.