The incredible inventions of intuitive AI

How many of you are creatives, designers, engineers, entrepreneurs, artists, or maybe you just have a really big imagination? Show of hands? (Cheers)

That's most of you. I have some news for us creatives. Over the course of the next 20 years, more will change around the way we do our work than has happened in the last 2,000. In fact, I think we're at the dawn of a new age in human history.

Now, there have been four major historical eras defined by the way we work. The Hunter-Gatherer Age lasted several million years. Then the Agricultural Age lasted several thousand years. The Industrial Age lasted a couple of centuries. And the Information Age has lasted just a few decades. Today, we're on the cusp of our next great era as a species.

Welcome to the Augmented Age. In this new era, your natural human capabilities are going to be augmented by computational systems that help you think, robotic systems that help you make, and a digital nervous system that connects you to the world far beyond your natural senses. Let's start with cognitive augmentation. How many of you are augmented cyborgs?

(Laughter)

I would actually argue that we're already augmented. Imagine you're at a party, and somebody asks you a question that you don't know the answer to. If you have one of these, in a few seconds, you can know the answer. But this is just a primitive beginning. Even Siri is just a passive tool. In fact, for the last three-and-a-half million years, the tools that we've had have been completely passive. They do exactly what we tell them and nothing more. Our very first tool only cut where we struck it. The chisel only carves where the artist points it. And even our most advanced tools do nothing without our explicit direction. In fact, to date, and this is something that frustrates me, we've always been limited by this need to manually push our wills into our tools — manually, literally using our hands, even with computers. But I'm more like Scotty in "Star Trek."

(Laughter)

I want to have a conversation with a computer. I want to say, "Computer, let's design a car," and the computer shows me a car. And I say, "No, more fast-looking, and less German," and bang, the computer shows me an option.

(Laughter)

That conversation might be a little ways off, though probably less far off than many of us think, but right now, we're working on it. Tools are making this leap from being passive to being generative. Generative design tools use a computer and algorithms to synthesize geometry and come up with new designs all by themselves. All they need are your goals and your constraints.

I'll give you an example. In the case of this aerial drone chassis, all you would need to do is tell it something like: it has four propellers, you want it to be as lightweight as possible, and you need it to be aerodynamically efficient. Then the computer explores the entire solution space: every single possibility that meets your criteria — millions of them. It takes big computers to do this. But it comes back to us with designs that we, by ourselves, never could have imagined. And the computer's coming up with this stuff all by itself — no one ever drew anything, and it started completely from scratch. And by the way, it's no accident that the drone body looks just like the pelvis of a flying squirrel.

(Laughter)

It's because the algorithms are designed to work the same way evolution does.
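To make that concrete, here's a minimal sketch of the evolutionary loop behind such tools, written in Python. It is purely illustrative: a toy fitness function stands in for real physics simulation, and none of this is the actual algorithm behind any shipping product.

```python
import random

# Toy generative design: evolve a drone arm described by four strut
# thicknesses (mm). Real tools score candidates with physics simulation;
# here, a made-up fitness rewards light designs that respect a strength floor.

def fitness(design):
    weight = sum(design)                          # lighter is better
    penalty = 0 if min(design) >= 2.0 else 1000   # hard constraint: struts >= 2 mm
    return weight + penalty                       # lower score wins

def mutate(design, step=0.5):
    # Nudge one strut thickness at random, keeping it positive.
    i = random.randrange(len(design))
    child = list(design)
    child[i] = max(0.1, child[i] + random.uniform(-step, step))
    return child

# Start from 50 random candidates, then select and mutate for 200 generations.
population = [[random.uniform(1.0, 10.0) for _ in range(4)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness)                  # selection: keep the fittest
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

best = min(population, key=fitness)
print("best design (strut thicknesses, mm):", [round(t, 2) for t in best])
```

Run it and the struts settle just above the 2 mm floor: the same "explore everything, keep what meets the criteria" behavior described above, only at toy scale.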

What's exciting is we're starting to see this technology out in the real world. We've been working with Airbus for a couple of years on this concept plane for the future. It's a ways out still. But just recently we used a generative-design AI to come up with this. This is a 3D-printed cabin partition that's been designed by a computer. It's stronger than the original yet half the weight, and it will be flying in the Airbus A320 later this year. So computers can now generate; they can come up with their own solutions to our well-defined problems. But they're not intuitive. They still have to start from scratch every single time, and that's because they never learn. Unlike Maggie.

(Laughter)

Maggie's actually smarter than our most advanced design tools. What do I mean by that? If her owner picks up that leash, Maggie knows with a fair degree of certainty it's time to go for a walk. And how did she learn? Well, every time the owner picked up the leash, they went for a walk. And Maggie did three things: she had to pay attention, she had to remember what happened, and she had to form and retain a pattern in her mind.
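To pin down what that pattern-forming step amounts to, here's a deliberately simple Python toy: Maggie's three steps reduced to counting outcomes. It makes no claim about how dogs, or real learning systems, actually work.

```python
# Maggie's three steps as a toy: attend to the cue, remember the outcomes,
# and act once the remembered pattern is strong enough.

history = [("leash", "walk"), ("leash", "walk"), ("leash", "vet"), ("leash", "walk")]

outcomes = [outcome for cue, outcome in history if cue == "leash"]
walk_odds = outcomes.count("walk") / len(outcomes)   # 3 walks out of 4 pickups

if walk_odds > 0.6:
    print(f"Leash is up: {walk_odds:.0%} chance of a walk. Head to the door!")
```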

Interestingly, that's exactly what computer scientists have been trying to get AIs to do for the last 60 or so years. Back in 1952, they built this computer that could play Tic-Tac-Toe. Big deal. Then 45 years later, in 1997, Deep Blue beats Kasparov at chess. In 2011, Watson beats these two humans at Jeopardy, which is much harder for a computer to play than chess is. Rather than working from predefined recipes, Watson had to use reasoning to overcome his human opponents. And then, a couple of weeks ago, DeepMind's AlphaGo beats the world's best human at Go, which is the most difficult game that we have. In fact, in Go, there are more possible moves than there are atoms in the universe. So in order to win, what AlphaGo had to do was develop intuition. And at some points, AlphaGo's programmers didn't understand why AlphaGo was doing what it was doing.

And things are moving really fast. I mean, consider — in the space of a human lifetime, computers have gone from a child's game to what's recognized as the pinnacle of strategic thought. What's basically happening is computers are going from being like Spock to being a lot more like Kirk.

(Laughter)

Right? From pure logic to intuition. Would you cross this bridge? Most of you are saying, "Oh, hell no!"

(Laughter)

And you arrived at that decision in a split second. You just sort of knew that bridge was unsafe. And that's exactly the kind of intuition that our deep-learning systems are starting to develop right now. Very soon, you'll literally be able to show something you've made, something you've designed, to a computer, and it will look at it and say, "Sorry, homie, that'll never work. You have to try again." Or you could ask it whether people are going to like your next song, or your next flavor of ice cream. Or, much more importantly, you could work with a computer to solve a problem that we've never faced before. For instance, climate change. We're not doing a very good job on our own, so we could certainly use all the help we can get. That's what I'm talking about: technology amplifying our cognitive abilities so we can imagine and design things that were simply out of our reach as plain old un-augmented humans.

So what about making all of this crazy new stuff that we're going to invent and design? I think the era of human augmentation is as much about the physical world as it is about the virtual, intellectual realm. How will technology augment us? In the physical world, robotic systems. OK, there's certainly a fear that robots are going to take jobs away from humans, and that is true in certain sectors. But I'm much more interested in this idea that humans and robots working together are going to augment each other, and start to inhabit a new space.

This is our applied research lab in San Francisco, where one of our areas of focus is advanced robotics, specifically, human-robot collaboration. And this is Bishop, one of our robots. As an experiment, we set it up to help a person working in construction doing repetitive tasks — tasks like cutting out holes for outlets or light switches in drywall.

(Laughter)

So, Bishop's human partner can tell it what to do in plain English and with simple gestures, kind of like talking to a dog, and then Bishop executes those instructions with perfect precision. We're using the human for what the human is good at: awareness, perception and decision-making. And we're using the robot for what it's good at: precision and repetitiveness.

Here's another cool project that Bishop worked on. The goal of this project, which we called the HIVE, was to prototype the experience of humans, computers and robots all working together to solve a highly complex design problem. The humans acted as labor. They cruised around the construction site, they manipulated the bamboo — which, by the way, because it's a non-isotropic material, is super hard for robots to deal with. But then the robots did this fiber winding, which was almost impossible for a human to do. And then we had an AI that was controlling everything. It was telling the humans what to do, telling the robots what to do and keeping track of thousands of individual components. What's interesting is, building this pavilion was simply not possible without humans, robots and AI augmenting each other.

OK, I'll share one more project. This one's a little bit crazy. We're working with Amsterdam-based artist Joris Laarman and his team at MX3D to generatively design and robotically print the world's first autonomously manufactured bridge. So, Joris and an AI are designing this thing right now, as we speak, in Amsterdam. And when they're done, we're going to hit "Go," and robots will start 3D printing in stainless steel, and then they're going to keep printing, without human intervention, until the bridge is finished.

So, as computers are going to augment our ability to imagine and design new stuff, robotic systems are going to help us build and make things that we've never been able to make before. But what about our ability to sense and control these things? What about a nervous system for the things that we make?

Our nervous system, the human nervous system, tells us everything that's going on around us. But the nervous system of the things we make is rudimentary at best. For instance, a car doesn't tell the city's public works department that it just hit a pothole at the corner of Broadway and Morrison. A building doesn't tell its designers whether or not the people inside like being there, and the toy manufacturer doesn't know if a toy is actually being played with — how and where and whether or not it's any fun. Look, I'm sure that the designers imagined this lifestyle for Barbie when they designed her.

(Laughter)

But what if it turns out that Barbie's actually really lonely?

(Laughter)

If the designers had known what was really happening in the real world with their designs — the road, the building, Barbie — they could've used that knowledge to create an experience that was better for the user. What's missing is a nervous system connecting us to all of the things that we design, make and use. What if all of you had that kind of information flowing to you from the things you create in the real world? With all of the stuff we make, we spend a tremendous amount of money and energy — in fact, last year, about two trillion dollars — convincing people to buy the things we've made. But if we had this connection to the things that we design and create after they're out in the real world, after they've been sold or launched or whatever, we could actually change that, and go from making people want our stuff to making stuff that people want in the first place.
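As a concrete sketch of what one "nerve impulse" in such a system might look like, here is the pothole report from the car example, written out in Python. Every field name and value is invented for illustration; this is not any real automotive or municipal API.

```python
import json
from datetime import datetime, timezone

# Hypothetical "nerve impulse" a connected car might emit the moment it hits
# a pothole. All field names and values are illustrative inventions.
event = {
    "type": "pothole_impact",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "intersection": "Broadway and Morrison",
    "vertical_accel_g": 2.7,       # suspension shock measured by the car's IMU
    "speed_kmh": 38,
    "vehicle_id": "anon-4f9c",     # anonymized: the city sees roads, not drivers
}

print(json.dumps(event, indent=2))  # in practice: published to a public-works intake queue
```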

The good news is, we're working on digital nervous systems that connect us to the things we design. We're working on one project with a couple of guys down in Los Angeles called the Bandito Brothers and their team. And one of the things these guys do is build insane cars that do absolutely insane things. These guys are crazy —

(Laughter)

in the best way. And what we're doing with them is taking a traditional race-car chassis and giving it a nervous system.

So we instrumented it with dozens of sensors, put a world-class driver behind the wheel, took it out to the desert and drove the hell out of it for a week. And the car's nervous system captured everything that was happening to the car. We captured four billion data points, all of the forces that it was subjected to. And then we did something crazy. We took all of that data, and plugged it into a generative-design AI we call "Dreamcatcher." So what do you get when you give a design tool a nervous system, and you ask it to build you the ultimate car chassis? You get this. This is something that a human could never have designed. Except a human did design this — but it was a human that was augmented by a generative-design AI, a digital nervous system and robots that can actually fabricate something like this.
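To give a feel for the data step in that pipeline, here's a small, hypothetical Python sketch: boiling raw sensor samples down to per-attachment-point peak loads that a generative-design tool could take as constraints. The names and structure are invented for illustration, not Dreamcatcher's actual interface.

```python
from collections import defaultdict

def peak_loads(samples):
    """samples: iterable of (sensor_id, force_in_newtons) pairs from the chassis."""
    peaks = defaultdict(float)
    for sensor_id, force in samples:
        peaks[sensor_id] = max(peaks[sensor_id], abs(force))  # worst case per sensor
    return dict(peaks)

# A week of desert driving, compressed to three readings for illustration:
telemetry = [
    ("front_left_mount", 4200.0),
    ("front_left_mount", 9100.0),
    ("rear_axle", 7600.0),
]

# Apply a safety factor before handing the loads to the design solver.
constraints = {sensor: load * 1.5 for sensor, load in peak_loads(telemetry).items()}
print(constraints)  # {'front_left_mount': 13650.0, 'rear_axle': 11400.0}
```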

So if this is the future, the Augmented Age, and we're going to be augmented cognitively, physically and perceptually, what will that look like? What is this wonderland going to be like?

I think we're going to see a world where we're moving from things that are fabricated to things that are farmed. Where we're moving from things that are constructed to that which is grown. We're going to move from being isolated to being connected. And we'll move away from extraction to embrace aggregation. I also think we'll shift from craving obedience from our things to valuing autonomy.

Thanks to our augmented capabilities, our world is going to change dramatically. We're going to have a world with more variety, more connectedness, more dynamism, more complexity, more adaptability and, of course, more beauty. The shape of things to come will be unlike anything we've ever seen before. Why? Because what will be shaping those things is this new partnership between technology, nature and humanity. That, to me, is a future well worth looking forward to.

Thank you all so much.
