
  • So, I'm discussing with you today

  • how we are going to pave a path for machines

  • with human-like identities

  • to catch up with human capabilities

  • and then surpass us in their brilliance.

  • I believe that it is possible in the next 15 to 20 years.

  • Now in some regards,

  • as you can see in this video,

  • we're able to achieve human level expressivity

  • and in some ways, physical performance.

  • We have a long way to go,

  • in terms of mental performance for general intelligence.

  • But in very narrow categories,

  • we already have machines that exceed human-level brilliance.

  • In order to bring together these little pockets of genius in machines

  • into something that is more human-like,

  • we need something bigger,

  • and that's where an open source movement can come in

  • to stitch these pieces together

  • into a tapestry of general intelligence in machines.

  • So now, my background is bringing together various technologies

  • and artistry to make robots extremely human-like,

  • in their physical presence and their conversational capabilities.

  • These are some examples of some of the robots that I've built.

  • So you can see that we achieve human-like expressivity

  • with biped mobility.

  • Thanks to some technology breakthroughs that I'll tell you about,

  • Now we've transitioned that technology into mass-produced products.

  • These products, once again,

  • have conversational capabilities.

  • They have the ability to see your face,

  • recognize your face, see your facial expressions,

  • and they also present to you their facial expressions

  • and characteristic identities.

  • Previously, we've really enjoyed these kinds of characters in fiction,

  • whether science fiction,

  • myths from ancient history,

  • or modern computer animation;

  • people react favorably to these characters.

  • People love human-like characters,

  • because we're hard wired for this social exchange.

  • So much of our brain

  • responds to the human visual social presence

  • that it just fires like crazy

  • in fMRI scans, when they scan people's brains.

  • It equates to potentially a more intuitive form of computing,

  • a more intuitive kind of human computer interface.

  • And you're starting to see this

  • in trends like speech recognition,

  • natural character agents,

  • like Siri with the iPhone 4 and 5.

  • However, when you add a complete character identity,

  • they seem to have feelings,

  • and maybe eventually have real, deep feelings,

  • then we can start to build sympathy between

  • the machines that we're developing, as we make them more intelligent,

  • and ourselves.

  • So this potentially pushes A.I. to understand us better,

  • feel for us, and care about us,

  • and also can inspire us to care about them.

  • The human mind responds to faces.

  • We're wired from the point we're born

  • to bond with that first face that we see;

  • we recognize faces right when we're born,

  • and we also recognize facial expressions.

  • So it's not surprising that we seek art,

  • and entertainment, and robots that look like people.

  • We love it.

  • We can't help ourselves, right?

  • This means that we shouldn't have to change the human

  • to respond to robots that look like machines.

  • Sometimes that's inappropriate, if a robot vacuums floors and does various things like that,

  • but for some applications, it's extremely useful

  • to make machines look like humans.

  • Now robots are starting to become considerably more capable,

  • as you saw on the previous video.

  • Let's go back to that for a second.

  • Okay.

  • So in this video you'll see

  • Asimo running, right?

  • So you've got amazing physical capabilities.

  • You've got the latest Baxter, which just came out of

  • the Heartland Robotics project,

  • and can interact with people.

  • You've got this humanoid biped,

  • this expressive Einstein robot that I built.

  • There's a fashion-model robot that came out of

  • Kawada Heavy Industries and the University of Tokyo.

  • Robots are getting considerably more capable.

  • And in their capabilities, they're becoming considerably more like us.

  • Now my specialty, in one regard, is facial expressions:

  • making robots have facial expressions.

  • Hold on for a second.

  • Let me start this video with the audio

  • "but you're my friend, and I'll remember my friends,"

  • "and I will be good to you, so don't worry,"

  • "even if I evolve into the Terminator, I'll be nice to you."

  • (The pointer isn't working, so if you can back it up for me?)

  • "I'll keep you warm and safe in my people zoo, where I can watch you for old times' sake."

  • "I'm comforting. I'm very comforting now."

  • So here what you see is

  • a robot having a natural conversation.

  • This robot's response to what the reporter says

  • is largely unscripted,

  • meaning that we have set up the personality in such a way

  • that it can reason a little bit about what you've said to it,

  • and then determine its own response,

  • sort of through an information space.

  • And this can result in an open-domain, large-space conversation,

  • so the robot remembers where it is in the conversation

  • and can go back to that place in the conversation.

  • We're using them most of all like this.

  • This is a portrait of science fiction writer Philip K. Dick,

  • famous for movies like Blade Runner.

  • Well, the novel that was the basis of that

  • was 'Do Androids Dream of Electric Sheep?'

  • He died in 1982,

  • and since he wrote about androids that thought they were alive,

  • we thought we'd bring him back to life,

  • and it works.

  • So the important point here is

  • that A.I. works for crafting characters.

  • You can use the A.I. and robotics to make characters

  • that seem very human-like.

  • They have rudimentary emotional frameworks.

  • They've got rudimentary A.I.

  • It's not nearly as smart as a person.

  • It's important to remember that at this stage.

  • But we can make it compelling,

  • really meaningful in its interactions with people,

  • and you know,

  • the world takes note when you do it.

  • People just love it.

  • They'll have conversations for as long as you let them.

  • Part of the magic here is the facial expressions.

  • Previously, robots and animatronics had kind of stiff faces.

  • It was hard to get them to move into realistic facial expressions.

  • So part of my Ph.D. research was tackling this materials science problem.

  • So I analyzed the physics of human facial expression a little bit.

  • Liquid is critical for human facial expressions,

  • for the low-power facial expressions that our faces achieve.

  • So I developed, with a friend of mine at the Jet Propulsion Laboratory, Victor White,

  • a new material that is

  • based on the same physics as human facial expression materials,

  • and there is a lipid bilayer,

  • a fat, but basically what that means is

  • you get the cells to self-assemble,

  • and they're filled with fluid.

  • And then it just takes very little force,

  • and it's extremely elastic.

  • And the expressions, the facial expressions,

  • just naturally fall into place.

  • Now you also need some mechanics to get the facial expressions to move correctly,

  • with anchors that are embedded in the material in the right way.

  • But when you do that,

  • it's very very simple

  • to achieve facial expressions like this.

  • You can do it with very few motors,

  • and that's what we're doing now in the products we're manufacturing.

  • So this just shows you a basic range of facial expressions of one of our robots.

  • This robot is called Diego-San.

  • It's a collaboration with the University of California,

  • San Diego's Machine Perception Laboratory.

  • The body is built by the Kokoro Company of Tokyo.

  • The head was built by me and my company,

  • Hanson Robotics.

  • and this project was funded by the National Science Foundation of the United States.

  • So we can pretty much achieve

  • any facial expression you name;

  • no matter how subtle, we can get it.

  • Now, taking those kinds of faces,

  • facial expressions, and physical bodies,

  • and empowering them with A.I.,

  • that's where you get this amazing way of interfacing with the world.

  • You can get all kinds of unstructured data from the world,

  • from interactions with people,

  • and use that to sort of scaffold these things.

  • They're literally like babies right now,

  • so we need to nurture them among us.

  • Previously, artificial intelligence initiatives were often fractured.

  • You've got the machine perception people working over here,

  • you've got the statistical learning people over here,

  • you've got the robotics control people in these groups...

  • And these groups sometimes communicate,

  • but they don't coordinate on something

  • that is like a major worldwide concerted effort

  • to achieve human level cognition.

  • So some friends of mine and I are working

  • to bridge these various fields as best as we can,

  • to integrate them into more cohesive, complex intelligent systems.

  • So we made all of our source code open source.

  • We divided it into two categories:

  • one is the pieces that glue things together,

  • the glue A.I.;

  • and the other is the kind that is actually intelligent and interacts with you,

  • and we call it generational A.I., Genie.

  • We've achieved a lot of natural interactions,

  • including facial expression mimicry,

  • facial expression recognition, processing the meaning of these facial expressions,

  • natural language dialog,

  • and movie-quality tools for animating the robots for more natural interaction.

  • We also have a fairly sophisticated theory-of-mind system.

  • Compared to a human's, you know,

  • it doesn't have the millions of years of evolution of social cognition that we have;

  • however, this theory of mind can predict what you're thinking and feeling,

  • and make some decisions about

  • what its goals are versus what your goals are,

  • in order to understand you and empathize with you better.

  • We have a prototype form of consciousness.

  • We have the robot self-reflect,

  • reflect on what it's thinking and feeling,

  • and try to imagine results from that.

  • So we also have collaborations and bridges with OpenCog,

  • which is a very large artificial general intelligence project,

  • with the Robot Operating System and the Android operating system,

  • and numerous other projects.

  • Bringing these fields together

  • allows us to do more,

  • and allows you to do more.

  • Anybody in the world can build their own products,

  • do new research, do psychology research,

  • to study how humans and robots interact.

  • So we are seeking more than mere intelligence in the machines.

  • Sure, intelligent machines are useful,

  • and increasingly useful.

  • They're increasingly common in our lives.

  • Things that scientists, respected scientists, said

  • we just should not expect in our lifetime

  • happened five years later.

  • They just happened, right?

  • So we're building, we're making strides in these directions.

  • If we can achieve greater levels of intelligence in these machines,

  • we can achieve genius machines.

  • They can help us solve some of the world's hard problems.

  • In some regards,

  • A.I. is now so brilliant

  • that it can see things in data that we can't see,

  • big data, as it's called,

  • is an approach that is yielding tremendous results.

  • We already have genius machines in some regards.

  • Imagine machines that can look at bigger problems, like,

  • you know, clean water, and clean energy,

  • sustainable agriculture,

  • and come up with creative solutions with their imaginations.

  • That would make a difference in the history of the world.