(bright music)

>> Narrator: Live from Austin, Texas, it's theCUBE, covering South by Southwest 2017. Brought to you by Intel. Now, here's John Furrier.

>> We're here live at South by Southwest in Austin, Texas. SiliconANGLE, theCUBE, our broadcast, we go out and extract the signal from the noise. I'm John Furrier, and I'm here with Naveen Rao, the vice president and general manager of the Artificial Intelligence Solutions Group at Intel. Welcome to theCUBE.

>> Thank you, yeah.

>> So we're here, big crowd here at Intel, at the Intel AI Lounge. Okay, so that's your wheelhouse. You're the general manager of AI solutions.

>> Naveen: That's right.

>> What is AI? (laughs) I mean...

>> AI has been redefined a few times through history. Today, AI generally means applied machine learning: basically, ways to find useful structure in data so you can do something with it. It's a tool, really, more than anything else.

>> So obviously AI is a mental model; people can roughly understand what's going on with software. Machine learning and IoT are hot areas in the industry, but this really points to a future world where software tackles new problems at scale. So cloud computing, and what you guys are doing with chips and software, has now created a scale dynamic. Similar to Moore's Law, though Moore's Law has run its course for devices. You're starting to see software impact society. So what are some of those game-changing impacts that you see and that you're looking at at Intel?

>> There are many kinds of thought labor that many of us would characterize as drudgery. For instance, if I'm an insurance company and I want to assess the risk in 10 million pages of text, I can't do that very easily. I have to have a team of analysts run through it and write summaries. These are the kinds of problems we can start to attack. So the way I always look at it is: what a bulldozer was to physical labor, AI is to data.
The same goes for thought labor: we can really get through much more of it and use more data to make our decisions better.

>> So what are the big game-changing things going on that people can relate to? Obviously, autonomous vehicles is one that we can all look at and say, "Wow, that's mind-blowing." Smart cities is one where you say, "Oh my god, I'm a resident of a community. Do they have to re-do the roads? Who writes the software? Is there a budget for that?" Smart home: you see Alexa with Amazon, you see Google with their home product. Voice bots, voice interfaces. So the user interface is certainly changing. How is that impacting some of the things you guys are working on?

>> Well, on the user interface changing, I think that has an entire dynamic on how people use tools. The easier something is, the more people use it, the more pervasive it becomes, and we start discovering these emergent dynamics. Take the iPod, for instance. Storing music in digital form on small devices was around before the iPod, but when the iPod made it easy to use, that sort of gave rise to the smartphone. So I think we're going to start seeing some really interesting dynamics like that.

>> One of the things that I liked about this past week in San Francisco: Google had their big event, their cloud event, and they talked a lot about, and by the way, Intel was on stage with the new Xeon processor, up to 72 cores, amazing compute capabilities, but cloud computing does bring that scale together. But you start thinking about how data science has moved into using data, and now you have a tsunami of data, whether it's taking an analog view of the world and now having multiple datasets available. If you can connect the dots: okay, a lot of data, now you have a lot of data plus a lot of datasets, and you have almost unlimited compute capability. That starts to fill in the picture a little bit.
>> It does, but actually there's one thing missing from what you just described: our ability to scale data storage and data collection has outpaced our ability to compute on that data. Computing on it is typically some sort of quadratic function, something that grows faster than the amount of data itself. Our compute has really not caught up with that, and a lot of that has been a matter of focus. Computers were really built to automate streams of tasks, and this idea of going highly parallel and distributed is somewhat new. It's been around a long time in academic circles, but the real use cases to drive it home and build technologies around it are relatively new. So we're right now in the midst of transforming computer architecture into something that becomes a data inference machine: not just a way to automate compute tasks, but a way to actually find useful inferences in data.

>> And so machine learning is the hottest trend right now that kind of powers AI, but there's also some talk in leadership circles around learning machines, machines learning from the data they engage with, or however you want to call it, which brings out another question. How do you see that evolving? Because do we need algorithms to police the algorithms? Who teaches the algorithms? So you bring in this human aspect of it. How does the machine become a learning machine? Who teaches the machine? Is it... (laughs) I mean, it's crazy.

>> Let me answer that a little bit with a question. Do you have kids?

>> Yes, four.

>> Does anyone police you on raising your kids?

>> (laughs) Kind of, a little bit, but not much. They complain a lot.

>> I would argue that it's not so dissimilar. As a parent, your job is to expose them to the right kinds of biases, or unbiased data, as much as possible; experiences are exactly that. I think this idea of shepherding data is extremely important. And we've seen it in solutions that Google has brought out.
There are these little unexpected biases, and a lot of those come from just what's in the data. AI is no different from regular intelligence in that way: it's presented with certain data, it learns from that data, and its biases are formed that way. There's nothing inherent in the algorithm itself that causes the bias, other than the data.

>> So you're saying to me that exposing more data is actually probably a good thing?

>> It is. Exposing different kinds of data, diverse data. To give you an example from the biological world: children who have never seen people of different races tend to react to them as something new and unique, and they'll tease that out: "Oh, that's something different." Whereas children who are raised around people of many diverse face types, or whatever, are perfectly okay seeing new diverse face types. It's the same kind of thing in AI, right? It's going to home in on the trends that are common, and things that are outliers we're going to call out as such. So having good, balanced datasets, the way we collect that data, the way we sift through it and actually present it to an AI, is extremely important.

>> So one of the most exciting things that I like, obviously autonomous vehicles, I geek out on, because, not that I'm a car head, gear head or car buff, but you just look at what it encapsulates technically: a 5G overlay, sensors essentially all over the car, software powering it, now augmented reality, mixed reality coming into it, and you have an interface to consumers
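Editor's note: Rao's earlier point, that computing on data is often a quadratic function while data collection grows only linearly, can be made concrete with a minimal sketch. This is an editorial illustration, not something from the interview: relating every record in a dataset to every other record (pairwise comparison, a common pattern in similarity search and clustering) takes n*(n-1)/2 operations, so doubling the data roughly quadruples the compute.

```python
def pairwise_ops(n: int) -> int:
    """Comparisons needed to relate every record to every other record."""
    return n * (n - 1) // 2

# Doubling the dataset roughly quadruples the compute required.
for n in (1_000, 2_000, 4_000):
    print(f"{n:>5} records -> {pairwise_ops(n):>10,} comparisons")
```

Running the loop shows the gap Rao describes: storage needs grow with n, but this kind of computation grows with n squared, which is why architectures built for serial task automation struggle as datasets scale.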