
  • >> Narrator: Live from Las Vegas, it's theCUBE

  • covering AWS re:Invent 2017.

  • Presented by AWS, Intel,

  • and our ecosystem of partners.

  • >> Hey, welcome back everyone.

  • We are here live at AWS re:Invent in Las Vegas.

  • 45,000 people are here inside

  • the Sands Convention Center at the Venetian,

  • the Palazzo, and theCUBE is here

  • >> Offscreen: I don't have an earpiece, by the way.

  • >> for the fifth straight year,

  • and we're excited to be here,

  • and I wanna say it's our fifth year, we've got two sets,

  • and I wanna thank Intel for their sponsorship,

  • and of course our next guest is from Intel.

  • Scott Macepole, director

  • of the CTO's office at Intel PSG.

  • Welcome to theCUBE. >> Thank you.

  • >> Thanks for coming on.

  • So, we've had a lot of Intel guests on,

  • a lot of great guests from customers of Amazon,

  • Amazon executives, Andy Jassy coming on tomorrow.

  • The big story is all this acceleration

  • of software development.

  • >> Scott: Right.

  • >> You guys in the FPGA group within Intel

  • are doing acceleration at a whole other level.

  • 'Cause these clouds have data centers,

  • they have to power the machines

  • even though it's going serverless.

  • What's going on with FPGAs,

  • and how does that relate to the cloud world?

  • >> Well, FPGAs I think have a unique place in the cloud.

  • They're used in a number of different areas,

  • and I think the great thing about them

  • is they're inherently parallel.

  • So you know, they're programmable hardware,

  • so instead of something like a GPU

  • or a purpose-built accelerator,

  • you can make them do a whole bunch of different things,

  • so they can do compute acceleration,

  • they can do network acceleration,

  • and they can do those at the same time.

  • They can also do things like machine learning,

  • and there's structures built inside of them

  • that really help them achieve all of those tasks.

  • >> Why has it picked up lately?

  • Because what are they doing differently now with FPGAs

  • than they were before?

  • Because there's more talk of it now than ever.

  • >> You know, I mean, I think it's just finally

  • come to a confluence where the programmability

  • is really needed.

  • It's very difficult to actually

  • create customized chips for specific markets,

  • and it takes a long time to actually go do that.

  • So by the time you actually create this chip,

  • you may have not had the right solution.

  • FPGAs are unique in that they're programmable,

  • and you can actually create the solution on the fly,

  • and if the solution's not correct

  • you can go and you can actually change that,

  • and they're actually pretty performant now.

  • So the performance has continued to increase

  • generation to generation,

  • and I think that's really what sets them apart.

  • >> So what's the relationship with Amazon?

  • Because now I'm kinda connecting the dots in my head.

  • Amazon's running full speed ahead.

  • >> Scott: Yeah.

  • >> And they're moving fast, I mean thousands of services.

  • Do FPGAs give you guys faster time to market

  • when they do joint designs with Intel?

  • And how does your relationship with Amazon

  • connect on all this?

  • >> Absolutely, we have a number of relationships with Amazon,

  • clearly the Xeon processors being one of them.

  • The FPGAs are something that we continue to

  • try to work with them on,

  • but we're also in a number of their other applications,

  • such as Alexa, and there are actually technologies

  • within Alexa that we could take

  • and implement either in Xeon CPUs or actually in FPGAs

  • to further accelerate those,

  • so a lot of the speech processing,

  • a lot of the AI that's behind that,

  • and that's something that's not very prevalent now,

  • but I think it'll be in the future.

  • >> So, all that speech stuff matters for you guys, right?

  • That helps you guys, the speech,

  • all the voice stuff that's happening,

  • and the Alexa news, machine learning.

  • >> Right.

  • >> That's good for you, right?

  • >> It's very good, and it's actually,

  • it's really in the FPGA sweet spot.

  • There's a lot of structures within the FPGAs

  • that make them a lot better for AI than a GPU.

  • So for instance, they have a lot of memory

  • on the inside of the device,

  • and you can actually do the compute and the memory

  • right next to where it needs to be,

  • and that's actually very important,

  • because you want the latency to be very low

  • so that you can process these things very quickly.

  • And there's just a phenomenal amount of bandwidth

  • inside of an FPGA today.

  • There's over 60 terabytes a second of bandwidth

  • in our mid-range Stratix 10 device.

  • And when you couple that together

  • with the unique math capabilities,

  • you can really build exactly what you want.

  • So when you look at GPUs, they're kinda limited

  • to double precision floating point,

  • single precision, or integer.

  • The FPGAs can do all of those and more,

  • and you can actually custom build

  • your mathematical path to what you need,

  • save power, be more efficient,

  • and lower the latency.

  • So...
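
To make that "custom math path" point concrete, here is a minimal, hypothetical C sketch. The Q1.7 format, the four-element dot product, and all names are illustrative assumptions rather than anything Intel ships; the idea is simply that an FPGA datapath can be built exactly as wide as the problem needs, instead of a GPU's fixed float/integer formats.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical illustration: an 8-bit Q1.7 fixed-point multiply-accumulate.
   On an FPGA, a datapath this narrow synthesizes to far less logic (and
   power) than a full single/double precision floating-point unit; the
   width is chosen to fit the problem rather than a fixed GPU format. */
typedef int8_t  q1_7_t;  /* 1 sign bit, 7 fractional bits: value = raw/128 */
typedef int32_t acc_t;   /* wide accumulator so sums cannot overflow */

static acc_t mac_q1_7(acc_t acc, q1_7_t a, q1_7_t b) {
    /* product of two Q1.7 values is Q2.14; accumulate at full width */
    return acc + (acc_t)a * (acc_t)b;
}

int main(void) {
    q1_7_t w[4] = {64, -32, 16, 8};     /* weights 0.5, -0.25, 0.125, 0.0625 */
    q1_7_t x[4] = {127, 127, 127, 127}; /* inputs, each ~0.992 */
    acc_t acc = 0;
    for (int i = 0; i < 4; i++)
        acc = mac_q1_7(acc, w[i], x[i]);
    printf("dot product ~= %f\n", acc / 16384.0); /* Q2.14 back to real */
    return 0;
}
```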

  • >> So Andy Jassy talked about how

  • this is a builder's conference.

  • Giving developers the tools

  • they need to create amazing things.

  • One of the big announcements was

  • the bare metal servers from AWS.

  • >> Scott: Yeah.

  • >> How do you see something like an FPGA

  • playing in a service like that?

  • >> Well, the FPGAs could be used to help

  • provide security for that.

  • They could obviously be used

  • to help do some of the network processing as well.

  • In addition, they could be used

  • in a lot of the classical modes,

  • like an attached solution

  • for pure acceleration.

  • So just because it's bare metal

  • doesn't mean it can't be bare metal

  • with an FPGA to do acceleration.

  • >> And then, let's talk about some of the...

  • You guys, FPGAs are pretty big in the networking space.

  • >> Scott: Yeah.

  • >> Let's talk about some of the surrounding

  • Intel technologies around FPGAs.

  • How are you guys enabling your partners,

  • network partners, to take advantage of X86, Xeon, FPGAs,

  • and accelerating networking services

  • inside of a solution like Amazon?

  • >> We have a number of solutions that we're developing,

  • both with partners and ourselves,

  • to attach to our NICs, and other folks' NICs,

  • to help accelerate those.

  • We've also released what's called the acceleration stack,

  • and what that's about is really

  • just kinda lowering the barrier of entry for FPGAs,

  • and it has actually a driver solution

  • that goes with it as well, it's called OPAE,

  • and what that driver solution does,

  • it actually creates kind of a containerized environment

  • with an open source software driver

  • so that it just really helps remove the barrier of,

  • you know, you have this FPGA next to a CPU.

  • How do I talk to it?

  • How can we connect to it with our software?

  • And so we're trying to make all of this a lot simpler,

  • and then we're making it all open

  • so that everybody can contribute

  • and that the market can grow faster.
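
Since OPAE is published as open source, here is a rough sketch, modeled on the public OPAE C samples, of the "how do I talk to it?" flow Scott describes: host software enumerates an accelerator through the driver stack and opens a handle to it. Treat the exact calls as assumptions to verify against the OPAE docs; error handling is trimmed for brevity.

```c
#include <stdio.h>
#include <opae/fpga.h>  /* open-source OPAE C API */

int main(void) {
    fpga_properties filter = NULL;
    fpga_token      token;
    fpga_handle     handle;
    uint32_t        matches = 0;

    /* build a filter that matches accelerator function units */
    fpgaGetProperties(NULL, &filter);
    fpgaPropertiesSetObjectType(filter, FPGA_ACCELERATOR);

    /* discover an accelerator behind the driver stack */
    fpgaEnumerate(&filter, 1, &token, 1, &matches);
    if (matches == 0) {
        fprintf(stderr, "no accelerator found\n");
        return 1;
    }

    /* open it; from here the app can map MMIO and share buffers */
    fpgaOpen(token, &handle, 0);
    /* ... drive the accelerator ... */
    fpgaClose(handle);

    fpgaDestroyToken(&token);
    fpgaDestroyProperties(&filter);
    return 0;
}
```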

  • >> Yeah, and let's talk about ecosystem around data,

  • the telemetry data coming off of systems.

  • A lot of developers want as much telemetry data,

  • even from AWS, as possible.

  • >> Scott: Yeah.

  • >> Are you guys looking to expose

  • any of that to developers?

  • >> It's always something under consideration,

  • and one of the things that FPGAs are really good at

  • is that you can kinda put them towards the edge

  • so that they can actually process the data

  • so that you don't have to dump

  • the full stream of data that gets generated

  • down off to some other processing vehicle, right?

  • So you can actually do a ton of the processing

  • and then send limited packets off of that.
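
A hedged sketch of that edge pattern in plain C, standing in for FPGA logic (the struct, window size, and field names are invented for illustration): a window of raw samples is reduced on-device to one small summary record, so only the summary travels upstream instead of the full stream.

```c
#include <stdio.h>

/* Reduce a window of raw samples to one small summary record, the
   edge-side data reduction described above. Everything here is an
   illustrative stand-in for what would be FPGA pipeline logic. */
typedef struct { float min, max, mean; } summary_t;

static summary_t summarize(const float *samples, int n) {
    summary_t s = { samples[0], samples[0], 0.0f };
    for (int i = 0; i < n; i++) {
        if (samples[i] < s.min) s.min = samples[i];
        if (samples[i] > s.max) s.max = samples[i];
        s.mean += samples[i];
    }
    s.mean /= (float)n;
    return s;  /* one struct goes upstream instead of n raw samples */
}

int main(void) {
    float window[8] = {1.0f, 1.2f, 0.9f, 1.1f, 5.0f, 1.0f, 1.05f, 0.95f};
    summary_t s = summarize(window, 8);
    printf("min=%.2f max=%.2f mean=%.2f\n", s.min, s.max, s.mean);
    return 0;
}
```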

  • >> So we looked at the camera today,

  • super small device doing some really amazing things,

  • how do FPGAs play a role in that, the IoT?

  • >> FPGAs are great for image processing.

  • They can do that actually much quicker

  • than most other things.

  • When you start listening,

  • or reading a little bit about AI,

  • you'll see that a lot of times

  • when you're processing images,

  • you'll have to take a whole batch of them

  • for GPUs to be efficient.

  • FPGAs can operate down at a batch size of one,

  • so they can respond very quickly.

  • They can work on individual images,

  • and again, they can actually do it

  • not just efficiently in terms of

  • the amount of hardware that you implement,

  • but efficiently in the power that's required to go do that.

  • >> So when we look at advanced IoT use cases,

  • what are some of the things that end-user customers

  • will be able to do potentially with FPGAs out to the edge,

  • of course less data, less power

  • needed to go back to the cloud, but practically,

  • what are some of the business outcomes

  • from using FPGAs out at the edge?

  • >> You know, there's a number of different applications,

  • you know, for the edge.

  • If you go back to Alexa,

  • there's a lot of processing smarts

  • that actually go on there.

  • This is an example where the FPGA

  • could actually be used right next to the Xeons

  • to further accelerate some of the speech,

  • and that's stuff that we're looking at now.

  • >> What's the number one use case that you're seeing

  • that people could relate to?

  • Is it Alexa? Is it the video--

  • >> For the edge, or?

  • >> Host: For FPGAs, the value of accelerating.

  • >> For FPGAs, I mean, while there's usage

  • well beyond the data center, you know.

  • There's the classic, what we would call wireline,

  • where it's used in everything today.

  • You know, if you're making a cellphone call,

  • it likely goes through an FPGA at some point.

  • In terms of data center, I think

  • where it's really being used today,

  • there's been a couple of very public announcements.

  • Obviously in network processing

  • in some of the top cloud providers, as well as AI.

  • So, you know, and I think a lot of people

  • were surprised by some of those announcements,

  • but as people look into them a little further,

  • I think they'll see that there's a lot of merit to that.

  • >> The devices get smaller and faster

  • and just the DeepLens device has got a graphics engine

  • that would've been on a mainframe a few years ago.

  • I mean, it's huge processing power.

  • >> Yeah.

  • >> You guys accelerate that, right?

  • I mean, is that a direction?

  • What is the future direction for you guys?

  • What's the future look like for FPGAs?

  • >> It's fully programmable, so, you know,

  • it's really limited by what we and our customers

  • really wanna go invest in.

  • You know, one of the other things

  • that we're trying to do to make FPGAs more usable

  • is remove the kind of barrier where

  • people traditionally do RTL, if you're familiar with that,

  • they actually do the design,

  • and really make it a lot more friendly

  • for software developers, so that they can write things

  • in C or OpenCL, and that application

  • will actually end up on the inside of the FPGA

  • using some of these other frameworks that I talked about,

  • the acceleration stack.

  • So they don't have to really go

  • and build all the guts of the FPGA,

  • they just focus on their application,

  • you have the FPGA here whether it's attached to the network,

  • coherently attached to a processor,

  • or next to a processor on PCI Express,

  • all of those can be supported,

  • and there's a nice software model

  • to help you do all that development.

  • >> So you wanna make it easy for developers.

  • >> Scott: We wanna make it very easy.

  • >> What specifically do you have for them right now?

  • >> We have what we call the DLA framework,

  • the deep learning accelerator framework that we released.

  • As I said before, we have the acceleration stack,

  • we have OPAE, which is the driver stack

  • that goes along with that,

  • as well as all our, what we call our

  • high-level synthesis tools, HLS,

  • and that supports C and OpenCL.

  • So it basically will take your classic software

  • and convert it into gates,

  • and help you get that on the FPGA.
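
For a flavor of what "take your classic software and convert it into gates" means, here is a toy OpenCL C kernel of the sort an FPGA OpenCL flow accepts; the compiler, not the developer, turns this into the pipeline inside the FPGA. It's a generic kernel, not taken from Intel's tools, and the real SDK adds FPGA-specific attributes and tuning pragmas omitted here.

```c
/* Toy OpenCL C kernel: element-wise vector add. A software developer
   writes this; the high-level synthesis / OpenCL toolchain builds the
   gates, memory interfaces, and pipelining behind it. */
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *c,
                      const int n)
{
    int i = get_global_id(0);
    if (i < n)
        c[i] = a[i] + b[i];
}
```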

  • >> Will bots be programming this soon?

  • Soon AI's going to be programming the FPGAs?

  • Software, programming software?

  • >> That might be a little bit of a stretch right now,

  • but you know, in the coming years perhaps.

  • >> Host: Scott, thanks for coming onto theCUBE,

  • really appreciate it.

  • >> Thanks for having me.

  • >> Scott Macepole, who is with Intel,

  • he's the director of the CTO's office at Intel PSG,

  • they make FPGAs, really instrumental devices in software

  • to help accelerate the chips, make it better for developers,

  • power your phone, Alexa, all the things

  • pretty much in our life.

  • Thanks for coming on theCUBE, appreciate it.

  • >> Thank you.

  • >> We'll be back with more live coverage.

  • 45,000 people here in Las Vegas, it's crazy.

  • It's Amazon Web Services re:Invent, we'll be right back.

  • (soft electronic music)
