
  • (wondrous electronic music)

  • - Welcome to CES 2021.

  • My name is Aaron Jefferson, VP of Product.

  • And I'm here today to talk to you about the need for speed,

  • enabling highway autonomy.

  • And I want to key in on that word speed.

  • Many of the systems today are limited

  • in terms of the ODD or the speed capability of the system,

  • whether it's Level 2 or Level 2+.

  • Those systems are limited in, say,

  • the sensing and perception technology needed

  • to enable you to take your hands off the wheel,

  • take your eyes off the road,

  • and really be able to drive autonomously.

  • First, let me give you a little background about Luminar,

  • some of our company highlights, that is.

  • So we came up with this really great

  • breakthrough technology for Lidar,

  • really giving us the long-distance,

  • high resolution sensing technology

  • to enable highway autonomy.

  • Our belief, and our vision,

  • is to make highway autonomy safe and ubiquitous,

  • safety being a key goal there,

  • and ubiquitous meaning available for all.

  • So if we look overall at

  • our integrated autonomous and safety solution,

  • we are the world's first autonomous solution

  • for series production.

  • There are a lot of different activities for development

  • and spinning systems

  • that aren't really designed

  • to go into a vehicle for production.

  • From day one, our concept was development activity

  • leading to a production platform,

  • which includes all the aspects

  • of what it takes to qualify for automotive grade.

  • We're not just built on sensor technology.

  • We're also built on our software.

  • What our sensor does is really unlock,

  • I would say, a clarity of the environment

  • and the environmental space

  • more than any other sensor that's out there.

  • But with that clarity,

  • you can take that information

  • and really develop software

  • and unlock features and capabilities

  • that have never been on the market before.

  • And the key thing is

  • you don't want to just provide the function or the feature,

  • but you also want to do it in a robust way

  • with great confidence.

  • And that's what our sensor delivers.

  • And then we have our top tier,

  • which is essentially the software stack

  • that enables the functionality.

  • Path planning and vehicle control.

  • We have the capability to develop that.

  • We're working with several partners to develop that.

  • And we work in conjunction with our tier one.

  • So it really just depends on the model

  • being used by our end customer,

  • but essentially, we are building,

  • and we are partnering,

  • and we're developing a solution

  • that goes from the sensing

  • all the way to the full stack development

  • and releasing the function at a vehicle level.

  • Now, if you take a look at the market,

  • we like to look at it in two halves, I would say.

  • One is ADAS.

  • ADAS includes everything from Level 0, basic AEB,

  • all the way up to say,

  • the systems you see really on the market today,

  • Level 2, Level 2+ systems

  • where the driver still has to remain engaged,

  • maybe can take their hands off,

  • but their eyes always have to remain on the road.

  • What we've noticed in this space,

  • and if you look at the numbers on the bottom left there,

  • it hasn't really been effective

  • at reducing the number of fatalities.

  • Still over a million fatalities globally,

  • still, I think, 50 million accidents.

  • And if you look at the United States in particular,

  • 35,000 deaths on the highway every year.

  • And that's a real problem.

  • And that number has remained stagnant for a while.

  • Our belief is that with Lidar,

  • you can greatly improve the effectiveness of these systems

  • and really start to hit those numbers and knock those down.

  • The other half of the market is autonomous driving.

  • And autonomous really means driver out of the loop.

  • Anything under that is automated or ADAS.

  • For autonomous driving,

  • we are really looking at unlocking the capability

  • of allowing the driver to disengage

  • and do that in a safe manner,

  • meaning the vehicle knows what to do.

  • It can detect objects or threats well ahead of its path.

  • It can understand the maneuverability

  • and what it can and can't do.

  • And it has a redundancy in the safety behind it

  • to really deliver the capability.

  • So when we look at those two halves,

  • we say, okay, there's a really great opportunity

  • to greatly improve active safety.

  • And we call that proactive safety.

  • And then we believe we are the ones

  • to unlock the autonomous space.

  • And we're really focused on Level 3 for the highway,

  • again, because once you get to the city

  • and you're talking Level 4 or 5,

  • and no steering wheels, and pedals, and brakes,

  • you get into a situation

  • where you have an extremely difficult ODD.

  • Over time, we believe that will be solved,

  • but the immediate benefit to an end consumer,

  • to someone buying a private vehicle is the highway autonomy.

  • So next, I would like to talk to you

  • about our industry leading performance.

  • We've talked a lot about unlocking autonomy

  • and it's important to understand

  • where Luminar is

  • in relation to the rest of the market.

  • If you look at the graph to the right,

  • this will really explain how we founded

  • and designed our architecture from the beginning.

  • We knew that the problem to solve

  • was high resolution at a long distance.

  • And we couldn't allow for that trade off.

  • If you look at the rest of the market,

  • and for a lot of reasons,

  • architecture selection, solid-state selection,

  • other things of that nature,

  • everybody has to make a trade off.

  • And what you don't see here is also a field of view.

  • There are some companies that can see really far,

  • say 150 meters.

  • However, their resolution is extremely low.

  • What that means is

  • if there are two objects out there 150 meters away,

  • they're gonna detect one

  • or they're going to highlight in their specifications

  • that they have a range detection of 150 meters.

  • But without the resolution,

  • it really doesn't make much of a difference.

  • The same is true the other way around.

  • If you were to say, hey,

  • I have the resolution, the high resolution.

  • I can detect two small objects separately from one another.

  • That's great.

  • But if you can only do that at 50 meters,

  • well, then I can do that with the radar.

  • So, do I need a Lidar in my system?

  • So, it's really important

  • that we provided the high resolution,

  • and the range, and also the field of view.
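
The range-versus-resolution trade-off the speaker describes can be made concrete with simple geometry: the lateral spacing between adjacent lidar samples grows linearly with range, so whether two objects can be separated depends on both angular resolution and distance. A minimal sketch in Python; the 0.05° and 0.3° resolution figures and the 0.5 m gap are illustrative assumptions, not specifications from the talk:

```python
import math

def sample_spacing_m(range_m: float, angular_res_deg: float) -> float:
    """Lateral distance between adjacent lidar samples at a given range."""
    return range_m * math.radians(angular_res_deg)

def can_separate(gap_m: float, range_m: float, angular_res_deg: float) -> bool:
    """Two objects are separable only if a sample can land between them."""
    return sample_spacing_m(range_m, angular_res_deg) < gap_m

# Illustrative numbers only: a 0.5 m gap between two objects at 150 m.
for res_deg in (0.05, 0.3):  # assumed angular resolutions, degrees per sample
    spacing = sample_spacing_m(150.0, res_deg)
    print(f"{res_deg}° -> {spacing:.2f} m between samples at 150 m, "
          f"separable: {can_separate(0.5, 150.0, res_deg)}")
```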

  • This just digs into that a little bit further.

  • I won't go into too much detail,

  • but essentially, if you look at the competition

  • and you look at what's required

  • to deliver into the market, right,

  • all these different factors play a part.

  • And if you're missing any of these,

  • it really does hinder your chances

  • at delivering the performance

  • and the, say, automotive-grade product

  • that the market needs.

  • And so this is just, again,

  • a deeper dive into that look.

  • Some of this, if you look at the competitors' side,

  • some of it's because of the selected 905 nm wavelength.

  • Some of it is because of the selected architecture.

  • And that really goes into why we selected what we did.

  • We built it from the standpoint

  • of what problem are we solving

  • and let's not make compromises around certain capability

  • that we know is gonna unlock the capability in the market.

  • Speaking of those architectures,

  • if you look at our system,

  • we really built things from the ground up

  • as was mentioned before.

  • If you look at our structure,

  • it is the scanning, the (indistinct), and the laser.

  • It's a very simple architecture,

  • again, that allows us to see

  • all the information that we need in a scene,

  • versus what we typically compare to

  • or have been compared to in the past

  • was a spinning Lidar,

  • which requires several lasers, several receivers.

  • And yeah, there's a ton of data there,

  • but there's a ton of cost associated with that.

  • And while that has developed some,

  • it still isn't an architecture

  • that will get you an affordable solution,

  • a robust solution in the market

  • from which you can easily create an automotive-grade product

  • and then sell into the market.

  • It's one of the main reasons

  • you really don't see any spinning Lidar

  • in any production vehicles.

  • And it's also where we understood there were limitations

  • and we wanted to make sure

  • that we set up our architecture to solve the problem,

  • but also think about scalability,

  • think about cost,

  • think about manufacturability and production.

  • And we've done a really good job with that.

  • We really love this slide.

  • This is one of our favorite slides from my product group.

  • It is requirements that matter.

  • And essentially, what we want to explain to everybody

  • is everything on this page matters.

  • If you drop one ball here,

  • you essentially limit the chances

  • of your product being effective

  • and going into production.

  • The top row is really around performance.

  • If you really want to bring that Level 3,

  • Level 4, hands off, eyes off value to the consumer,

  • you need that full top row.

  • Range, resolution, fidelity,

  • and the precision of the information,

  • the right field of view, the weather performance.

  • We need all of those pieces to work well.

  • What we're finding now is that the bottom row matters too:

  • when you really start

  • to get into discussions with customers

  • and you really start to explain to them what you have

  • and what you can deliver,

  • they also have concerns from the bottom row.

  • You know, how viable is your supply chain?

  • What's your manufacturing strategy?

  • Do you understand how to manufacture this

  • to where you can get yields that make sense

  • and that will allow everybody

  • to feel safe about their product being developed?

  • Is it a quality product?

  • Is it automotive grade?

  • You know, there are a lot of companies

  • that really focus on the performance

  • and a lot of companies fall short there,

  • and they're not even yet at the position

  • where they can think about the bottom row.

  • When you talk to the customers,

  • the premium OEMs, the volume OEMs,

  • every OEM

  • understands what it takes

  • to put a quality product into the market,

  • especially the automotive market.

  • That bottom row is just as important.

  • We were awarded a Volvo win.

  • And I want to be clear.

  • This Volvo win is a production award for SOP 2022.

  • This is not an experiment.

  • This is not a potential award.

  • This is a solid award.

  • And the nice thing about it is that it's with Volvo.

  • Volvo has a history

  • of releasing new technology into the market,

  • in ADAS and in other places.

  • And again, their DNA is safety.

  • And so for us,

  • it's the perfect partner to collaborate and work with,

  • co-develop,

  • and deliver some pretty sweet technology,

  • and deliver the Level 4 highway autonomy

  • that we believe our consumers want.

  • In addition to Volvo,

  • we still have our Daimler Truck Award,

  • which is our activity to really work with Daimler Truck,

  • and again, their AV partner, Torc,

  • and deliver automated driving functionality and capability.

  • We're able to come alongside them

  • and really unlock what's necessary

  • to deliver highway autonomy.

  • And then lastly is a commercial agreement with Mobileye.

  • And you know, this award

  • and this partnership is really important

  • because if you look at where Mobileye started,

  • it's a similar story in terms of a startup

  • growing and transitioning

  • into essentially an automotive supplier,

  • And the leading automotive supplier

  • for ADAS computer vision technology

  • with very unique hardware capability,

  • and really the heart

  • and the science behind what they're doing

  • with their computer vision algorithms.

  • Those mated together have really done an amazing job

  • in pushing the market forward,

  • and delivering ADAS,

  • and certain levels

  • of Level 2 automated highway driving function.

  • So to work alongside them,

  • to be recognized as the Lidar leader

  • and the chosen company to partner with,

  • to deliver the new capability for robotaxi is really great.

  • And we're looking forward to that partnership continuing on.

  • One unique thing about the market

  • is a lot of people talk about data and data collection.

  • And they really look to, say,

  • the robotaxi space

  • and a lot of the data being collected out in the market.

  • If you look at the robotaxi space,

  • there are a lot of big players,

  • a lot of money invested,

  • but those are dedicated routes.

  • There's only a certain amount of data

  • you can collect in those spaces.

  • You run the same routes,

  • you collect the same weather data,

  • you collect the same corner cases.

  • We believe that by 2025,

  • we could have a million vehicles on the road.

  • The nice thing about that iteration

  • is it doesn't take a change in hardware.

  • Our resolution, and our range,

  • and our capability today is gonna be good enough

  • for future upgrades of performance over time.

  • We developed a single product

  • that can stretch into each vertical.

  • So whether it's passenger car, commercial vehicle, robotaxi,

  • we are not changing our architecture for each customer.

  • We have developed a product that we feel

  • is a solution for the market

  • to unlock that long range high resolution capability,

  • to unlock Level 3, Level 4 autonomy.

  • So, let's talk about, you know,

  • a bit more specifically about the capability,

  • and the technology, and what we unlocked.

  • A dark tire on the road is one of the most widely used

  • use cases from our customers.

  • I want to be able to detect this dark object

  • 5% reflectivity, as far away as possible,

  • 200 meters, when I'm traveling at high speeds,

  • and understand that it's there,

  • it's in my lane, and I want to avoid it.
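
Basic kinematics shows why roughly 200 meters matters at highway speed. A rough sketch; the 1.5-second reaction budget and the 4 m/s² comfortable deceleration are illustrative assumptions, not figures from the talk:

```python
def required_detection_range_m(speed_kmh: float,
                               reaction_s: float = 1.5,
                               decel_mps2: float = 4.0) -> float:
    """Distance covered while reacting plus braking distance to a full stop."""
    v = speed_kmh / 3.6  # km/h -> m/s
    return v * reaction_s + v ** 2 / (2 * decel_mps2)

for speed in (100, 130):  # typical highway speeds, km/h
    print(f"{speed} km/h -> ~{required_detection_range_m(speed):.0f} m to stop")
```

Under these assumptions, 130 km/h already works out to roughly 215 meters, which is why a 200-meter detection range on a dark, low-reflectivity object is the benchmark the speaker keeps returning to.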

  • And this is just a tire.

  • It could include a tire with the actual wheel or rim,

  • but essentially,

  • it's: can you detect these small objects?

  • You know, we like to differentiate the fact

  • that we don't necessarily have to classify it as a tire.

  • We just have to understand what it is in terms of its size,

  • that it's in our way,

  • and is it a threat for our driving functionality?

  • If you look at what we've done

  • in terms of evaluating and analyzing things,

  • it really is a good measure of,

  • can I detect an object in time

  • for me to actually make a decision?

  • Is it drive-over-able?

  • Do I need to maneuver around it?

  • We also use, say, a wooden pallet.

  • So if a wooden pallet falls off a truck

  • and you can't see that,

  • or if it's on the road,

  • you want to be able to avoid that.

  • Well, if we look at our performance today,

  • we're probably detecting, say, a tire.

  • Around 200 meters is when we first detect the object

  • and we say, okay, there's something here we want to avoid.

  • And then when we really understand

  • and feel good about a target

  • is when we have six points on target.
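
A points-on-target criterion like this can be sanity-checked from angular resolution alone: the number of returns on a small object scales with its angular size divided by the sample spacing. A back-of-the-envelope sketch; the 0.07° x 0.03° resolution and the tire dimensions are assumed values for illustration, not Luminar specifications:

```python
import math

def points_on_target(width_m: float, height_m: float, range_m: float,
                     res_h_deg: float, res_v_deg: float) -> int:
    """Approximate lidar returns on a flat target facing the sensor."""
    ang_w = math.degrees(math.atan2(width_m, range_m))   # angular width
    ang_h = math.degrees(math.atan2(height_m, range_m))  # angular height
    return int(ang_w / res_h_deg) * int(ang_h / res_v_deg)

# Illustrative: a ~0.7 m wide, 0.2 m tall tire lying on the road,
# scanned at an assumed 0.07° x 0.03° resolution.
for rng in (100, 200, 300):
    print(f"{rng} m -> ~{points_on_target(0.7, 0.2, rng, 0.07, 0.03)} points")
```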

  • I'll go back a slide.

  • If you look at our nearest competitor

  • from what we're doing on Iris,

  • we are approximately 125 meters ahead

  • of anybody else on what they can detect

  • in terms of that small tire on the road.

  • That 125 meters alone is, I'd say,

  • enough of a gap to be a problem for others

  • to detect it in time and then do something about it.

  • That's what our customers want

  • and that's what we plan to deliver.

  • So another aspect of our design

  • and what we're seeing in the market is,

  • do I have to put the Lidar in the crown of the vehicle?

  • We chose and we worked very closely with Volvo

  • to basically establish the fact

  • that in the crown of the vehicle

  • is where you really want your Lidar

  • to be able to see as much of the environment as possible.

  • If you think about yourself as a driver,

  • you're in the position where you can see

  • a fair amount of degrees,

  • a field of view around yourself,

  • especially right in front of you.

  • And we're not lying down in the vehicle

  • with our head in the bumper or in the grill.

  • We really have a vantage point where we can see the road,

  • we can see crossing objects,

  • we can see the lanes in front of us,

  • the objects in front of us.

  • And that's what we felt we needed for Lidar

  • to get the most benefit out of it.

  • Here, we show a few of the trade-offs

  • and understand that, you know,

  • if we're trying to see obstacles on the road,

  • if we're trying to see trucks, cars,

  • motorcycles, things of that nature,

  • we can still do that.

  • We can do that in a grill at the same distance.

  • If you have a clear sight of a road ahead of you,

  • then you can see the same object

  • whether it's in a crown or the roof of the vehicle

  • as it is in the bumper of the vehicle

  • or in the grill of the vehicle.

  • However, your vantage point up high

  • does give you some advantages.

  • The first advantage is really the road

  • and drivable free space.

  • So, we essentially double the drivable free space

  • if we go from being in a grill

  • to being in the crown of the vehicle.

  • And what that really means is from your vantage point,

  • you can see more of the ground,

  • you have a better angle,

  • you can understand the road topology.

  • So, sinkholes, huge potholes, things of that nature,

  • you can understand what that road is.
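
The free-space advantage of a high mount is plain similar-triangles geometry: the higher the sensor sits, the shorter the stretch of road hidden behind any obstacle. A small sketch; the 0.6 m grill and 1.5 m crown mounting heights are assumed values for illustration:

```python
def hidden_road_m(sensor_h_m: float, obstacle_h_m: float,
                  obstacle_dist_m: float) -> float:
    """Length of road hidden behind an obstacle, by similar triangles:
    the ray from the sensor over the obstacle's top reaches the ground
    at x = d * H / (H - h), so the occluded stretch is x - d."""
    H, h, d = sensor_h_m, obstacle_h_m, obstacle_dist_m
    assert H > h, "sensor must sit above the obstacle"
    return d * h / (H - h)

# Illustrative: 0.3 m high debris 40 m ahead, with assumed mount heights.
for name, H in (("grill (~0.6 m)", 0.6), ("crown (~1.5 m)", 1.5)):
    print(f"{name}: {hidden_road_m(H, 0.3, 40.0):.0f} m of road occluded")
```

With these illustrative numbers, raising the sensor from grill to crown cuts the occluded stretch from 40 m to 10 m, which is consistent with the "double the drivable free space" direction of the claim above.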

  • And then in terms of free space,

  • which is really important

  • when you talk about highway autonomy,

  • especially Level 3,

  • if there's something that you want to avoid

  • and you want to safely pull over to the side of the road,

  • you actually need to know

  • that there's a side of the road to pull over onto.

  • And the longer you can see that,

  • the more important it is.

  • So, we really believe again,

  • that the vantage point provided,

  • as you can see from the data,

  • does extend the free space capability.

  • The other thing is the lane markers.

  • Lane markers are important

  • because I'd say it's one of the most neglected aspects

  • of automated driving that people don't think about.

  • Usually, people think about avoiding small objects.

  • But where most Level 2 systems fail today is

  • when they can't detect the lane,

  • they throw the control back over to the driver

  • and now you have to take back control of the wheel

  • or you have to be paying better attention

  • because either the camera was blinded

  • or the lanes were faint

  • and the confidence level was low.

  • What the Lidar does is really allow you

  • to detect those lane markings,

  • detect road edges really well.

  • And if you're low on the ground,

  • there's only so far you can see.

  • If you're high on the crown

  • or on the roof line of the vehicle,

  • then again, you have more of a vertical aspect

  • and you have a better vantage point

  • to be able to detect those lines.

  • So, we're talking from 75 meters of lane detection

  • up to 250 meters of lane detection.

  • And that is, again, that's significant

  • because if you want to identify an object

  • and classify it as in your lane or not,

  • you want to do that from as far away as possible,

  • so that if that object stops,

  • you don't stop inadvertently

  • when it's in an adjacent lane,

  • and if it is in your lane,

  • you appropriately stop.

  • So either way, we really feel like, again,

  • that is the correct vantage point.

  • However, we also understand that

  • there are cycle plans for vehicles

  • and being able to design a vehicle

  • with the Lidar in the crown

  • takes some decision making and some time.

  • So for those that require it in the grill,

  • we share our performance specs.

  • And again, what we are able to do

  • is still detect those small objects far away

  • such that in an automated driving scenario,

  • we still understand what's ahead of us

  • as far out as possible

  • and we can decide whether to drive over it or avoid it.

  • And again, that's the critical measure

  • that some of our customers expect from us.

  • So, I want to talk a little bit

  • about the point cloud itself

  • before I get into the perception software

  • and what the software can do.

  • Your perception software

  • is only as good as your point cloud

  • or your data you're getting from the Lidar.

  • So if you take a look at this image,

  • this is essentially an image on the highway

  • and it very clearly highlights a few things.

  • One, the vehicles ahead.

  • We put bounding boxes around them,

  • but that's based off of point cloud data.

  • It clearly identifies the lane markers

  • which helps us identify lanes,

  • the guard rails on the side of the highway,

  • and then also, the infrastructure around you,

  • whether it's the opposite road, brush, trees,

  • things of that nature.

  • So, it shows you the granularity

  • and the clarity of the data

  • that we get into our perception algorithms.

  • Another thing to point out here is

  • what can you see and how far can you see it?

  • So if you take a look at the first thing,

  • it's really what's the road topology?

  • What am I on?

  • What does that road look like?

  • And out to 75 meters, again,

  • this is at the crown of the vehicle.

  • Out to 75 meters,

  • you can really see the profile of the road,

  • understand exactly what it is, what it's doing,

  • curves and all different aspects of the road.

  • We have that data coming back to us.

  • And then if you say, okay, well, what's next?

  • Beyond that, you have, say, 150 meters

  • where you actually detect lanes.

  • We talked about, at the crown of the vehicle,

  • you can detect lanes out to 150 meters.

  • What that really does is allow you

  • to identify what lane you're in

  • as well as what lane those objects are in.

  • So all those boxes that you see there

  • and all those vehicles,

  • we can do a lane assignment for each of those

  • out to 150 meters,

  • mainly because we have the capability

  • to detect the lanes reliably

  • and confidently out to that range.

  • And then if you look at vehicles,

  • motorcycles, things of that nature,

  • we can see those out to 250 meters.

  • And it's not just the first detection,

  • hey, there might be something out there.

  • It is actually identified as a cluster of points

  • that equates to a vehicle,

  • whether it's a car, a motorcycle, a truck.

  • It's our ability to be able to see that far

  • and understand, again,

  • not necessarily the lane assignment,

  • but exactly what it is

  • and the fact that it's something that we want to track.

  • So once we know that there's an object out there

  • and we can understand,

  • and we'll talk about segmentation in a minute,

  • we can understand what it is,

  • then we can track it.

  • And we can basically check for it as a threat

  • as we're driving with it,

  • as we're going past it,

  • whatever the case may be.

  • But your point cloud is the foundation

  • for what you do in software.

  • Another aspect of our point cloud which is really neat

  • is the reflectance and the resolution of the reflectance.

  • Imagine that you're on the road,

  • your headlights go out and it's completely dark.

  • You can only see as far as your eyes

  • can see through the darkness.

  • We don't have a reflectivity measure that comes back to us,

  • but the Lidar does.

  • So the Lidar can basically see the scene

  • in the same way that we see with our eyes

  • but without the light.

  • And that's the reflectivity of all aspects of the road.

  • The road edge is different than the ground

  • is different than the grass

  • is different than the trees.

  • The lane markings, vehicles, license plates, tail lights.

  • So all these things give us,

  • I'd say, a third dimension of data

  • that we can then use to develop

  • and further develop our algorithms.
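
One common way to exploit a reflectance channel like this, sketched below, is as a simple filter: retroreflective lane paint returns far more energy than asphalt, so near-ground points above a reflectance threshold are good lane-marking candidates. The array layout, thresholds, and synthetic data are illustrative assumptions, not Luminar's pipeline:

```python
import numpy as np

def lane_marking_candidates(points: np.ndarray, reflectance: np.ndarray,
                            ground_z: float = 0.0, z_tol: float = 0.15,
                            min_refl: float = 0.25) -> np.ndarray:
    """Keep points that sit near the ground plane and are unusually bright.

    points      : (N, 3) x, y, z in the vehicle frame (assumed layout)
    reflectance : (N,) per-point reflectance, normalized to [0, 1]
    """
    near_ground = np.abs(points[:, 2] - ground_z) < z_tol
    bright = reflectance > min_refl
    return points[near_ground & bright]

# Illustrative usage on a synthetic frame of 10,000 random points.
rng = np.random.default_rng(0)
pts = rng.uniform([-5.0, 0.0, -0.2], [5.0, 250.0, 3.0], size=(10_000, 3))
refl = rng.uniform(0.0, 1.0, size=10_000)
print(lane_marking_candidates(pts, refl).shape)
```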

  • So if you look here,

  • you'll see that there are vehicles detected

  • on the right side or in front of the vehicle

  • where we're seeing tail lights and license plates.

  • We can see that.

  • We can see those vehicles

  • whether it's dark or whether it's daylight.

  • There's a reflectivity measure that comes back

  • and allows us to perform at an even better level

  • than, say, cameras,

  • where we're looking at 2D information

  • and are really dependent on a light source

  • to be able to detect objects.

  • What we really love about this aspect

  • is that it clearly shows our ability

  • to also perform in inclement weather or at nighttime.

  • So, it goes back to availability.

  • If it's dark and your lights aren't on

  • or say, you have a malfunction,

  • it's cloudy, if it's dark, rainy, foggy,

  • and we'll talk a little bit about that,

  • there's a reflectance measure

  • that helps us determine what's in front of us

  • and then helps us control the vehicle still.

  • So we're not limited in terms of availability

  • and we're not limited, say, by our visibility,

  • and what our vehicle performance can be.

  • We talked about weather a bit.

  • This is an image of penetrating rain.

  • And you see the rain level.

  • So this is a heavy rain that we're showing here.

  • The camera image is a bit skewed

  • because typically, you won't have water drops

  • on a camera that's behind the windscreen.

  • So you'll clean that off.

  • But still, we've all driven in heavy rain

  • where our visibility is limited

  • to maybe a few cars in front of us

  • or it's at least heavily restricted.

  • Whereas if you look at the Lidar,

  • the Lidar can see further out,

  • and we're at 1550 nm, so we're less affected by weather.

  • And we're able to see objects, obstacles, road edges, lanes,

  • all the things that are important

  • for us to continue driving.

  • Today, I think a lot of people

  • disengage systems when it's raining

  • or when they're not comfortable

  • because they don't trust the system that is driving them,

  • even for a Level 2 system.

  • Our goal is that we continue the same performance,

  • maybe a slower performance.

  • Maybe you want to degrade or slow down

  • because it's not the same safety level

  • to drive at 80 miles per hour

  • as it is at 40 miles per hour in rain.

  • So, we all acknowledge that.

  • But we still want to be able to detect

  • and understand our environment

  • and perform accordingly in an automated driving mode

  • or an autonomous driving mode.

  • The next thing we're looking at is penetrating snow.

  • And this is a really neat picture.

  • What's really unique about our system is that at 1550 nm,

  • with our high range and our high resolution,

  • what we're able to do is detect the snow

  • and not necessarily be distracted

  • or have our signal dispersed by the snow.

  • So we detect the snow where there is snow.

  • And then where there's not snow, we detect the object.

  • And so what you see here is an image of a snowy day

  • with the home behind that

  • and the trees and the brush there.

  • We can basically detect where the snow is falling.

  • You can see that snow falling there.

  • And then we detect the object behind it.

  • So it really does show you

  • that we're not worried about weather

  • degrading our performance in terms of our detection.
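
A common generic way to implement this kind of snow rejection, not necessarily what Luminar ships, is local-density outlier removal: falling snowflakes show up as isolated returns, while solid surfaces produce dense neighborhoods. A hedged sketch, with the neighbor count and distance threshold as assumed tuning values:

```python
import numpy as np
from scipy.spatial import cKDTree

def drop_sparse_returns(points: np.ndarray, k: int = 8,
                        max_mean_dist_m: float = 0.5) -> np.ndarray:
    """Keep points whose k nearest neighbors are close by.

    Airborne snowflakes tend to be isolated returns and fail this
    density test, while walls, vehicles, and trees form dense clusters.
    """
    dists, _ = cKDTree(points).query(points, k=k + 1)  # neighbor 0 is self
    return points[dists[:, 1:].mean(axis=1) < max_mean_dist_m]
```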

  • Of course, heavier snow,

  • you have more detection of snow,

  • but the goal is that we still

  • get enough points on the object,

  • whether it's a vehicle, or the lanes,

  • road edge, things of that nature,

  • and we can still perform

  • and deliver the autonomous driving function.

  • Another good example is fog.

  • I really like this one in particular because for fog,

  • there are varying levels of fog, of course,

  • but the moderate fog is where

  • you have a little bit of confidence in what you see,

  • but you're not really sure about what's out there

  • and you are a little bit more hesitant,

  • maybe moving slower,

  • but not as restricted as in a heavy fog situation.

  • So in this instance, even with the camera,

  • you can see the vehicle off to the left.

  • You can see some semblance of a lane edge.

  • You may or may not be able to see road markings.

  • But you definitely usually can't see

  • the divided road sign.

  • And you just have a limitation

  • in terms of your range and your distance.

  • But again, with Lidar, we can see the image.

  • We can see the information,

  • be it the Lidar point cloud directly,

  • be it the reflectance.

  • And we get the right amount of data

  • that says, hey, this is a road.

  • These are the boundaries.

  • Here are the poles.

  • Here's a car that's parked there.

  • We can understand whether it's moving or not.

  • And you can also see the divided road sign,

  • so you know that there's a split coming up.

  • So what's really important is that again,

  • in a moderate scenario,

  • you might have some confidence,

  • but the Lidar has the same amount of confidence.

  • The next image is of, say, penetrating heavy fog.

  • And for heavy fog,

  • if you've ever been in this driving scenario,

  • it makes you a little bit nervous

  • mainly because you really can't see what's in front of you.

  • What's worse is you don't know

  • what the objects in front of you are doing.

  • Are they braking?

  • Are they slowing down?

  • Is somebody getting over in your lane?

  • Things of that nature.

  • For me, it's this scenario, and, say, dust,

  • a heavy dust storm

  • where you really can't see what's in front of you

  • and you have no idea if somebody is braking

  • because they're nervous.

  • You know, those scenarios

  • where you're almost completely blind

  • but you can't stop

  • because you know you have people behind you

  • and they can't see either,

  • so you need to proceed forward.

  • This is where I think our Lidar really shines

  • in terms of being able to see the lane markings

  • and the lanes out to a certain distance

  • to be able to detect objects.

  • If you look, there's a car that's essentially detected

  • because of the signal returns from the Lidar at 80 meters.

  • And so, it's really important that,

  • you know, we would all want to degrade our speed.

  • We would all want to be more cautious,

  • but you definitely want to be able to see

  • beyond what your eyes can see

  • and beyond what the camera can see.

  • I want to know what's going on.

  • I want to know if a vehicle is stopped ahead.

  • I want to know if there's road ahead

  • or something has changed.

  • And so what's really impressive is,

  • and these are real world scenarios.

  • These aren't things that don't happen.

  • And ironically, with the camera,

  • it would basically say, hey, system not available.

  • Driver, you must take control.

  • And if you've ever driven, say,

  • East Coast, Virginia, things of that nature,

  • you can go up a hill and be in fog

  • and then come down and be out of fog.

  • So, that toggling back and forth of the system,

  • system enabled, system disabled,

  • we don't want that.

  • We really want our Lidar to be able to function

  • no matter the state of the environment.

  • And this is a good example

  • of even in a heavy fog situation,

  • we can understand what's ahead of us

  • and we can basically enable the vehicle to act accordingly.

  • Now, we want to talk a little bit about perception.

  • So, that's the point cloud information.

  • We get really solid point cloud information,

  • which then helps us deliver the function.

  • And the perception here is understanding a complete scene.

  • This is like a first understanding of what is all around me.

  • What am I looking at?

  • What does the system understand?

  • What do I need to do?

  • And what we do is called semantic segmentation,

  • which is basically what our mind does anyway.

  • We identify this is the road,

  • that's a rail, up there is a bridge,

  • that is a vehicle, this is a motorcycle.

  • It's not just necessarily classification,

  • but it's the separation of the different objects.

  • Even if you see a pedestrian in front of a vehicle,

  • you understand that there's a pedestrian there

  • and a vehicle,

  • and you don't miss the pedestrian.

  • That's what semantic segmentation is,

  • developing a complete understanding of the scene

  • and the environment around you,

  • so then you know what to do with that data.
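
As a rough illustration of the separation step described above: given per-point class labels (which would come from a trained network, not shown here), same-label points can be grouped into spatially connected instances, so a pedestrian standing in front of a vehicle stays a separate object. All parameters and the interface are illustrative assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def split_instances(points: np.ndarray, labels: np.ndarray,
                    target_label: int, radius_m: float = 0.5) -> list:
    """Group same-label points into spatially connected instances."""
    pts = points[labels == target_label]
    # Connect every pair of points closer than radius_m, then take
    # connected components of that graph as separate object instances.
    pairs = cKDTree(pts).query_pairs(radius_m, output_type="ndarray")
    graph = csr_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])),
                       shape=(len(pts), len(pts)))
    n_inst, inst_id = connected_components(graph, directed=False)
    return [pts[inst_id == i] for i in range(n_inst)]
```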

  • The other thing about Lidar that is different from camera

  • is we are basically getting information in 3D,

  • so we know the size of an object, its angular position,

  • and we know the range, or the distance, of that object,

  • which is extremely important.

  • It's not just that it's there

  • and we're looking for an image to get bigger over time

  • or smaller over time.

  • We're measuring the exact distance from that object

  • so we know: are we approaching it, is it slowing down,

  • is it moving away and speeding up?

  • Typical systems today use a combination of camera

  • and radar fusion to do that.

  • We can do that all in one sensor.

  • And that 3D data gives us all the information

  • we need to be able to unlock the perception

  • and the control functionality for autonomous driving.
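
Because range is measured directly, relative speed falls out of simple frame-to-frame differencing, with no need to infer scale change from an image. A minimal sketch; the 10 Hz frame interval and the sample ranges are assumed values for illustration:

```python
def closing_speed_mps(range_prev_m: float, range_now_m: float,
                      frame_dt_s: float = 0.1) -> float:
    """Positive when the gap is shrinking, i.e. we are approaching."""
    return (range_prev_m - range_now_m) / frame_dt_s

# Illustrative: a lead vehicle at 120.0 m, then 119.2 m one frame later.
v_rel = closing_speed_mps(120.0, 119.2)
print(f"closing at {v_rel:.1f} m/s ({v_rel * 3.6:.0f} km/h)")
```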

  • The other aspect of our perception software

  • is being able to detect and classify objects.

  • This becomes very important

  • because you want to understand,

  • again, we talked about semantic segmentation

  • and understanding the entire scene.

  • Now, it is: what is it

  • and do I need to pay attention to it or not?

  • Do I care about it

  • or do I allow my algorithm to go look

  • and worry about other things?

  • So it's the detection of vehicles, pedestrians,

  • a road edge, fixed objects, crosswalks,

  • things of that nature

  • where you can actually really detect and classify,

  • and then put, say, a confidence level

  • on whether or not you need to do something about it.

  • Again, this is only unlocked

  • and enabled by the point cloud that we provide.

  • It's the same thing with the camera.

  • A camera does the same thing.

  • It sees things,

  • it classifies objects as pedestrians or vehicles.

  • But the nice thing about our data

  • is we get that understanding

  • as well as the range estimation

  • and velocity estimation of that data

  • all on the same sensor.

  • So when you think about the highway,

  • I want to know what's out there.

  • I want to be able to classify it as a vehicle,

  • as a motorcycle, as a pedestrian,

  • that also tells me how quickly

  • things can move out of my way.

  • It tells me a lot of information

  • the same way we use our information today.

  • If I see a pedestrian on the side of the road

  • and I'm traveling at a certain speed

  • and they're far enough away,

  • I know I don't have to worry about them

  • as an obstacle coming in front of my vehicle.

  • Classifying them as that in my mind

  • is the same way that we classify it in a Lidar-based system

  • and understand again what are threats and what are not.

  • So, very important that we have

  • that capability within our system.

  • The next set of information is the road information.

  • And we talked a bit about

  • our ability to extend the availability of lanes

  • and be able to see lanes.

  • Ultimately, you want to understand where your lane is going,

  • where it's headed,

  • so that you can control the vehicle safely.

  • Are you coming up on a curve?

  • Is there an intersection?

  • Whatever you need to know about the lane

  • and topology of the road is very important.

  • From a highway perspective, highway driving perspective,

  • I want to know what's in my lane,

  • how far out it is,

  • and am I approaching it or is it moving away from me?

  • One of the more difficult things for a Level 2 system

  • and for an ADAS system in general

  • is for a camera to understand that there's an object ahead,

  • but not necessarily be able to understand

  • whether or not it's in the lane

  • or if it's in an adjacent lane

  • just because of the egomotion of the vehicle

  • and not necessarily being able to measure the location.

  • It can estimate the location, but it can't measure it.

  • Usually, you use radar

  • to understand the steering wheel angle,

  • where you're headed, your heading,

  • how far an object is.

  • And you try to fuse that data

  • and come up with an estimation for where it is in your lane.

  • The nice thing about our Lidar is

  • we're detecting lanes real time with our 3D data.

  • We're detecting the vehicle real time with our 3D data.

  • So we get coordinate data back

  • that tells us exactly if that vehicle is in our lane

  • and in our egopath

  • based off of the vehicle steering control information.

  • So with that,

  • we have the ability to understand maybe 150 meters away

  • that this object is out there,

  • it's in my lane,

  • and if I'm approaching that vehicle quickly,

  • maybe I need to make a lane change now

  • before I come up on that vehicle.

  • Or if there's a stopped vehicle,

  • or if there's a vehicle that's slowing down,

  • we're coming up on that vehicle too quickly,

  • I don't necessarily have to figure out over time

  • if that vehicle was in my lane.

  • I know exactly that it is

  • and then I can basically enable the vehicle

  • to do something about it.
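
With lanes and objects measured in the same 3D vehicle frame, lane assignment reduces to comparing an object's lateral offset against the lane-boundary positions evaluated at that object's range. A simplified sketch, assuming boundaries are fitted as polynomials y = f(x) in the vehicle frame; that representation and all the numbers are illustrative, not a production interface:

```python
import numpy as np

def assign_lane(obj_xy, left_coeffs, right_coeffs) -> str:
    """Classify an object as in the ego lane or to its left or right.

    obj_xy   : (x, y) of the object in meters, x forward, y to the left
    *_coeffs : polynomial coefficients per boundary, y = np.polyval(c, x)
    """
    x, y = obj_xy
    y_left = np.polyval(left_coeffs, x)    # left boundary at the object's range
    y_right = np.polyval(right_coeffs, x)  # right boundary at the object's range
    if y_right < y < y_left:
        return "ego lane"
    return "left of ego lane" if y >= y_left else "right of ego lane"

# Illustrative: gently curving lane, boundaries ~1.8 m either side of center.
left = [1e-4, 0.0, 1.8]     # y = 1e-4 * x**2 + 1.8
right = [1e-4, 0.0, -1.8]
print(assign_lane((145.0, 1.2), left, right))  # a vehicle 145 m ahead
```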

  • Very unique capability with Lidar.

  • Very unique capability with our Lidar,

  • because again, we see much further

  • and we see in enough time for you to make a safe maneuver

  • and allow that vehicle to make a safe decision.

  • Safe autonomy, don't forget that.

  • Here's an example.

  • High range vehicle to lane association.

  • So this is just a small example

  • where there's a vehicle out there.

  • And again, we're giving it a lane assignment.

  • We're not depending on other sensors

  • to help give that assignment.

  • And again, we can do that further out.

  • You see the further ranges, 110 and 145 meters.

  • We have that capability.

  • And what it really does is make us a wholly capable sensor

  • that can deliver highway autonomy

  • in terms of lane, object detection,

  • and understanding of the complete scene.

  • So, I've said a lot about Luminar, Lidar, and our system,

  • what we unlock, what we enable,

  • a long range, high resolution sensing,

  • really the only sensor on the market

  • that can really unlock and deliver that capability.

  • If you look at everything we've talked about today,

  • it's what problem do we solve?

  • And the problem we solve is the hands off,

  • eyes off functionality on the highway.

  • We focus on highway

  • because that is the most value to the consumer today.

  • We really feel like we're enabling Level 3,

  • Level 4 driving.

  • And again, with our partnership

  • and our business awards with Volvo, Daimler Truck,

  • and with Mobileye,

  • that should be clear that we are the market leader there

  • and we have a great foundation on which to grow.

  • We also talked about the performance of active safety.

  • My focus today was on highway autonomy.

  • Well, what's important to remember

  • is that safety is the underlying requirement there,

  • safe and ubiquitous.

  • So when we talk about high speed driving,

  • we want to make sure

  • that we can safely control the vehicle,

  • that we can safely brake the vehicle.

  • When you talk about existing systems today for ADAS,

  • a lot of those are low speed.

  • We're trying to mitigate the accident,

  • lessen the severity.

  • We want to eliminate them.

  • If the Lidar can see in inclement weather,

  • if it can see during nighttime conditions,

  • if it can see in fog and things of that nature,

  • why shouldn't your AEB system

  • work in those conditions as well?

  • As a matter of fact, it may be more pertinent

  • that it works and detects things earlier

  • because the ground is slicker

  • or the system is compromised

  • because of the environmental conditions.

  • So the earlier I can detect something in my way as a threat,

  • the better I can do something about that.

  • So proactive safety is something

  • we don't want to forget about.

  • And lastly, I want to reiterate that all requirements matter.

  • Again, you'll hear a lot of noise in the market

  • around range and resolution

  • and what these different capabilities are.

  • We talked about the top row,

  • which is really the performance,

  • every aspect of performance,

  • range, resolution, fidelity,

  • field of view, weather performance.

  • It's all important.

  • And then you can't forget about the bottom row.

  • We are the automotive-grade,

  • series production development program,

  • the series production program for Volvo.

  • So that is going to be the product

  • that I would say is going to really unlock

  • the capability at a consumer level

  • that is robust, safe, and has high quality.

  • We have the advanced manufacturability there

  • so that we'll get the yield

  • and the performance that we need

  • in order to continue to supply the automotive market

  • with scale as we continue to grow our volume

  • and our capability there.

  • And it's also important that we understand

  • in order to deliver that,

  • it takes an organization that is sized and skilled for that.

  • So you have to have the right capability

  • in terms of quality systems.

  • You have to have the right engineering resources.

  • You have to have the right leadership and mindset.

  • We are delivering highway autonomy.

  • Thank you for joining us in this special presentation today.

  • To learn more, please visit LuminarTech.com.

  • Thank you.

(wondrous electronic music)
