Ex-Apple engineers unveil a next-generation sensor for self-driving cars

Aeva, a Mountain View, California-based startup founded just last year, has built what its two co-founders claim is a next-generation version of LIDAR, the 3D mapping technology that has become instrumental to how self-driving cars measure the distance of objects and effectively see the road in front of them. Today, the company is officially unveiling its product, a tiny box that more directly measures the objects in a given scene, along with their distance and velocity relative to one another. This, the company says, will help power future self-driving systems as autonomous cars become more commonplace and the software that powers them grows increasingly demanding and data-driven.

In an expansive, empty warehouse on the San Francisco waterfront last week, Aeva strapped its product to the top of a commercial vehicle and demoed its ability to visualize the surrounding scene as a small team of people rode scooters and golf carts in zigzagging lines in front of the vehicle. Aeva’s technology is able to separate objects based on distance and whether an object is moving away from or toward it. It can also measure the velocity of the object, which lets the software predict where cars and pedestrians are going. The company even says its sensing system is capable of completely shutting out interference from other, similar sensors — including those from other companies — and of operating in all weather conditions and in the dark, thanks to a reflectivity sensor.

Image: Aeva

“Typically you have separate LIDAR, separate camera, and separate motion sensors and fuse them in a central compute box,” says Aeva co-founder Soroush Salehian. “Our product has access to the lowest levels of data. We can measure pixels on certain objects, like a human limb. We can measure the velocity and motion of a pedestrian or object, and we can predict the future motion of those objects pretty accurately.” Salehian says Aeva’s product isn’t an advancement in artificial intelligence per se. Rather, he says, it is a mix of hardware and software that does a superior job of capturing data about its surroundings.

Not only is Aeva’s version of LIDAR superior to the variety found in most self-driving test vehicles on the road today, the company says, but the lightweight, low-power box it’s housed in also contains all the other types of sensors and cameras necessary for an autonomous vehicle to see and make sense of every component within its field of vision. It is essentially a better pair of eyes for the AI algorithms that power self-driving systems on the road today. Aeva is hoping its product, which it’s referring to as a 4D LIDAR system, will become an industry-standard hardware and software package for use in commercial applications.

Normally, claims like this would invite some serious skepticism. Yet Aeva’s co-founders, Salehian and his business partner Mina Rezk, are former Apple engineers with a unique pedigree: both worked on the company’s “Special Projects” team and, although they will not say so, likely had a hand in the company’s secretive autonomous car division. Prior to that, Salehian worked on developing the first Apple Watch and the iPhone 6, while Rezk is a veteran of Nikon, where he worked on optical hardware.

That experience may give them a leg up over other startups trying to enter the self-driving scene. Yet that market is crowding quickly. It’s already home to large tech companies like Alphabet’s Waymo, Uber, Tesla, and Apple, as well as a number of existing companies like leading LIDAR provider Velodyne and, these days, pretty much every carmaker in the auto industry. Aeva is hoping to carve out its own space with what its co-founders are calling a next-generation advancement.

Image: Aeva

Both men left Apple last year to found Aeva, and they’ve already attracted interest from a number of venture capital firms — the startup is announcing a $45 million round of funding today. They’ve also caught the eye of carmakers, some of which Salehian and Rezk say are already taking into account the company’s new sensor when designing future commercial vehicles. The company is not yet ready to announce those business partners in the auto industry, though Salehian and Rezk say they’ll reveal some names soon.

But what exactly makes Aeva’s product better, and how does it work? Aeva won’t say exactly what the secret sauce is — though Salehian told me it is a mix of hardware and software working in tandem that allows the sensor to capture data more directly. The way the company describes it, the Aeva box is “able to not only improve the performance of traditional LIDARs, but also give each LIDAR point an instantaneous measurement of velocity, unlocking the temporal element that is often overlooked and missed.” By capturing depth and visual information, performing positional tracking, and capturing velocity, Aeva says it is able to build a stronger model for how a self-driving car sees the road.
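Aeva hasn’t published the format of its data, but the idea of attaching an instantaneous velocity to every return is easy to picture. As a rough, purely illustrative sketch in Python (the field names and units here are assumptions, not Aeva’s actual interface), a “4D” point might carry a radial velocity alongside its position, which is what would let moving objects be separated from the static background within a single frame:

```python
from dataclasses import dataclass

# Hypothetical "4D" LIDAR return: a 3D position plus an instantaneous
# radial velocity per point, rather than position alone as in a
# conventional point cloud. Field names and units are illustrative,
# not Aeva's actual data format.
@dataclass
class FourDPoint:
    x: float             # meters, sensor frame
    y: float             # meters
    z: float             # meters
    velocity: float      # meters per second along the line of sight (+ = receding)
    reflectivity: float  # relative strength of the surface return

def moving_points(cloud: list[FourDPoint], threshold: float = 0.5) -> list[FourDPoint]:
    """Separate moving objects from the static background in one frame,
    using per-point velocity instead of differencing consecutive scans."""
    return [p for p in cloud if abs(p.velocity) > threshold]
```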

Another aspect of the product is how the LIDAR portion of the sensor functions, which in Aeva’s case differs from other LIDAR products on the market. Initially conceived as a surveying laser technology for creating topographic maps of hard-to-reach areas — including the surface of the Moon during the Apollo 15 mission — LIDAR has become pivotal to self-driving cars because pulses of light turn out to be great for measuring how far away objects are. Send out a pulsed laser beam, and the time it takes the light to bounce back, along with its wavelength, can be used to build accurate 3D maps of an environment. So if a sensor can measure how far away another car is, the AI algorithms, combined with a visual feed and depth and motion sensors, can identify and even predict when that car is slowing down or changing lanes.
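The distance half of that is simple arithmetic: for a pulsed, time-of-flight system, range is half the round-trip travel time multiplied by the speed of light. A minimal sketch, not tied to any particular vendor’s hardware:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to a target from the round-trip time of a laser pulse.

    The pulse travels out to the target and back, so the one-way
    distance is half the total path length.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A return arriving 200 nanoseconds after the pulse was fired puts the
# target roughly 30 meters away.
print(distance_from_round_trip(200e-9))  # ~29.98 m
```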

Rezk tells me that Aeva’s product does not send out pulses of light at set intervals. Instead, the company’s LIDAR is continuous, allowing it to gather data faster and more directly, which the company claims is one of the key differentiating factors for its product. “Everything here is a direct measurement rather than any algorithms involved,” Rezk says. “Our 4D LIDAR tech can measure the depth, reflectivity, and velocity in a single laser shot, all instantaneous. No need to spend time looking at frames.” Essentially, instead of feeding data into a central AI system that then tries to make sense of it all before making a decision, Aeva says its product does a lot of the laborious measurement work right away.
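Aeva doesn’t spell out how its continuous beam yields velocity, but continuous-wave laser systems in general can read radial velocity straight from the Doppler shift of the returned light, which would be consistent with the “single laser shot” claim. A hedged sketch of that relationship, with an assumed 1550 nm wavelength that Aeva has not confirmed:

```python
def radial_velocity_from_doppler(doppler_shift_hz: float,
                                 laser_wavelength_m: float = 1550e-9) -> float:
    """Radial velocity implied by the Doppler shift of reflected light.

    For a target moving at speed v along the beam, the reflected light
    shifts in frequency by roughly 2 * v / wavelength (the factor of 2
    comes from the round trip). 1550 nm is a common telecom-band laser
    wavelength, used here purely as an assumption; Aeva has not
    disclosed its laser parameters.
    """
    return doppler_shift_hz * laser_wavelength_m / 2.0

# Example: a 12.9 MHz shift at 1550 nm corresponds to about 10 m/s
# (roughly 36 km/h) toward or away from the sensor.
print(radial_velocity_from_doppler(12.9e6))  # ~10.0 m/s
```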

Both men were careful to stress that this is not designed to be a fully comprehensive self-driving system. Although there is an element of computer vision working behind the scenes to dissect the live video and merge that information with the data from the other sensors, Aeva says its product will not operate a vehicle. It is not designed to do so, and any company that uses Aeva’s product will still need to develop its own AI software capable of operating a vehicle with little to no human intervention.

Image: Aeva

Aeva’s pitch, however, is that its sensors are better than the ones you can get from competitors, and that self-driving systems today that are cobbled together using third-party hardware and custom software would operate more efficiently and safely using the company’s box. That’s another benefit the company says it offers: a low-power box that is affordable, easy to manufacture, and will work with pretty much any self-driving software from any company.

Salehian tells me that the goal is to have the box cost no more than a standard in-car sensor, although he clarified that you’d need more than one of the boxes to enable higher levels of autonomous driving. Presumably, you’d need at least two boxes — one facing forward and the other backward — to gather the data necessary for the kind of self-driving that lets a human take their hands off the wheel and foot off the pedal. There isn’t a concrete price for the device right now.

Aeva’s technology does sound impressive, and seeing the visual feed operating in real time makes a convincing case that Salehian and Rezk have indeed come up with a truly novel system here that may be superior to existing tech. Just comparing images Aeva has shared of what its system can capture with what, say, Tesla’s Autopilot system sees when it looks at the road illustrates a stark difference in the level of sophistication. (Tesla notoriously does not use LIDAR of any kind.) But it’s hard to think that Aeva’s system will be unique for very long, or that industry leaders like Waymo or Velodyne aren’t developing similar products of their own.

Still, for the time being, it would seem that Aeva has a state-of-the-art system on its hands, at least until competitors start showing off new laser sensing systems of their own. If the company secures deals with automakers, Salehian and Rezk have a real shot at realizing their dream of becoming a new foundational backbone of the emerging autonomous vehicle market. For now, however, the team is hard at work improving its system. When I left the warehouse after Aeva’s last meeting of the day, the duo said they were eager to use the remaining rental time to do more field testing. It seems their product, just like the AI algorithms that may one day rely on it, will always have room for improvement.

Source: https://www.theverge.com/2018/10/1/17915276/aeva-4d-lidar-technology-next-gen-self-driving-car-sensor-system

