By Joseph Notaro, Vice President, Global Automotive Strategy and Business Development, ON Semiconductor

Technology collaboration across the autonomous driving ecosystem can help bring autonomous driving to market and demonstrate that it is safe, efficient, and feasible.

Autonomous vehicles (AVs) are rapidly moving from hype to reality. Emerj’s recent report documents the plans of the 11 largest automakers, with Honda, Toyota and Renault-Nissan planning launches as early as next year. It is clear, however, that deploying mass-produced autonomous vehicles imposes requirements well beyond those of conventional vehicles: autonomous driving demands active interaction with the driver, other vehicles and the road infrastructure, as well as far more validation. No single player can do this alone; it requires cooperation between the different players in the autonomous driving ecosystem. Recent technology collaborations, such as those around 3M’s next-generation smart code signing technology and NVIDIA’s DRIVE Constellation™ simulation platform, have demonstrated the importance of ecosystems in enabling autonomous vehicles.

The technology development ecosystem is critical to the continuous improvement of autonomous driving safety

Some progress has already been made and, despite one high-profile incident, the safety record of Level 3+ systems remains excellent. The California Department of Motor Vehicles (DMV) compiles and publishes human-intervention statistics for all companies testing self-driving cars on its roads; last year, Waymo’s vehicles drove 1.2 million miles with one intervention every 11,018 miles. Not only has the intervention rate nearly halved since 2017, but the distance between interventions is fast approaching the average annual mileage of a driver in the US (13,476 miles) and is already more than 1.5 times the average annual mileage in the UK (7,134 miles).
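As a rough sanity check of those figures (a minimal sketch in Python; only the mileage numbers quoted above come from the DMV report, the rest is simple arithmetic):

```python
# Back-of-envelope check of the disengagement figures quoted above.
fleet_miles = 1_200_000          # Waymo's reported test mileage for the year
miles_per_intervention = 11_018  # reported miles per human intervention

interventions = fleet_miles / miles_per_intervention
print(f"~{interventions:.0f} interventions over the year")  # ~109

# Compare the interval between interventions with typical annual mileage.
us_avg_annual_miles = 13_476
uk_avg_annual_miles = 7_134
print(f"{miles_per_intervention / us_avg_annual_miles:.2f}x the US annual average")  # ~0.82x
print(f"{miles_per_intervention / uk_avg_annual_miles:.2f}x the UK annual average")  # ~1.54x
```

Put another way, a driver covering the US average mileage would go roughly ten months between interventions, and one covering the UK average more than a year and a half.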

Sensor Progress
At the heart of autonomous vehicles is perception technology. This is paired with low-latency vehicle-to-vehicle and vehicle-to-infrastructure communication systems, and the combined data is interpreted by a powerful artificial intelligence (AI)-based processor.

There are three core sensor technologies:
・ LiDAR: used for depth mapping; current systems can detect beyond 100 meters and offer a wide field of view
・ Radar: used for motion measurement (up to 300 km/h) and for object detection and tracking at ranges of up to 300 m
・ Camera: used for identification and classification

Not all vehicles will use the same combination of sensors (some currently use only radar and imaging, while others use LiDAR and imaging), but each additional sensor provides more data, and the sensors complement one another through sensor fusion, greatly improving the accuracy, safety and reliability of the whole system and vehicle, as the simple sketch below illustrates.
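As a highly simplified illustration of the idea (a sketch only, not any vendor’s actual fusion algorithm; the sensor variances and range values are assumptions chosen for illustration), inverse-variance weighting shows how combining two independent range estimates yields a result more accurate than either sensor alone:

```python
# Minimal inverse-variance fusion of two independent range estimates.
# All numbers are illustrative assumptions, not real sensor specifications.
def fuse(measurements):
    """measurements: list of (value, variance) pairs from different sensors."""
    weights = [1.0 / var for _, var in measurements]
    fused = sum(w * v for (v, _), w in zip(measurements, weights)) / sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused, fused_var

radar_range = (52.0, 4.0)    # metres, variance in m^2 (assumed)
camera_range = (54.5, 9.0)   # metres, variance in m^2 (assumed)

estimate, variance = fuse([radar_range, camera_range])
print(f"fused range: {estimate:.1f} m (variance {variance:.2f} m^2)")
```

The fused variance is lower than either input variance, which is precisely why adding complementary sensors improves the accuracy and reliability of the overall estimate.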

Every core sensor technology is constantly improving. Thanks to next-generation silicon photomultiplier (SiPM) and single-photon avalanche diode (SPAD) solutions, ON Semiconductor is enabling LiDAR systems to detect targets at longer distances, even those with low reflectivity, while reducing system size and cost. The company is also developing radar technology that operates in both short-range and long-range modes on the same IC, improving accuracy, reducing power consumption and cutting part count. On the imaging side, sensors such as ON Semiconductor’s Hayabusa series offer a wider range of resolution options to meet the diverse needs of autonomous vehicles.

Thanks to the development of an advanced pixel structure, the Hayabusa series also features an industry-leading Super Exposure mode that supports a high dynamic range of over 140 dB, providing high-quality images in challenging scenes that contain both very dark and very bright areas, while its LED flicker mitigation (LFM) suppresses the flicker of increasingly common LED vehicle lights, road signs and street lighting.
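To put the 140 dB figure in context, dynamic range for image sensors is conventionally expressed as 20·log10 of the ratio between the brightest and darkest resolvable signals; a short calculation (a sketch assuming only that convention) shows what such a range means:

```python
def db_to_ratio(db):
    """Convert a dynamic range in dB to a brightest:darkest signal ratio (20*log10 convention)."""
    return 10 ** (db / 20.0)

print(f"{db_to_ratio(140):,.0f} : 1")  # 10,000,000 : 1
```

A ratio of ten million to one is what allows a single frame to resolve detail in both a dark underpass and direct sunlight.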

Another important example of progress in sensor technology and the autonomous vehicle ecosystem is that vehicles will be able to communicate with the road infrastructure itself. This can be critical, for example, in alerting the vehicle to dangerous road conditions or an accident ahead.

An autonomous driving ecosystem can improve the efficiency and safety of autonomous vehicles by defining and facilitating how vehicles communicate with the road network to receive warnings of dangerous situations or accidents ahead. Short-range wireless communication is a key part of achieving this goal, but it is expensive to deploy across the road network and vulnerable to hacking, so safety mechanisms and cybersecurity solutions need to be established.

3M has therefore also turned to a vision-based approach, recently announcing a partnership with ON Semiconductor to help improve navigation in vehicles equipped with self-driving capabilities. Wireless communication systems can cover major roads, but deploying wireless infrastructure on smaller roads and temporary routes may not be feasible, and a vision-based approach fills that gap.

Image sensors can now “see” far beyond a human driver, and through co-development with 3M, road signs can convey additional information to the vehicle, assisting drivers beyond traditional advanced driver assistance systems (ADAS) and paving the way towards autonomous driving. The results of the collaboration were exhibited at CES in January, where ON Semiconductor’s AR0234AT CMOS image sensor was integrated with 3M’s smart code signing technology.
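Conceptually, a perception stack could then treat a sign as two channels of information: the human-readable legend that a conventional ADAS classifier already reads, plus a machine-readable payload embedded in the sign. The sketch below is purely illustrative; the article does not describe 3M’s actual encoding, and both helper functions are hypothetical stand-ins that return canned values.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SignObservation:
    legend: str               # human-readable legend a conventional ADAS classifier reads
    payload: Optional[dict]   # extra machine-readable data carried by the sign, if any

def classify_sign(image_patch) -> str:
    # Hypothetical placeholder: a real system would run a trained sign classifier here.
    return "SPEED LIMIT 65"

def decode_embedded_code(image_patch) -> Optional[dict]:
    # Hypothetical placeholder: 3M's actual encoding is not described in the article,
    # so this simply pretends the sign carries a small structured payload.
    return {"work_zone": True, "lanes_closed": 1}

def interpret_sign(image_patch) -> SignObservation:
    return SignObservation(classify_sign(image_patch), decode_embedded_code(image_patch))

print(interpret_sign(image_patch=None))
```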

The addition of ON Semiconductor’s vision technology improves accuracy, provides redundancy and enables vehicle-to-infrastructure communication to be deployed where wireless systems are impractical. Demystifying this type of technology could also help increase consumer trust in self-driving car technology.

Processors in autonomous vehicles face considerable computational challenges: they must not only fuse the outputs of different sensors but also process the vast amounts of data those sensors generate (especially the vision systems). The ecosystem is therefore crucial for guiding technology development across companies and relieving the pressure on automotive processors.
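A rough back-of-envelope estimate (a sketch; the resolution, frame rate, bit depth and camera count below are illustrative assumptions, not figures from the article) shows why vision data dominates the load:

```python
# Illustrative raw data-rate estimate for a multi-camera vehicle; all parameters are assumptions.
width, height = 1920, 1200   # pixels per frame (~2.3 MP, assumed)
fps = 30                     # frames per second (assumed)
bits_per_pixel = 12          # raw bit depth (assumed)
num_cameras = 8              # cameras on the vehicle (assumed)

bytes_per_second = width * height * fps * bits_per_pixel * num_cameras / 8
print(f"~{bytes_per_second / 1e9:.1f} GB/s of raw image data")  # ~0.8 GB/s
```

Even before radar and LiDAR are added, the cameras alone can generate close to a gigabyte of raw data every second under these assumptions.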

A prime example of such a development ecosystem is NVIDIA DRIVE, a complete hardware and software platform that enables system developers to collaborate and leverage advanced development systems to accelerate the design and production of autonomous vehicles. DRIVE combines deep learning, sensor fusion and surround vision to transform the driving experience, and is designed to meet ISO 26262 ASIL-D, the most stringent automotive functional safety level.

An example of this ecosystem in action was presented at the GPU Technology Conference in March, where NVIDIA and ON Semiconductor demonstrated an open, cloud-based platform that streams real-time data from image sensors to NVIDIA DRIVE Constellation. This supports simulation for large-scale testing and validation, accelerating progress towards safe, robust driverless vehicles.
