Navigating Towards an Autonomous Vehicle Future
Driverless cars are frequently in the news, and as the technology advances we may soon see a few on public roads. Though driverless cars have achieved exemplary results in simulations and trials, it is still challenging to replicate that success in real-world scenarios, where they must handle unpredictable situations created by other vehicles, pedestrians, and various other variables in their surroundings. Navigating through these seamlessly and safely remains a point of concern, and it becomes even more difficult at multi-lane roads and intersections.
To address such issues, aggressive R&D is underway in many places around the world. One recent development, from Stanford University, looks quite promising.
Researchers at Stanford University recently created LUCIDGames, an algorithm that can predict and plan adaptive trajectories for autonomous vehicles. The technique combines game theory with an estimation method. LUCIDGames can quickly identify the objectives of the cars and pedestrians in a vehicle's vicinity, which allows it to predict what these different agents will do next, even in complex situations.
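To give a feel for the idea of estimating another agent's objective from its motion and then predicting its trajectory, here is a deliberately simplified sketch. It is not the LUCIDGames algorithm itself (which solves a dynamic game with a more sophisticated estimator); the candidate goals, likelihood model, and constant-speed rollout below are all illustrative assumptions.

```python
import numpy as np

# Hypothetical candidate goals for a neighboring car at an intersection.
GOALS = {"straight": np.array([0.0, 50.0]),
         "left_turn": np.array([-30.0, 20.0]),
         "right_turn": np.array([30.0, 20.0])}

def update_belief(belief, pos, vel, sigma=5.0):
    """Bayesian update: goals the agent is heading toward gain probability."""
    new = {}
    for name, goal in GOALS.items():
        direction = goal - pos
        direction = direction / (np.linalg.norm(direction) + 1e-9)
        # Likelihood is high when the observed velocity points toward the goal.
        err = np.linalg.norm(vel - np.linalg.norm(vel) * direction)
        new[name] = belief[name] * np.exp(-err**2 / (2 * sigma**2))
    total = sum(new.values())
    return {k: v / total for k, v in new.items()}

def predict(pos, speed, goal, steps=5, dt=1.0):
    """Roll out a constant-speed trajectory toward the estimated goal."""
    traj = []
    for _ in range(steps):
        direction = goal - pos
        direction = direction / (np.linalg.norm(direction) + 1e-9)
        pos = pos + speed * dt * direction
        traj.append(pos.copy())
    return traj

# Observe a car drifting left and forward over three timesteps.
belief = {name: 1.0 / len(GOALS) for name in GOALS}
pos, vel = np.array([0.0, 0.0]), np.array([-2.0, 2.0])
for _ in range(3):
    belief = update_belief(belief, pos, vel)
    pos = pos + vel
likely = max(belief, key=belief.get)
print(likely)  # the left-turn goal best explains the observed motion
future = predict(pos, np.linalg.norm(vel), GOALS[likely])
```

The key point the sketch shares with the real approach is that the objective is inferred online from observed behavior rather than assumed in advance, so the prediction adapts as the other agent reveals its intent.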
Now, when it comes to autonomous vehicles, there are defined levels that describe the extent of automation, with Level 0 being the traditional cars we are all used to. This taxonomy is given by SAE International, and here is what it means:
Levels of autonomous vehicle technology
Level 1: Adaptive cruise control is an example of Level 1, where the car and the driver share control. The driver takes care of steering while the automated system manages the throttle and braking at a predetermined speed. Parking assist and lane keep assist are also examples of Level 1.
Level 2: The automated system can fully manage steering, acceleration, and braking, though the driver must remain on standby in case the system fails. Sensors monitor the driver's attentiveness by tracking eye movement.
Level 3: The car can drive itself in most scenarios, and the driver does not need to pay attention to driving. However, certain set conditions for camera view, weather, GPS, etc. must be met for the car to operate in automated mode, and the driver still needs to intervene whenever any of these conditions are not met.
Level 4: This level comprises intelligent systems that may not even have a steering wheel, and the driver may not be required to be awake or able to retake control. It would mostly apply to robotized taxi or delivery services operating in specific areas where predetermined optimal conditions are met.
Level 5: A Level 5 car would require no human supervision at all. It would be capable of navigating all terrains, weather, and situations. While Level 4 is already being explored, Level 5 automation is still too complex to implement.
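The taxonomy above can be captured in a small lookup table. The sketch below paraphrases the level descriptions from this article (not official SAE J3016 wording), with the practical distinction being whether a human must remain available to take over.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SaeLevel:
    level: int
    summary: str
    human_fallback_required: bool  # must a driver be available to take over?

# Paraphrased summaries of the levels described above.
SAE_LEVELS = [
    SaeLevel(0, "No automation: the driver does everything", True),
    SaeLevel(1, "Driver assistance: car and driver share control", True),
    SaeLevel(2, "Partial automation: system steers, accelerates, and brakes; "
                "driver on standby", True),
    SaeLevel(3, "Conditional automation: self-driving within set conditions; "
                "driver intervenes when conditions are not met", True),
    SaeLevel(4, "High automation: no driver needed within a specific "
                "operating area (e.g., robotaxis)", False),
    SaeLevel(5, "Full automation: no human supervision in any conditions", False),
]

def human_fallback_required(level: int) -> bool:
    return SAE_LEVELS[level].human_fallback_required

print(human_fallback_required(2))  # True
print(human_fallback_required(4))  # False
```

The table makes the jump between Levels 3 and 4 explicit: it is the first point at which the human fallback disappears, which is why the article treats Level 4 deployments as the current frontier.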
So what is the status of the more complex levels, that is, Level 3 and beyond?
Well, it turns out there are in fact quite a few developments happening in this space.
Baidu's Apollo integrated AI system allows vehicles to operate without a driver inside the vehicle. It uses 5G-enabled teleoperation to make sure there are no untoward incidents.
Cruise, the self-driving car company backed by GM and Honda, is testing fully driverless cars in San Francisco without a human safety driver behind the steering wheel. This is one of the few instances of driverless cars being tested in the complex environment of a bustling city.
Cruise only recently showcased its Level 4 capabilities, while Waymo, Google's sister company under Alphabet, announced it would make its Level 4 taxi service accessible to customers. Cruise had originally planned a 2019 launch for its commercial driverless taxis but failed to deliver.
However, despite such advanced developments, safety remains the prime concern for consumer acceptability of driverless cars.
Vehicles are being fitted with complex technology to avoid accidents. Multiple LIDARs are distributed over the surface of a vehicle, along with several cameras and radars, to create overlapping sensor fields. This serves not only to increase the visibility of obstacles in the vehicle's path but also to eliminate blind spots.
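A toy calculation makes the blind-spot point concrete: merge each sensor's horizontal field of view (FOV) around the vehicle and check which bearings no sensor covers. The sensor placements and FOV widths below are hypothetical, chosen only to show how overlap closes the gaps.

```python
# Hypothetical sensor suite: (mounting_angle_deg, fov_deg) per sensor,
# e.g., four wide-angle cameras facing front, right, rear, and left.
SENSORS = [(0, 120), (90, 120), (180, 120), (270, 120)]

def coverage_gaps(sensors, step=1):
    """Return the bearings (in whole degrees) not seen by any sensor."""
    covered = set()
    for mount, fov in sensors:
        half = int(fov // 2)
        for d in range(-half, half + 1, step):
            covered.add((mount + d) % 360)  # wrap around the vehicle
    return sorted(set(range(0, 360, step)) - covered)

print(coverage_gaps(SENSORS))      # [] -> overlapping FOVs leave no blind spot
print(coverage_gaps(SENSORS[:2]))  # gaps appear once sensors are removed
```

With all four sensors, each 120-degree FOV overlaps its neighbors (4 x 120 > 360), so no bearing is unobserved; drop two sensors and a large rear-left blind spot opens up. Real suites repeat the same reasoning per sensor modality (LIDAR, radar, camera) so that each bearing is covered by more than one type.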
With so much happening in the domain of driverless cars, it is no surprise that IP activity is surging as well, and many startups are expected to leverage this space. The domain is also witnessing a lot of cross-industry activity and collaboration.
In 2020, Ford was the most active patent filer for vehicle navigation and control systems, followed by Toyota Motor Corp. and LG Electronics Inc. Intel plans to add radar and LIDAR (a laser-based system that builds a 3D view of the road) to boost its self-driving tech by 2025.
Oxbotica, one of the pioneers of autonomous tech, has now raised $47 million in Series B funding from BP, BGF, Halma, Tencent, Venture Science, and others. Oxbotica is looking to seize opportunities with industrial companies in sectors like mining and port logistics, targeting off-road applications: its autonomous vehicle software can be plugged into industrial vehicles for fleet management, navigation, and perception.
Qualcomm's Snapdragon Automotive 5G Platform and 3rd-generation Snapdragon Automotive Cockpit Platform will be incorporated into NIO's flagship sedan. NIO, which launched the EP9, one of the fastest electric cars in the world, is now leveraging Qualcomm's expertise in computing and connectivity to improve its smart mobility tech.
Velodyne Lidar has signed a multi-year contract with Motional, the autonomous driving joint venture between Hyundai and Aptiv, to supply Alpha Prime sensors that provide long-range vision for Level 4 autonomous vehicles. Velodyne's patented 360-degree perception tech enables autonomous driving in rain, snow, and sleet, as well as in complex urban environments.
The Future of Autonomous Vehicle Navigation
Safety remains the topmost concern for automotive manufacturers and consumers. The focus is on solving less common road scenarios that differ from normal roadway conditions, and the existing suite of sensors on autonomous vehicles is still insufficient for dealing with every possibility. Researchers are exploring technologies like thermal imaging and long-wave infrared (FIR) cameras in addition to advanced LIDAR, ultrasonic, and radar sensors. Multiple FIR sensors will prove useful as we work toward fully autonomous cars in the future.