According to the World Health Organization's global status report on road safety, approximately 1.35 million people died in traffic accidents in 2018, that is, around 3,700 deaths every day. The vast majority of these injuries and deaths were caused by human error. As one of the most important technologies of the 21st century, self-driving cars, we believe, will effectively eliminate accidents caused by human error.

LiDAR is an indispensable part of the environmental perception system of autonomous vehicles. It gives robots and vehicles perception capabilities superior to those of humans and helps ensure the safety of future mobility. However, in most current autonomous driving solutions, the LiDAR's limited vertical field of view (FOV) and roof-top installation leave blind spots around the vehicle body that the LiDAR cannot scan, which may result in a large number of undetectable, dangerous corner cases and objects (such as pets and children). In this article, we introduce three common LiDAR solutions for near-field blind spot detection.
Vehicle roof-top installation of the LiDAR (the red highlights the area that can be detected while the yellow indicates the undetectable blind spot zone)
Plan A: Fusion of Primary and Auxiliary LiDAR
(The red highlights the detectable area of primary LiDAR and the green indicates the detectable area of the auxiliary LiDAR)
This is currently a very common LiDAR setup: the main LiDAR is installed on top, with two obliquely mounted auxiliary LiDARs (with fewer laser beams) added on both sides of the roof to assist in blind spot coverage.
However, the auxiliary LiDAR is not specifically designed for blind spot detection. Its vertical FOV is usually between 30° and 40°, so small blind areas still remain on both sides of the vehicle body.
In addition, the auxiliary LiDARs contribute very little to detecting the blind areas below the front and rear of the vehicle.
(The red highlights the detectable area of the primary LiDAR, the green indicates the detectable area of the auxiliary LiDAR, and the yellow shows the undetectable blind spots area)
Plan B: Add “As Many LiDARs As Possible”
Simply add a LiDAR wherever there is a blind spot: by increasing the number of LiDARs, the blind spots can be reduced. The installation scheme varies across vehicle models.
However, because most LiDARs have a limited vertical field of view (only 30° to 40°), completely eliminating the blind zone in the near-field space requires a large number of LiDARs, which means extremely high cost and low efficiency. In addition, a vehicle covered in LiDARs is an eyesore.
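The link between a limited vertical FOV and a near-field blind zone is simple trigonometry: if a LiDAR is mounted at height h and its lowest beam points θ degrees below the horizon, the nearest ground point it can see lies at horizontal distance h / tan(θ). The sketch below illustrates this; the mounting height and beam angles are illustrative assumptions, not specifications of any particular sensor.

```python
import math

def nearest_ground_hit(mount_height_m: float, lower_beam_deg: float) -> float:
    """Horizontal distance to the closest ground point the lowest beam reaches.

    mount_height_m: sensor height above the ground (assumed value)
    lower_beam_deg: angle of the lowest beam below horizontal, in degrees
    """
    return mount_height_m / math.tan(math.radians(lower_beam_deg))

# A roof-mounted LiDAR at an assumed 1.8 m whose lowest beam points 15 deg
# below horizontal (half of a 30 deg vertical FOV centred on the horizon)
# first sees the ground roughly 6.7 m away; everything closer is blind.
print(round(nearest_ground_hit(1.8, 15.0), 1))

# A sensor whose FOV extends nearly straight down (as a hemispherical-FOV
# unit does) sees the ground almost at its base.
print(round(nearest_ground_hit(1.8, 89.0), 2))
```

This is why adding more roof-level units with the same 30° to 40° vertical FOV shrinks the blind zone only slowly: each one still cannot see the ground near its own mounting point.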
Plan C: Specialized LiDAR to Achieve Blind Spots Full Coverage
RS-Bpearl is a new type of short-range LiDAR designed specifically for the detection of near-field blind spots. Loaded with RoboSense's innovative signal processing technology, RS-Bpearl can detect objects within a few centimeters. Combined with a super-wide field of view of approximately 360° × 90°, it can effectively cover the blind spots around the vehicle.
(The red highlights the detectable area of the primary LiDAR, the green indicates the detectable area of the RS-Bpearl. The blind-spot zone is fully covered)
Super-Wide Hemispheric FOV Coverage of Approximately 360° × 90° to Completely Solve the Blind Spot Problem
RS-Bpearl has a super-wide hemispheric FOV of approximately 360° × 90°, which also allows it to capture actual height information in particular scenarios, such as bridges, tunnels, and culverts, further improving autonomous driving decision-making and driving safety.
RS-Bpearl point cloud image in multiple scenarios
Point cloud image of speed bumps
RS-Bpearl point cloud image of the car crossing the bridge
RS-Bpearl point cloud image of the car passing through tunnel
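To illustrate how height information from an upward-looking point cloud can feed decision-making, here is a minimal sketch of overhead-clearance estimation. The point format, corridor half-width, and synthetic cloud are all illustrative assumptions, not RoboSense data or APIs: points are (x, y, z) tuples in metres in the sensor frame with z pointing up.

```python
def overhead_clearance(points, corridor_half_width_m=1.5):
    """Height of the lowest point above the sensor inside the vehicle's
    forward corridor, or None if nothing overhead was seen.

    points: iterable of (x, y, z) tuples in the sensor frame, z up
    corridor_half_width_m: assumed half-width of the vehicle's corridor
    """
    overhead = [
        z for (x, y, z) in points
        if z > 0.0 and abs(y) <= corridor_half_width_m  # above sensor, in corridor
    ]
    return min(overhead) if overhead else None

# Synthetic cloud: a ground return (z < 0), two ceiling returns at 2.4 m
# and 2.6 m overhead, and a point outside the corridor that is ignored.
cloud = [(5.0, 0.2, -1.8), (3.0, -0.4, 2.4), (2.5, 0.1, 2.6), (1.0, 4.0, 1.0)]
print(overhead_clearance(cloud))
```

A planner could compare the returned clearance against the vehicle height before committing to pass under a bridge or into a culvert.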
Minimum Detection Range of Less Than 5 cm
Currently, the minimum detection distance of LiDARs available on the market generally ranges from 20 cm to 50 cm, which means that, when installed on an autonomous vehicle, they cannot guarantee complete detection of obstacles near the vehicle body.
The RS-Bpearl, with a minimum detection range of less than 5 centimeters, can precisely identify objects around the vehicle body and help the vehicle easily handle corner cases such as detecting pets and children or navigating narrow lanes and dense traffic. It thereby achieves zero blind spots in the sensing zone to ensure the safety of autonomous driving.
RS-Bpearl point cloud image of traffic roadblock beside the car
RS-Bpearl point cloud image of a vehicle passing by
Small Size, Vehicle Friendly
Image of RoboSense RS-Bpearl (φ100 mm × H111 mm)
The compact size of the RoboSense RS-Bpearl (φ100 mm × H111 mm) and its top-mounted hemispherical optical window allow the non-optical part of the product to be completely embedded in the vehicle body. In addition, the innovative modular design of the RS-Bpearl dramatically reduces costs while making the product more flexible, compact, and customizable.
Cost-effective, High Performance
- Laser Lines: 32
- Laser Wavelength: 905nm
- Points Per Second: 576,000pts/s (single return mode)
- Points Per Second: 1,152,000pts/s (dual return mode)
- Weight (without cabling): ~0.92 kg
- Dimension: φ100 mm × H111 mm
- Operating Temperature: -30°C ~ +60°C
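The point-rate figures in the list above are internally consistent, which can be checked with a little arithmetic: 576,000 points per second spread across 32 laser channels implies 18,000 firings per channel per second, and dual-return mode reports up to two returns per firing, exactly doubling the count. A quick sanity check:

```python
LASER_CHANNELS = 32
SINGLE_RETURN_PTS = 576_000   # points per second, single return mode
DUAL_RETURN_PTS = 1_152_000   # points per second, dual return mode

# Each point in single-return mode corresponds to one laser firing.
firings_per_channel = SINGLE_RETURN_PTS // LASER_CHANNELS
print(firings_per_channel)  # 18000 firings per channel per second

# Dual-return mode reports up to two returns per firing, so its maximum
# point rate is exactly double the single-return figure.
print(DUAL_RETURN_PTS == 2 * SINGLE_RETURN_PTS)  # True
```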
Four RS-Bpearls embedded sideways around the vehicle, each providing a hemispherical scanning area from the vehicle's perspective, together guarantee a complete 360° surround view and full coverage of the sensing area, with zero blind spots in the vehicle's driving space.
For more information about RS-Bpearl, please visit https://www.robosense.ai/rslidar/RS-Bpearl
Founded in 2014, RoboSense (Suteng Innovation Technology Co., Ltd.) is a leading provider of Smart LiDAR Sensor Systems incorporating LiDAR sensors, AI algorithms, and IC chipsets, which transform conventional 3D LiDAR sensors into full data analysis and comprehension systems. The company's mission is to combine outstanding hardware and artificial intelligence capabilities to provide smart solutions that give robots (including vehicles) perception capabilities superior to those of humans.
RoboSense has attracted an all-star team from leading corporations and institutions around the world, with more than 500 employees across six global locations (Shenzhen, Beijing, Shanghai, Suzhou, Stuttgart, and Silicon Valley) supporting its fast-growing innovation and development. As of 2019, RoboSense holds more than 500 patents globally.
Market-oriented, the company provides customers with a range of Smart LiDAR perception system solutions, including MEMS and mechanical LiDAR hardware, fusion hardware units, and AI-based fusion systems.
Having garnered the AutoSens Award, the Audi Innovation Lab Champion title, and the CES Innovation Award twice, RoboSense has laid a solid foundation for market success. To date, RoboSense LiDAR systems have been widely applied in future mobility, including autonomous driving passenger cars, RoboTaxis, RoboTrucks, automated logistics vehicles, autonomous buses, and intelligent roads, by domestic and international autonomous driving technology companies, OEMs, and Tier 1 suppliers.