WO2020147485A1 - Information processing method, system, device and computer storage medium - Google Patents

Information processing method, system, device and computer storage medium Download PDF

Info

Publication number
WO2020147485A1
WO2020147485A1 · PCT/CN2019/126008 · CN2019126008W
Authority
WO
WIPO (PCT)
Prior art keywords
coordinates
lidar
ultrasonic radar
obstacle
vehicle
Prior art date
Application number
PCT/CN2019/126008
Other languages
English (en)
French (fr)
Inventor
朱晓星
刘祥
杨凡
Original Assignee
北京百度网讯科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京百度网讯科技有限公司 filed Critical 北京百度网讯科技有限公司
Priority to US17/251,169 priority Critical patent/US20210263159A1/en
Priority to EP19910614.7A priority patent/EP3812793B1/en
Priority to JP2020569997A priority patent/JP7291158B2/ja
Publication of WO2020147485A1 publication Critical patent/WO2020147485A1/zh

Links

Images

Classifications

    • G01S15/86: Combinations of sonar systems with lidar systems; combinations of sonar systems with systems not using wave reflection
    • G01S15/89: Sonar systems specially adapted for mapping or imaging
    • G01S15/931: Sonar systems specially adapted for anti-collision purposes of land vehicles
    • G01S17/42: Simultaneous measurement of distance and other co-ordinates
    • G01S17/86: Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S17/87: Combinations of systems using electromagnetic waves other than radio waves
    • G01S17/931: Lidar systems specially adapted for anti-collision purposes of land vehicles
    • G01S7/4802: Using analysis of echo signal for target characterisation; target signature; target cross-section (lidar)
    • G01S7/487: Extracting wanted echo signals, e.g. pulse detection (lidar receivers)
    • G01S7/539: Using analysis of echo signal for target characterisation; target signature; target cross-section (sonar)
    • G06F18/251: Fusion techniques of input or preprocessed data
    • G06V20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads

Definitions

  • This application relates to the field of automatic control, and in particular to an information processing method, system, equipment and computer storage medium.
  • Unmanned vehicles integrate multiple types of sensors: a GPS-IMU (Inertial Measurement Unit) integrated navigation module, cameras, lidar, millimeter-wave radar, and so on.
  • Different types of sensors have different strengths and weaknesses. A lidar installed at the center front end of an unmanned vehicle has a wide detection range, higher-precision sensing data and a long longitudinal detection distance, but because of the laser ranging principle it has a detection blind zone at close range. To cover this blind zone, the industry generally installs ultrasonic radar and a forward lidar at the front bumper of the vehicle body. When an ultrasonic radar measures a distant target, its echo signal is relatively weak, which affects measurement accuracy; in short-distance measurement, however, ultrasonic radar has a very large advantage. Owing to its limited measurement accuracy, ultrasonic radar cannot describe the specific position of an obstacle; for example, its field of view (FOV) is 45 degrees.
  • As long as an obstacle is anywhere within that range, the ultrasonic radar returns obstacle information, but the specific position of the obstacle within the detection sector cannot be determined, which may cause misjudgments and affect the driving of the unmanned vehicle. For example, an obstacle at the front side of the unmanned vehicle does not affect driving, but from the obstacle information returned by the ultrasonic radar it is impossible to tell whether the obstacle is at the front side or directly ahead, so the unmanned vehicle automatically stops to avoid a collision, affecting its normal driving.
  • Various aspects of this application provide an information processing method, system, equipment, and computer storage medium to reduce misjudgments caused by ultrasonic radar.
  • One aspect of this application provides an information processing method, including:
  • separately acquiring obstacle information collected by an ultrasonic radar and a lidar; and
  • fusing the obstacle information collected by the lidar with the obstacle information collected by the ultrasonic radar.
  • An implementation is further provided in which the ultrasonic radar is installed at the front of the body of an unmanned vehicle and is used to detect obstacle information ahead of and to the front side of the vehicle;
  • the lidar is installed at the front of the body of the unmanned vehicle and is used to detect obstacle information ahead of and to the front side of the vehicle.
  • The above aspects and any possible implementation manner further provide an implementation in which the fusion of obstacle information collected by the lidar and obstacle information collected by the ultrasonic radar includes:
  • unifying the coordinates in the lidar coordinate system and the coordinates in the ultrasonic radar coordinate system into a reference coordinate system;
  • superimposing the unified lidar coordinates and ultrasonic radar coordinates in a rasterized detection overlap area; and
  • fusing the superimposed lidar coordinates and ultrasonic radar coordinates.
  • the reference coordinate system is a geodetic coordinate system or a vehicle coordinate system.
  • the obstacle recognition results in the detection overlap area are rasterized, and the unified lidar coordinates and ultrasonic radar coordinates are superimposed on the grid.
  • The above aspects and any possible implementation manner further provide an implementation in which fusing the superimposed lidar coordinates and ultrasonic radar coordinates includes: for a grid cell in which both lidar coordinates and ultrasonic radar coordinates exist, determining that the cell is occupied; for a grid cell in which only ultrasonic radar coordinates exist, determining that the cell is not occupied.
  • An implementation is further provided in which the method further includes: outside the detection overlap area, performing obstacle recognition according to the lidar coordinates or the ultrasonic radar coordinates respectively.
  • An implementation is further provided in which the method further includes: determining a vehicle decision based on the fused obstacle information.
  • Another aspect of this application provides an information processing system, including:
  • an acquisition module, configured to separately acquire obstacle information collected by an ultrasonic radar and a lidar; and
  • a fusion module, configured to fuse the obstacle information collected by the lidar with the obstacle information collected by the ultrasonic radar.
  • An implementation is further provided in which the ultrasonic radar is installed at the front of the body of an unmanned vehicle and is used to detect obstacle information ahead of and to the front side of the vehicle;
  • the lidar is installed at the front of the body of the unmanned vehicle and is used to detect obstacle information ahead of and to the front side of the vehicle.
  • the fusion module includes:
  • the unification sub-module is used to unify the coordinates in the laser radar coordinate system and the coordinates in the ultrasonic radar coordinate system into the reference coordinate system;
  • the superposition sub-module is used to superimpose the unified lidar coordinates and ultrasonic radar coordinates in the rasterized detection overlap area;
  • the fusion sub-module is used to fuse the superimposed lidar coordinates and ultrasonic radar coordinates.
  • the reference coordinate system is a geodetic coordinate system or a vehicle coordinate system.
  • the obstacle recognition results in the detection overlap area are rasterized, and the unified lidar coordinates and ultrasonic radar coordinates are superimposed on the grid.
  • An implementation is further provided in which the fusion sub-module is specifically configured to: for a grid cell in which both lidar coordinates and ultrasonic radar coordinates exist, determine that the cell is occupied; for a grid cell in which only ultrasonic radar coordinates exist, determine that the cell is not occupied.
  • The system further includes a decision-making module for determining the vehicle decision based on the fused obstacle information.
  • Another aspect of the present invention provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor;
  • when the processor executes the program, the method described above is implemented.
  • Another aspect of the present invention provides a computer-readable storage medium having a computer program stored thereon, and when the program is executed by a processor, the method as described above is implemented.
  • By fusing the lidar coordinates with the ultrasonic radar coordinates, the embodiments of this application avoid the impact on unmanned-vehicle driving of the ultrasonic radar being able to determine only an obstacle's distance and not its direction.
  • Obstacle recognition accuracy is improved, and safe and smooth driving of unmanned vehicles is ensured.
  • FIG. 1 is a schematic flowchart of an information processing method provided by an embodiment of this application.
  • Fig. 2 is a schematic structural diagram of an information processing system provided by an embodiment of the application.
  • Figure 3 shows a block diagram of an exemplary computer system/server 012 suitable for implementing embodiments of the present invention.
  • FIG. 1 is a schematic diagram of an information processing method provided by an embodiment of this application. As shown in FIG. 1, it includes the following steps:
  • Step S11 Obtain obstacle information collected by ultrasonic radar and lidar respectively;
  • Step S12 fusing the obstacle information collected by the lidar and the obstacle information collected by the ultrasonic radar to determine the obstacle recognition result.
  • In a preferred implementation of step S11:
  • The lidar is a single-line lidar installed at the front of the unmanned vehicle body, for example at the center of the air-intake grille, at a height of about 40 cm. It has only one transmitter and one receiver, so its structure is relatively simple, it is easy to use, and its cost is low; it offers a high scanning speed, high angular resolution and sensitive ranging. For pedestrian detection, obstacle detection (small-target detection) and forward obstacle detection, a single-line lidar has considerable advantages over a multi-line lidar, because its angular resolution can be made higher, which is very useful for detecting small objects or pedestrians.
  • Because the angular resolution of a single-line lidar can be higher than that of a multi-line lidar, pedestrians can be detected earlier at a longer distance, leaving more warning time for the control system.
  • The detection area of the single-line lidar is 0.5-8 m ahead of and to the front side of the unmanned vehicle body.
  • The ultrasonic radars are symmetrically distributed, three on each of the left and right sides at the front of the vehicle, and their detection area is 0-3.5 m ahead of and to the front side of the unmanned vehicle body.
  • the electronic equipment on which the ultrasonic radar and lidar information fusion method runs can control the lidar and the ultrasonic radar through a wired connection or a wireless connection.
  • the trip computer or the vehicle-mounted terminal can control the laser radar to collect laser point cloud data in a certain area at a certain frequency, and control the ultrasonic radar to collect echo data in a certain area at a certain frequency.
  • the above-mentioned target area may be an area of an obstacle to be detected.
  • The wireless connection may include, but is not limited to, 3G/4G, WiFi, Bluetooth, WiMAX, Zigbee, UWB (ultra wideband) connections, and other wireless connections now known or developed in the future.
  • The lidar obtains lidar point cloud information of obstacles ahead of the vehicle.
  • The lidar rotates at a constant angular velocity, continuously emitting laser pulses and collecting reflection-point information so as to obtain all-round environmental information.
  • While collecting reflection-point distances, the lidar also records the time and horizontal angle of each point, and each laser transmitter has a number and a fixed vertical angle; from these data the coordinates of all reflection points can be calculated.
  • The set of all reflection-point coordinates collected in each revolution of the lidar forms a point cloud.
  • the preset point cloud recognition model can be various pre-trained algorithms that can recognize obstacles in the point cloud data, such as ICP algorithm (Iterative Closest Point, nearest point search method), random forest algorithm, etc.
  • Ultrasonic radar obtains the echo information of obstacles in front and side of the vehicle, and ultrasonic radar can obtain the echo information of obstacles within the range of 0-3.5m at a close distance.
  • After the laser point cloud information of obstacles ahead of and to the front side of the vehicle has been obtained by the lidar, and the distance information of obstacles ahead of and to the front side of the vehicle has been obtained by the ultrasonic radar, data fusion can be performed.
  • In a preferred implementation of step S12:
  • Sub-step S121: unifying the coordinates in the lidar coordinate system and the coordinates in the ultrasonic radar coordinate system into the reference coordinate system.
  • the coordinates in the laser radar coordinate system and the coordinates in the ultrasonic radar coordinate system can be uniformly converted to the geodetic coordinate system.
  • The initial spatial configuration of the lidar and the ultrasonic radars on the unmanned vehicle is known in advance and can be obtained from measurement data on the unmanned vehicle body; the coordinates of obstacles in each coordinate system are transformed into a consistent geodetic coordinate system.
  • the unmanned vehicle may further include an attitude positioning system for collecting position information and attitude information of the attitude positioning system, that is, its coordinates in the geodetic coordinate system.
  • the position information and attitude information of the attitude positioning system are used to obtain the space coordinate data of the obstacle in combination with the lidar coordinates, and the space distance data of the obstacle in combination with the ultrasonic radar coordinates.
  • the attitude positioning system may include a GPS positioning device and an IMU, which are respectively used to collect the position information and attitude information of the attitude positioning system.
  • The position information may include the center coordinates (x, y, z) of the attitude positioning system, and the attitude information may include its three attitude angles (ω, φ, κ).
  • the relative position between the attitude positioning system and the lidar is constant, so the position information and attitude information of the lidar can be determined according to the position information and attitude information of the attitude positioning system.
  • the three-dimensional laser scan data can then be corrected according to the position information and attitude information of the lidar to determine the spatial coordinate data of the obstacle.
  • The relative position between the attitude positioning system and each ultrasonic probe of the ultrasonic radar is constant, so the position information and attitude information of each ultrasonic probe can be determined from those of the attitude positioning system. The ultrasonic distance data can then be corrected according to the position and attitude information of each probe to determine the spatial distance data of the obstacle.
  • the lidar coordinates and ultrasonic radar coordinates may also be unified into the vehicle coordinate system.
  • the matrix relationship between the lidar coordinate system and the vehicle coordinates is calibrated through the initial space configuration of the lidar on the unmanned vehicle.
  • Matrix conversion is performed according to the relationship between the initial spatial configuration of each ultrasonic radar on the unmanned vehicle and the vehicle coordinate system.
  • Sub-step S122 superimpose the unified lidar coordinates and ultrasonic radar coordinates in the rasterized detection overlap area.
  • Because the detection areas of the lidar and the ultrasonic radar differ, the overlapping detection area of the two is fused in order to determine the obstacle recognition result.
  • Outside the detection overlap area, obstacle recognition is still performed according to the respective coordinates.
  • the detection overlap area is within 0.5-3.5 m in front and side front of the vehicle body, and the obstacle recognition result in the detection overlap area is rasterized, and the grid attributes are set.
  • the detection accuracy of the lidar is plus or minus 3 cm, and the distance accuracy of the ultrasonic radar detection is 10 cm.
  • set the grid size to be a unit grid of 20 cm ⁇ 20 cm.
  • the lidar coordinates and ultrasonic radar coordinates unified into the geodetic coordinate system or the vehicle coordinate system are superimposed on the grid.
  • Sub-step S123 fusing the superimposed lidar coordinates and ultrasonic radar coordinates to determine the obstacle recognition result.
  • The coordinates output by the ultrasonic radar are obstacle distance data; that is, the whole arc centered on the ultrasonic radar with the distance data as radius is recognized by the ultrasonic radar as an obstacle, whereas in fact the obstacle may be located at any one or more points on that arc. The lidar coordinates are therefore needed to determine where on the arc the obstacle actually is.
  • The following rule is adopted: a grid cell in which both lidar coordinates and ultrasonic radar coordinates exist is determined to be occupied; a grid cell in which only ultrasonic radar coordinates exist is determined to be unoccupied. This avoids the situation in which an obstacle beside the unmanned vehicle causes the whole arc to be output, leading the vehicle to believe there is an obstacle ahead and to brake or take evasive action. If the grid cells of the detection overlap area are unoccupied, the obstacle is considered to be located on the part of the arc outside the detection overlap area.
  • the method further includes the following step S13: determining the vehicle decision based on the fused obstacle information.
  • The vehicle decision is determined from the fused obstacle recognition result within the detection overlap area and the obstacle recognition result outside it. If there is an obstacle directly ahead of the vehicle, the vehicle is controlled to slow down; if there is an obstacle only at the front side of the vehicle, driving continues.
  • During driving of the unmanned vehicle, the obstacle information collected by the lidar is fused with the obstacle information collected by the ultrasonic radar to determine the obstacle recognition result only when the obstacle information returned by the ultrasonic radar affects the normal driving of the unmanned vehicle, so as to reduce the computational load of the system and improve the response speed.
  • FIG. 2 is a schematic structural diagram of an information processing system provided by an embodiment of the application, as shown in Figure 2, including:
  • the obtaining module 21 is used to obtain obstacle information collected by ultrasonic radar and lidar respectively;
  • the fusion module 22 is used for fusing the obstacle information collected by the laser radar and the obstacle information collected by the ultrasonic radar to determine the obstacle recognition result.
  • The lidar is a single-line lidar installed at the front of the unmanned vehicle body, for example at the center of the air-intake grille, at a height of about 40 cm. It has only one transmitter and one receiver, so its structure is relatively simple, it is easy to use, and its cost is low; it offers a high scanning speed, high angular resolution and sensitive ranging. For pedestrian detection, obstacle detection (small-target detection) and forward obstacle detection, a single-line lidar has considerable advantages over a multi-line lidar, because its angular resolution can be made higher, which is very useful for detecting small objects or pedestrians.
  • Because the angular resolution of a single-line lidar can be higher than that of a multi-line lidar, pedestrians can be detected earlier at a longer distance, leaving more warning time for the control system.
  • The detection area of the single-line lidar is 0.5-8 m ahead of and to the front side of the unmanned vehicle body.
  • The ultrasonic radars are symmetrically distributed, three on each of the left and right sides at the front of the vehicle, and their detection area is 0-3.5 m ahead of and to the front side of the unmanned vehicle body.
  • the electronic equipment on which the ultrasonic radar and lidar information fusion method runs can control the lidar and the ultrasonic radar through a wired connection or a wireless connection.
  • the trip computer or the vehicle-mounted terminal can control the laser radar to collect laser point cloud data in a certain area at a certain frequency, and control the ultrasonic radar to collect echo data in a certain area at a certain frequency.
  • the above-mentioned target area may be an area of an obstacle to be detected.
  • The wireless connection may include, but is not limited to, 3G/4G, WiFi, Bluetooth, WiMAX, Zigbee, UWB (ultra wideband) connections, and other wireless connections now known or developed in the future.
  • The lidar obtains lidar point cloud information of obstacles ahead of the vehicle.
  • The lidar rotates at a constant angular velocity, continuously emitting laser pulses and collecting reflection-point information so as to obtain all-round environmental information.
  • While collecting reflection-point distances, the lidar also records the time and horizontal angle of each point, and each laser transmitter has a number and a fixed vertical angle; from these data the coordinates of all reflection points can be calculated.
  • The set of all reflection-point coordinates collected in each revolution of the lidar forms a point cloud.
  • the preset point cloud recognition model may be various pre-trained algorithms capable of recognizing obstacles in point cloud data, for example, it may be an ICP algorithm (Iterative Closest Point), a random forest algorithm, and the like.
  • Ultrasonic radar obtains the echo information of obstacles in front and side of the vehicle, and ultrasonic radar can obtain the echo information of obstacles within the range of 0-3.5m at a close distance.
  • After the laser point cloud information of obstacles ahead of and to the front side of the vehicle has been obtained by the lidar, and the distance information of obstacles ahead of and to the front side of the vehicle has been obtained by the ultrasonic radar, data fusion can be performed.
  • the fusion module 22 includes the following sub-modules:
  • the unification sub-module is used to unify the coordinates in the laser radar coordinate system and the ultrasonic radar coordinate system into the reference coordinate system.
  • the coordinates in the laser radar coordinate system and the coordinates in the ultrasonic radar coordinate system can be uniformly converted to the geodetic coordinate system.
  • the initial spatial configuration of the lidar and ultrasonic radar on the unmanned vehicle is known in advance, and can be obtained based on the measurement data on the body of the unmanned vehicle. Transform the coordinates of obstacles in each coordinate system to a consistent geodetic coordinate system.
  • the unmanned vehicle may further include an attitude positioning system for collecting position information and attitude information of the attitude positioning system, that is, its coordinates in the geodetic coordinate system.
  • the position information and attitude information of the attitude positioning system are used to obtain the space coordinate data of the obstacle in combination with the lidar coordinates, and the space distance data of the obstacle in combination with the ultrasonic radar coordinates.
  • the attitude positioning system may include a GPS positioning device and an IMU, which are respectively used to collect the position information and attitude information of the attitude positioning system.
  • The position information may include the center coordinates (x, y, z) of the attitude positioning system, and the attitude information may include its three attitude angles (ω, φ, κ).
  • the relative position between the attitude positioning system and the lidar is constant, so the position information and attitude information of the lidar can be determined according to the position information and attitude information of the attitude positioning system.
  • the three-dimensional laser scan data can then be corrected according to the position information and attitude information of the lidar to determine the spatial coordinate data of the obstacle.
  • The relative position between the attitude positioning system and each ultrasonic probe of the ultrasonic radar is constant, so the position information and attitude information of each ultrasonic probe can be determined from those of the attitude positioning system. The ultrasonic distance data can then be corrected according to the position and attitude information of each probe to determine the spatial distance data of the obstacle.
  • the lidar coordinates and ultrasonic radar coordinates may also be unified into the vehicle coordinate system.
  • the matrix relationship between the lidar coordinate system and the vehicle coordinates is calibrated through the initial space configuration of the lidar on the unmanned vehicle.
  • Matrix conversion is performed according to the relationship between the initial spatial configuration of each ultrasonic radar on the unmanned vehicle and the vehicle coordinate system.
  • the superposition sub-module is used to superimpose the unified lidar coordinates and ultrasonic radar coordinates in the rasterized detection overlap area.
  • Because the detection areas of the lidar and the ultrasonic radar differ, the overlapping detection area of the two is fused in order to determine the obstacle recognition result.
  • Outside the detection overlap area, obstacle recognition is still performed according to the respective coordinates.
  • the detection overlap area is within 0.5-3.5 m in front and side front of the vehicle body, and the obstacle recognition result in the detection overlap area is rasterized, and the grid attributes are set.
  • the detection accuracy of the lidar is plus or minus 3 cm, and the distance accuracy of the ultrasonic radar detection is 10 cm.
  • set the grid size to be a unit grid of 20 cm ⁇ 20 cm.
  • the lidar coordinates and ultrasonic radar coordinates unified into the geodetic coordinate system or the vehicle coordinate system are superimposed on the grid.
  • the fusion sub-module is used to fuse the superimposed lidar coordinates and ultrasonic radar coordinates.
  • The coordinates output by the ultrasonic radar are obstacle distance data; that is, the whole arc centered on the ultrasonic radar with the distance data as radius is recognized by the ultrasonic radar as an obstacle, whereas in fact the obstacle may be located at any one or more points on that arc. The lidar coordinates are therefore needed to determine where on the arc the obstacle actually is.
  • The following rule is adopted: a grid cell in which both lidar coordinates and ultrasonic radar coordinates exist is determined to be occupied; a grid cell in which only ultrasonic radar coordinates exist is determined to be unoccupied. This avoids the situation in which an obstacle beside the unmanned vehicle causes the whole arc to be output, leading the vehicle to believe there is an obstacle ahead and to brake or take evasive action. If the grid cells of the detection overlap area are unoccupied, the obstacle is considered to be located on the part of the arc outside the detection overlap area.
  • The system further includes a decision-making module 23 for determining a vehicle decision based on the fused obstacle information.
  • The vehicle decision is determined from the fused obstacle recognition result within the detection overlap area and the obstacle recognition result outside it. If there is an obstacle directly ahead of the vehicle, the vehicle is controlled to decelerate; if there is an obstacle only at the front side of the vehicle, driving continues.
  • During driving of the unmanned vehicle, the obstacle information collected by the lidar is fused with the obstacle information collected by the ultrasonic radar to determine the obstacle recognition result only when the obstacle information returned by the ultrasonic radar affects the normal driving of the unmanned vehicle, so as to reduce the computational load of the system and improve the response speed.
  • the disclosed method and device can be implemented in other ways.
  • The device embodiments described above are merely illustrative.
  • The division into units is only a logical functional division; in actual implementation there may be other divisions, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • The mutual coupling or direct coupling or communication connections shown or discussed may be indirect coupling or communication connections through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the integrated unit may be implemented in the form of hardware, or may be implemented in the form of hardware plus software functional units.
  • FIG. 3 shows a block diagram of an exemplary computer system/server 012 suitable for implementing embodiments of the present invention.
  • the computer system/server 012 shown in FIG. 3 is only an example, and should not bring any limitation to the function and application scope of the embodiment of the present invention.
  • the computer system/server 012 is represented in the form of a general-purpose computing device.
  • the components of the computer system/server 012 may include, but are not limited to: one or more processors or processing units 016, a system memory 028, and a bus 018 connecting different system components (including the system memory 028 and the processing unit 016).
  • The bus 018 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures.
  • By way of example, these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
  • the computer system/server 012 typically includes a variety of computer system readable media. These media can be any available media that can be accessed by the computer system/server 012, including volatile and nonvolatile media, removable and non-removable media.
  • the system memory 028 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 030 and/or cache memory 032.
  • the computer system/server 012 may further include other removable/non-removable, volatile/nonvolatile computer system storage media.
  • the storage system 034 can be used to read and write to non-removable, non-volatile magnetic media (not shown in Figure 3, usually referred to as a "hard drive").
  • Although not shown in FIG. 3, a disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disc drive for reading from and writing to a removable non-volatile optical disc (e.g., CD-ROM, DVD-ROM or other optical media), may be provided.
  • each drive can be connected to the bus 018 through one or more data media interfaces.
  • the memory 028 may include at least one program product, and the program product has a set (for example, at least one) program modules, which are configured to perform the functions of the embodiments of the present invention.
  • A program/utility 040 having a set (at least one) of program modules 042 may be stored, for example, in the memory 028.
  • Such program modules 042 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.
  • The program modules 042 generally carry out the functions and/or methods of the described embodiments of the present invention.
  • the computer system/server 012 can also communicate with one or more external devices 014 (such as a keyboard, pointing device, display 024, etc.).
  • The computer system/server 012 communicates with an external radar device, and may also communicate with one or more devices that enable a user to interact with the computer system/server 012, and/or with any device (e.g., a network card, modem, etc.) that enables the computer system/server 012 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 022.
  • the computer system/server 012 can also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN), and/or a public network, such as the Internet) through the network adapter 020.
  • the network adapter 020 communicates with other modules of the computer system/server 012 through the bus 018.
  • Other hardware and/or software modules may be used in conjunction with the computer system/server 012, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
  • the processing unit 016 executes the functions and/or methods in the described embodiments of the present invention by running a program stored in the system memory 028.
  • The above computer program may be provided in a computer storage medium; that is, the computer storage medium is encoded with a computer program which, when executed by one or more computers, causes the one or more computers to perform the method flows and/or apparatus operations shown in the above embodiments of the present invention.
  • the computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.
  • The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above.
  • More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection with one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • the computer-readable storage medium may be any tangible medium containing or storing a program, which may be used by or in combination with an instruction execution system, apparatus, or device.
  • The computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above.
  • The computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate or transmit a program for use by or in connection with an instruction execution system, apparatus or device.
  • the program code contained on the computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wire, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • the computer program code for performing the operations of the present invention can be written in one or more programming languages or a combination thereof.
  • The programming languages include object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • The remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).

Abstract

An information processing method, system, device and computer storage medium. The method includes: separately acquiring obstacle information collected by an ultrasonic radar and a lidar (S11); and fusing the obstacle information collected by the lidar with the obstacle information collected by the ultrasonic radar (S12). This avoids the problem that, based only on the obstacle information returned by the ultrasonic radar, it is impossible to tell whether an obstacle is at the front side or directly ahead, so that the unmanned vehicle would automatically stop to avoid a collision and its normal driving would be affected; obstacle recognition accuracy is improved, and safe and smooth driving of the unmanned vehicle is ensured.

Description

Information processing method, system, device and computer storage medium
This application claims priority to Chinese patent application No. 201910034417.2, filed on January 15, 2019, entitled "Ultrasonic radar and lidar information fusion method and system".
Technical field
This application relates to the field of automatic control, and in particular to an information processing method, system, device and computer storage medium.
Background
Unmanned vehicles integrate multiple types of sensors: a GPS-IMU (Inertial Measurement Unit) integrated navigation module, cameras, lidar, millimeter-wave radar, and so on.
Different types of sensors have different strengths and weaknesses. For example, a lidar installed at the center front end of an unmanned vehicle has a wide detection range, higher-precision sensing data and a long longitudinal detection distance, but because of the laser ranging principle it has a detection blind zone at close range. To cover this blind zone, the industry generally installs ultrasonic radar and a forward lidar at the front bumper of the vehicle body. When an ultrasonic radar measures a distant target, its echo signal is relatively weak, which affects measurement accuracy; in short-distance measurement, however, ultrasonic radar has a very large advantage. Owing to its limited measurement accuracy, ultrasonic radar cannot describe the specific position of an obstacle. For example, with a field of view (FOV) of 45 degrees, the ultrasonic radar returns obstacle information whenever an obstacle is anywhere in that range, but the specific position of the obstacle within the detection sector cannot be determined, which may cause misjudgments and affect the driving of the unmanned vehicle. For example, an obstacle at the front side of the unmanned vehicle does not affect driving, but from the obstacle information returned by the ultrasonic radar it is impossible to tell whether the obstacle is at the front side or directly ahead, so the unmanned vehicle automatically stops to avoid a collision, affecting its normal driving.
Summary of the invention
Aspects of this application provide an information processing method, system, device and computer storage medium, to reduce misjudgments caused by ultrasonic radar.
One aspect of this application provides an information processing method, including:
separately acquiring obstacle information collected by an ultrasonic radar and a lidar;
fusing the obstacle information collected by the lidar with the obstacle information collected by the ultrasonic radar.
In the above aspect and any possible implementation, an implementation is further provided in which the ultrasonic radar is installed at the front of the body of an unmanned vehicle and is used to detect obstacle information ahead of and to the front side of the vehicle;
the lidar is installed at the front of the body of the unmanned vehicle and is used to detect obstacle information ahead of and to the front side of the vehicle.
In the above aspect and any possible implementation, an implementation is further provided in which fusing the obstacle information collected by the lidar with the obstacle information collected by the ultrasonic radar includes:
unifying the coordinates in the lidar coordinate system and the coordinates in the ultrasonic radar coordinate system into a reference coordinate system;
superimposing the unified lidar coordinates and ultrasonic radar coordinates in a rasterized detection overlap area;
fusing the superimposed lidar coordinates and ultrasonic radar coordinates.
In the above aspect and any possible implementation, an implementation is further provided in which the reference coordinate system is a geodetic coordinate system or a vehicle coordinate system.
In the above aspect and any possible implementation, an implementation is further provided in which superimposing the unified lidar coordinates and ultrasonic radar coordinates in the rasterized detection overlap area includes:
rasterizing the obstacle recognition results within the detection overlap area, and superimposing the unified lidar coordinates and ultrasonic radar coordinates in the grid.
In the above aspect and any possible implementation, an implementation is further provided in which fusing the superimposed lidar coordinates and ultrasonic radar coordinates includes:
for a grid cell in which both lidar coordinates and ultrasonic radar coordinates exist, determining that the cell is occupied; for a grid cell in which only ultrasonic radar coordinates exist, determining that the cell is not occupied.
In the above aspect and any possible implementation, an implementation is further provided in which the method further includes:
outside the detection overlap area, performing obstacle recognition according to the lidar coordinates or the ultrasonic radar coordinates respectively.
In the above aspect and any possible implementation, an implementation is further provided in which the method further includes:
determining a vehicle decision based on the fused obstacle information.
Another aspect of this application discloses an information processing system, including:
an acquisition module, configured to separately acquire obstacle information collected by an ultrasonic radar and a lidar;
a fusion module, configured to fuse the obstacle information collected by the lidar with the obstacle information collected by the ultrasonic radar.
In the above aspect and any possible implementation, an implementation is further provided in which the ultrasonic radar is installed at the front of the body of an unmanned vehicle and is used to detect obstacle information ahead of and to the front side of the vehicle;
the lidar is installed at the front of the body of the unmanned vehicle and is used to detect obstacle information ahead of and to the front side of the vehicle.
In the above aspect and any possible implementation, an implementation is further provided in which the fusion module includes:
a unification sub-module, configured to unify the coordinates in the lidar coordinate system and the coordinates in the ultrasonic radar coordinate system into a reference coordinate system;
a superposition sub-module, configured to superimpose the unified lidar coordinates and ultrasonic radar coordinates in a rasterized detection overlap area;
a fusion sub-module, configured to fuse the superimposed lidar coordinates and ultrasonic radar coordinates.
In the above aspect and any possible implementation, an implementation is further provided in which the reference coordinate system is a geodetic coordinate system or a vehicle coordinate system.
In the above aspect and any possible implementation, an implementation is further provided in which the superposition sub-module is specifically configured to:
rasterize the obstacle recognition results within the detection overlap area, and superimpose the unified lidar coordinates and ultrasonic radar coordinates in the grid.
In the above aspect and any possible implementation, an implementation is further provided in which the fusion sub-module is specifically configured to:
for a grid cell in which both lidar coordinates and ultrasonic radar coordinates exist, determine that the cell is occupied; for a grid cell in which only ultrasonic radar coordinates exist, determine that the cell is not occupied.
In the above aspect and any possible implementation, an implementation is further provided in which the system is further configured to:
outside the detection overlap area, perform obstacle recognition according to the lidar coordinates or the ultrasonic radar coordinates respectively.
In the above aspect and any possible implementation, an implementation is further provided in which the system further includes a decision module, configured to determine a vehicle decision based on the fused obstacle information.
Another aspect of the present invention provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor; when the processor executes the program, the method described above is implemented.
Another aspect of the present invention provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the method described above is implemented.
It can be seen from the technical solutions that, by fusing lidar coordinates with ultrasonic radar coordinates, the embodiments of this application avoid the impact on unmanned-vehicle driving of the ultrasonic radar being able to determine only an obstacle's distance and not its direction. Obstacle recognition accuracy is improved, and safe and smooth driving of the unmanned vehicle is ensured.
Brief description of the drawings
To describe the technical solutions in the embodiments of this application more clearly, the accompanying drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of this application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of an information processing method provided by an embodiment of this application;
FIG. 2 is a schematic structural diagram of an information processing system provided by an embodiment of this application;
FIG. 3 shows a block diagram of an exemplary computer system/server 012 suitable for implementing embodiments of the present invention.
Detailed description
To make the objectives, technical solutions and advantages of the embodiments of this application clearer, the technical solutions in the embodiments of this application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.
FIG. 1 is a schematic diagram of an information processing method provided by an embodiment of this application. As shown in FIG. 1, it includes the following steps:
Step S11: separately acquiring obstacle information collected by an ultrasonic radar and a lidar;
Step S12: fusing the obstacle information collected by the lidar with the obstacle information collected by the ultrasonic radar, to determine an obstacle recognition result.
In a preferred implementation of step S11:
In this embodiment, the lidar is a single-line lidar installed at the front of the unmanned vehicle body, for example at the center of the air-intake grille, at a height of about 40 cm. It has only one transmitter and one receiver, so its structure is relatively simple, it is easy to use, and its cost is low; it offers a high scanning speed, high angular resolution and sensitive ranging. For pedestrian detection, obstacle detection (small-target detection) and forward obstacle detection, a single-line lidar has considerable advantages over a multi-line lidar, because its angular resolution can be made higher, which is very useful for detecting small objects or pedestrians. Since the angular resolution of a single-line lidar can be higher than that of a multi-line lidar, pedestrians can be detected earlier at a longer distance, leaving more warning time for the control system. The detection area of the single-line lidar is 0.5-8 m ahead of and to the front side of the unmanned vehicle body. The ultrasonic radars are symmetrically distributed, three on each of the left and right sides at the front of the vehicle, and their detection area is 0-3.5 m ahead of and to the front side of the unmanned vehicle body.
In this embodiment, the electronic device on which the ultrasonic radar and lidar information fusion method runs (for example, the trip computer or an in-vehicle terminal of the vehicle) can control the lidar and the ultrasonic radar through a wired or wireless connection. Specifically, the trip computer or in-vehicle terminal can control the lidar to collect laser point cloud data in a certain area at a certain frequency, and control the ultrasonic radar to collect echo data in a certain area at a certain frequency. The above target area may be the area of an obstacle to be detected.
It should be noted that the wireless connection may include, but is not limited to, 3G/4G, WiFi, Bluetooth, WiMAX, Zigbee, UWB (ultra wideband) connections, and other wireless connections now known or developed in the future.
The lidar can acquire obstacle point cloud information within the range of 0.5-8 m ahead, updating in real time the position and distance of obstacles entering the detection area.
The lidar obtains lidar point cloud information of obstacles ahead of the vehicle. The lidar rotates at a constant angular velocity, continuously emitting laser pulses and collecting reflection-point information so as to obtain all-round environmental information. While collecting reflection-point distances, the lidar also records the time and horizontal angle of each point, and each laser transmitter has a number and a fixed vertical angle; from these data the coordinates of all reflection points can be calculated. The set of all reflection-point coordinates collected in each revolution of the lidar forms a point cloud.
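As an illustration of this step, the following is a minimal Python sketch, not part of the patent, of how a reflection point's Cartesian coordinates can be computed from the measured range, the recorded horizontal angle, and the emitter's fixed vertical angle; the function name and frame convention (x forward, y left, z up) are assumptions made here for clarity.

    import math

    def reflection_point(range_m, horizontal_deg, vertical_deg):
        # Convert one lidar return into Cartesian coordinates in the
        # sensor frame. A single-line lidar has one fixed vertical angle
        # (0 degrees in this sketch).
        h = math.radians(horizontal_deg)
        v = math.radians(vertical_deg)
        return (range_m * math.cos(v) * math.cos(h),
                range_m * math.cos(v) * math.sin(h),
                range_m * math.sin(v))

    # The set of points from one revolution forms the point cloud.
    cloud = [reflection_point(r, h, 0.0) for r, h in [(5.0, 10.0), (6.2, 11.0)]]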
A filter is used to remove interference from the laser point cloud, and target detection is performed by pattern clustering analysis according to the shape and spatial position features of the target; by adjusting the distance threshold, the sub-groups produced by clustering are re-merged, and new cluster centers are determined to localize the target and obtain the target coordinates. A simplified sketch of such a clustering step is given below.
Alternatively, after obstacles in the point cloud data are recognized with a preset point cloud recognition model, information about the obstacles is obtained, including distance, azimuth, height, speed, attitude, shape and other parameters, thereby obtaining the coordinate information of obstacles ahead of and to the front side of the unmanned vehicle. The preset point cloud recognition model may be any of various pre-trained algorithms capable of recognizing obstacles in point cloud data, for example the ICP (Iterative Closest Point) algorithm, a random forest algorithm, and so on.
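The sketch below is a greedy single-link grouping, a simplification of the pattern-clustering analysis described above, with the distance threshold as the adjustable parameter; all names are illustrative and nothing here is prescribed by the patent.

    import math

    def cluster_points(points, dist_threshold=0.3):
        # A point joins the first cluster that already contains a point
        # within dist_threshold, otherwise it starts a new cluster.
        # Raising the threshold re-merges nearby sub-groups.
        clusters = []
        for p in points:
            for c in clusters:
                if any(math.dist(p, q) <= dist_threshold for q in c):
                    c.append(p)
                    break
            else:
                clusters.append([p])
        return clusters

    def cluster_center(cluster):
        # The cluster center serves as the target coordinate.
        n = len(cluster)
        return tuple(sum(coord) / n for coord in zip(*cluster))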
The ultrasonic radar obtains echo information of obstacles ahead of and to the front side of the vehicle, and can acquire obstacle echo information within a short range of 0-3.5 m. The echo information is the difference t between the moment the ultrasonic wave is emitted and the moment the reflected wave is received; from this difference t, the distance from the ultrasonic radar to the obstacle can be computed as s = 340t/2. Obstacle distance information is thereby obtained.
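A sketch of the s = 340t/2 relation, using 340 m/s as the speed of sound as in the formula above; the names are illustrative.

    SPEED_OF_SOUND = 340.0  # m/s, as used in the formula above

    def echo_distance(t_seconds):
        # The echo time t covers the round trip, hence s = 340 * t / 2.
        return SPEED_OF_SOUND * t_seconds / 2.0

    print(echo_distance(0.01))  # a 10 ms round trip corresponds to 1.7 m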
After the laser point cloud information of obstacles ahead of and to the front side of the vehicle has been obtained by the lidar, and the distance information of obstacles ahead of and to the front side of the vehicle has been obtained by the ultrasonic radar, data fusion can be performed.
In a preferred implementation of step S12,
the following sub-steps are included:
Sub-step S121: unifying the coordinates in the lidar coordinate system and the coordinates in the ultrasonic radar coordinate system into a reference coordinate system.
Because the lidar and the multiple ultrasonic radar sensors are installed at different positions, a reference coordinate system must be chosen into which the coordinates in the lidar coordinate system and in each ultrasonic radar coordinate system are converted. In this embodiment, the coordinates in the lidar coordinate system and in the ultrasonic radar coordinate systems may be uniformly converted into the geodetic coordinate system.
The initial spatial configuration of the lidar and the ultrasonic radars on the unmanned vehicle is known in advance and can be obtained from measurement data on the unmanned vehicle body. The coordinates of obstacles in each coordinate system are transformed into a consistent geodetic coordinate system.
Preferably, the unmanned vehicle may further include an attitude positioning system for collecting the position information and attitude information of the attitude positioning system, that is, its coordinates in the geodetic coordinate system. The position and attitude information of the attitude positioning system is combined with the lidar coordinates to obtain the spatial coordinate data of obstacles, and with the ultrasonic radar coordinates to obtain the spatial distance data of obstacles.
Exemplarily, the attitude positioning system may include a GPS positioning device and an IMU, used respectively to collect its position information and attitude information. The position information may include the center coordinates (x, y, z) of the attitude positioning system, and the attitude information may include its three attitude angles (ω, φ, κ). The relative position between the attitude positioning system and the lidar is constant, so the position and attitude information of the lidar can be determined from those of the attitude positioning system; the three-dimensional laser scan data can then be corrected accordingly to determine the spatial coordinate data of obstacles. Likewise, the relative position between the attitude positioning system and each ultrasonic probe of the ultrasonic radar is constant, so the position and attitude information of each ultrasonic probe can be determined from those of the attitude positioning system, and the ultrasonic distance data can then be corrected accordingly to determine the spatial distance data of obstacles.
Through the above transformations, the lidar coordinates and the ultrasonic radar coordinates are unified, laying the foundation for coordinate fusion.
In a preferred implementation of this embodiment, the lidar coordinates and the ultrasonic radar coordinates may instead be unified into the vehicle coordinate system. This includes a lidar point cloud coordinate transformation: the matrix relationship between the lidar coordinate system and the vehicle coordinates is calibrated from the initial spatial configuration of the lidar on the unmanned vehicle. As installed, there is an angular deviation in three-dimensional space between the lidar coordinates and the vehicle coordinates, which must be converted through a correction matrix. Matrix conversion is likewise performed for each ultrasonic radar according to the relationship between its initial spatial configuration on the unmanned vehicle and the vehicle coordinate system.
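A minimal sketch of this matrix conversion follows, assuming an illustrative 4x4 extrinsic calibration matrix for the lidar; the numeric values are placeholders, not calibration values from the patent, and a real matrix would also encode the angular correction mentioned above. Each ultrasonic radar would be handled with its own matrix in the same way.

    import numpy as np

    # Illustrative extrinsic matrix: lidar frame -> vehicle frame.
    # Placeholder values: lidar mounted 1.8 m ahead of the vehicle
    # origin and about 0.4 m above it, with no rotation.
    T_vehicle_from_lidar = np.array([
        [1.0, 0.0, 0.0, 1.8],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.4],
        [0.0, 0.0, 0.0, 1.0],
    ])

    def to_vehicle_frame(points_sensor, T):
        # Apply one rigid-body calibration matrix to an (N, 3) point array.
        ones = np.ones((len(points_sensor), 1))
        homogeneous = np.hstack([points_sensor, ones])
        return (T @ homogeneous.T).T[:, :3]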
Sub-step S122: superimposing the unified lidar coordinates and ultrasonic radar coordinates in the rasterized detection overlap area.
Because the detection areas of the lidar and the ultrasonic radar differ, the overlapping detection area of the two is fused in order to determine the obstacle recognition result.
Preferably, outside the detection overlap area, obstacle recognition is still performed according to the respective coordinates.
Preferably, the detection overlap area is within 0.5-3.5 m ahead of and to the front side of the vehicle body. The obstacle recognition results within the detection overlap area are rasterized, and the grid attributes are set. Preferably, since the detection accuracy of the lidar is plus or minus 3 cm and the distance accuracy of ultrasonic radar detection is 10 cm, and taking the total number of cells into account, the grid size is set to a unit cell of 20 cm × 20 cm.
The lidar coordinates and ultrasonic radar coordinates, unified into the geodetic coordinate system or the vehicle coordinate system, are superimposed in the grid.
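A sketch of the rasterization and superposition, using the 20 cm × 20 cm unit cell set above; the cell-index scheme and the sensor labels are illustrative assumptions, not part of the patent.

    CELL = 0.20  # the 20 cm x 20 cm unit cell set above

    def to_cell(x, y):
        # Map a unified (x, y) coordinate to a grid cell index.
        return (int(x // CELL), int(y // CELL))

    def superimpose(lidar_xy, ultrasonic_xy):
        # Record, per cell of the rasterized overlap area, which sensors
        # reported an obstacle coordinate falling in that cell.
        grid = {}
        for x, y in lidar_xy:
            grid.setdefault(to_cell(x, y), set()).add("lidar")
        for x, y in ultrasonic_xy:
            grid.setdefault(to_cell(x, y), set()).add("ultrasonic")
        return grid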
Sub-step S123: fusing the superimposed lidar coordinates and ultrasonic radar coordinates to determine the obstacle recognition result.
In this embodiment, consider that the coordinates output by the ultrasonic radar are obstacle distance data; that is, the whole arc centered on the ultrasonic radar with the distance data as radius is recognized by the ultrasonic radar as an obstacle, whereas in fact the obstacle may be located at any one or more points on that arc. The lidar coordinates are therefore needed to determine where on the arc the obstacle actually is.
Preferably, the following rule is adopted: a grid cell in which both lidar coordinates and ultrasonic radar coordinates exist is determined to be occupied; a grid cell in which only ultrasonic radar coordinates exist is determined to be unoccupied. This avoids the situation in which an obstacle beside the unmanned vehicle causes the whole arc to be output, leading the vehicle to believe there is an obstacle ahead and to brake or take evasive action. If the grid cells of the detection overlap area are unoccupied, the obstacle is considered to be located on the part of the arc outside the detection overlap area.
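The rule can then be applied cell by cell, as in the following sketch. The handling of cells seen only by the lidar is not spelled out in the description, so treating them as occupied is an assumption noted in the comments.

    def fuse(grid):
        # Apply the stated occupancy rule to the superimposed grid.
        occupied = {}
        for cell, sensors in grid.items():
            if {"lidar", "ultrasonic"} <= sensors:
                occupied[cell] = True   # both sensors present: occupied
            elif sensors == {"ultrasonic"}:
                occupied[cell] = False  # ultrasonic only: not occupied
            else:
                occupied[cell] = True   # lidar only: not covered by the
                                        # stated rule; trusting the lidar
                                        # here is an assumption
        return occupied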
Preferably, the method further includes the following step S13: determining a vehicle decision based on the fused obstacle information.
Preferably, the vehicle decision is determined from the fused obstacle recognition result within the detection overlap area and the obstacle recognition result outside it. If there is an obstacle ahead of the vehicle, the vehicle is controlled to decelerate; if there is an obstacle at the front side of the vehicle, driving continues.
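A sketch of this decision step, assuming the fused occupancy grid from the previous sketch and an illustrative set of cells covering the lane directly ahead; both are assumptions for illustration only.

    def decide(occupied, front_cells):
        # Decelerate for an obstacle directly ahead; an obstacle that is
        # only at the front side leaves front_cells unoccupied, so the
        # vehicle continues driving.
        if any(occupied.get(cell, False) for cell in front_cells):
            return "decelerate"
        return "continue"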
The solution of this embodiment avoids the problem that, based only on the obstacle information returned by the ultrasonic radar, it is impossible to tell whether an obstacle is at the front side or directly ahead, so that the unmanned vehicle would automatically stop to avoid a collision and its normal driving would be affected; obstacle recognition accuracy is improved, and safe and smooth driving of the unmanned vehicle is ensured.
In a preferred implementation of this embodiment, during driving of the unmanned vehicle, the obstacle information collected by the lidar is fused with the obstacle information collected by the ultrasonic radar to determine the obstacle recognition result only when the obstacle information returned by the ultrasonic radar would affect the normal driving of the unmanned vehicle, so as to reduce the computational load of the system and improve the response speed.
It should be noted that, for brevity, the foregoing method embodiments are described as a series of action combinations, but a person skilled in the art should know that this application is not limited by the described order of actions, because according to this application some steps may be performed in other orders or simultaneously. Further, a person skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by this application.
The above describes the method embodiments; the solution of the present invention is further described below through a device embodiment.
FIG. 2 is a schematic structural diagram of an information processing system provided by an embodiment of this application. As shown in FIG. 2, it includes:
an acquisition module 21, configured to separately acquire obstacle information collected by an ultrasonic radar and a lidar;
a fusion module 22, configured to fuse the obstacle information collected by the lidar with the obstacle information collected by the ultrasonic radar, to determine an obstacle recognition result.
In a preferred implementation of the acquisition module 21:
In this embodiment, the lidar is a single-line lidar installed at the front of the unmanned vehicle body, for example at the center of the air-intake grille, at a height of about 40 cm. It has only one transmitter and one receiver, so its structure is relatively simple, it is easy to use, and its cost is low; it offers a high scanning speed, high angular resolution and sensitive ranging. For pedestrian detection, obstacle detection (small-target detection) and forward obstacle detection, a single-line lidar has considerable advantages over a multi-line lidar, because its angular resolution can be made higher, which is very useful for detecting small objects or pedestrians. Since the angular resolution of a single-line lidar can be higher than that of a multi-line lidar, pedestrians can be detected earlier at a longer distance, leaving more warning time for the control system. The detection area of the single-line lidar is 0.5-8 m ahead of and to the front side of the unmanned vehicle body. The ultrasonic radars are symmetrically distributed, three on each of the left and right sides at the front of the vehicle, and their detection area is 0-3.5 m ahead of and to the front side of the unmanned vehicle body.
In this embodiment, the electronic device on which the ultrasonic radar and lidar information fusion method runs (for example, the trip computer or an in-vehicle terminal of the vehicle) can control the lidar and the ultrasonic radar through a wired or wireless connection. Specifically, the trip computer or in-vehicle terminal can control the lidar to collect laser point cloud data in a certain area at a certain frequency, and control the ultrasonic radar to collect echo data in a certain area at a certain frequency. The above target area may be the area of an obstacle to be detected.
It should be noted that the wireless connection may include, but is not limited to, 3G/4G, WiFi, Bluetooth, WiMAX, Zigbee, UWB (ultra wideband) connections, and other wireless connections now known or developed in the future.
The lidar can acquire obstacle point cloud information within the range of 0.5-8 m ahead, updating in real time the position and distance of obstacles entering the detection area.
The lidar obtains lidar point cloud information of obstacles ahead of the vehicle. The lidar rotates at a constant angular velocity, continuously emitting laser pulses and collecting reflection-point information so as to obtain all-round environmental information. While collecting reflection-point distances, the lidar also records the time and horizontal angle of each point, and each laser transmitter has a number and a fixed vertical angle; from these data the coordinates of all reflection points can be calculated. The set of all reflection-point coordinates collected in each revolution of the lidar forms a point cloud.
A filter is used to remove interference from the laser point cloud, and target detection is performed by pattern clustering analysis according to the shape and spatial position features of the target; by adjusting the distance threshold, the sub-groups produced by clustering are re-merged, and new cluster centers are determined to localize the target and obtain the target coordinates.
Alternatively, after obstacles in the point cloud data are recognized with a preset point cloud recognition model, information about the obstacles is obtained, including distance, azimuth, height, speed, attitude, shape and other parameters, thereby obtaining the coordinate information of obstacles ahead of and to the front side of the unmanned vehicle. The preset point cloud recognition model may be any of various pre-trained algorithms capable of recognizing obstacles in point cloud data, for example the ICP (Iterative Closest Point) algorithm, a random forest algorithm, and so on.
The ultrasonic radar obtains echo information of obstacles ahead of and to the front side of the vehicle, and can acquire obstacle echo information within a short range of 0-3.5 m. The echo information is the difference t between the moment the ultrasonic wave is emitted and the moment the reflected wave is received; from this difference t, the distance from the ultrasonic radar to the obstacle can be computed as s = 340t/2. Obstacle distance information is thereby obtained.
After the laser point cloud information of obstacles ahead of and to the front side of the vehicle has been obtained by the lidar, and the distance information of obstacles ahead of and to the front side of the vehicle has been obtained by the ultrasonic radar, data fusion can be performed.
In a preferred implementation of the fusion module 22,
the fusion module 22 includes the following sub-modules:
a unification sub-module, configured to unify the coordinates in the lidar coordinate system and the coordinates in the ultrasonic radar coordinate system into a reference coordinate system.
Because the lidar and the multiple ultrasonic radar sensors are installed at different positions, a reference coordinate system must be chosen into which the coordinates in the lidar coordinate system and in each ultrasonic radar coordinate system are converted. In this embodiment, the coordinates in the lidar coordinate system and in the ultrasonic radar coordinate systems may be uniformly converted into the geodetic coordinate system.
The initial spatial configuration of the lidar and the ultrasonic radars on the unmanned vehicle is known in advance and can be obtained from measurement data on the unmanned vehicle body. The coordinates of obstacles in each coordinate system are transformed into a consistent geodetic coordinate system.
Preferably, the unmanned vehicle may further include an attitude positioning system for collecting the position information and attitude information of the attitude positioning system, that is, its coordinates in the geodetic coordinate system. The position and attitude information of the attitude positioning system is combined with the lidar coordinates to obtain the spatial coordinate data of obstacles, and with the ultrasonic radar coordinates to obtain the spatial distance data of obstacles.
Exemplarily, the attitude positioning system may include a GPS positioning device and an IMU, used respectively to collect its position information and attitude information. The position information may include the center coordinates (x, y, z) of the attitude positioning system, and the attitude information may include its three attitude angles (ω, φ, κ). The relative position between the attitude positioning system and the lidar is constant, so the position and attitude information of the lidar can be determined from those of the attitude positioning system; the three-dimensional laser scan data can then be corrected accordingly to determine the spatial coordinate data of obstacles. Likewise, the relative position between the attitude positioning system and each ultrasonic probe of the ultrasonic radar is constant, so the position and attitude information of each ultrasonic probe can be determined from those of the attitude positioning system, and the ultrasonic distance data can then be corrected accordingly to determine the spatial distance data of obstacles.
Through the above transformations, the lidar coordinates and the ultrasonic radar coordinates are unified, laying the foundation for coordinate fusion.
In a preferred implementation of this embodiment, the lidar coordinates and the ultrasonic radar coordinates may instead be unified into the vehicle coordinate system. This includes a lidar point cloud coordinate transformation: the matrix relationship between the lidar coordinate system and the vehicle coordinates is calibrated from the initial spatial configuration of the lidar on the unmanned vehicle. As installed, there is an angular deviation in three-dimensional space between the lidar coordinates and the vehicle coordinates, which must be converted through a correction matrix. Matrix conversion is likewise performed for each ultrasonic radar according to the relationship between its initial spatial configuration on the unmanned vehicle and the vehicle coordinate system.
a superposition sub-module, configured to superimpose the unified lidar coordinates and ultrasonic radar coordinates in the rasterized detection overlap area.
Because the detection areas of the lidar and the ultrasonic radar differ, the overlapping detection area of the two is fused in order to determine the obstacle recognition result.
Preferably, outside the detection overlap area, obstacle recognition is still performed according to the respective coordinates.
Preferably, the detection overlap area is within 0.5-3.5 m ahead of and to the front side of the vehicle body. The obstacle recognition results within the detection overlap area are rasterized, and the grid attributes are set. Preferably, since the detection accuracy of the lidar is plus or minus 3 cm and the distance accuracy of ultrasonic radar detection is 10 cm, and taking the total number of cells into account, the grid size is set to a unit cell of 20 cm × 20 cm.
The lidar coordinates and ultrasonic radar coordinates, unified into the geodetic coordinate system or the vehicle coordinate system, are superimposed in the grid.
a fusion sub-module, configured to fuse the superimposed lidar coordinates and ultrasonic radar coordinates.
In this embodiment, consider that the coordinates output by the ultrasonic radar are obstacle distance data; that is, the whole arc centered on the ultrasonic radar with the distance data as radius is recognized by the ultrasonic radar as an obstacle, whereas in fact the obstacle may be located at any one or more points on that arc. The lidar coordinates are therefore needed to determine where on the arc the obstacle actually is.
Preferably, the following rule is adopted: a grid cell in which both lidar coordinates and ultrasonic radar coordinates exist is determined to be occupied; a grid cell in which only ultrasonic radar coordinates exist is determined to be unoccupied. This avoids the situation in which an obstacle beside the unmanned vehicle causes the whole arc to be output, leading the vehicle to believe there is an obstacle ahead and to brake or take evasive action. If the grid cells of the detection overlap area are unoccupied, the obstacle is considered to be located on the part of the arc outside the detection overlap area.
Preferably, the system further includes a decision module 23, configured to determine a vehicle decision based on the fused obstacle information.
Preferably, the vehicle decision is determined from the fused obstacle recognition result within the detection overlap area and the obstacle recognition result outside it. If there is an obstacle ahead of the vehicle, the vehicle is controlled to decelerate; if there is an obstacle at the front side of the vehicle, driving continues.
The solution of this embodiment avoids the problem that, based only on the obstacle information returned by the ultrasonic radar, it is impossible to tell whether an obstacle is at the front side or directly ahead, so that the unmanned vehicle would automatically stop to avoid a collision and its normal driving would be affected; obstacle recognition accuracy is improved, and safe and smooth driving of the unmanned vehicle is ensured.
In a preferred implementation of this embodiment, during driving of the unmanned vehicle, the obstacle information collected by the lidar is fused with the obstacle information collected by the ultrasonic radar to determine the obstacle recognition result only when the obstacle information returned by the ultrasonic radar would affect the normal driving of the unmanned vehicle, so as to reduce the computational load of the system and improve the response speed.
In the described embodiments, the description of each embodiment has its own emphasis; for parts not detailed in one embodiment, reference may be made to the relevant descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed method and device may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division into units is only a logical functional division, and in actual implementation there may be other divisions, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. Further, the mutual coupling or direct coupling or communication connections shown or discussed may be indirect coupling or communication connections through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
图3示出了适于用来实现本发明实施方式的示例性计算机系统/服务器012的框图。图3显示的计算机系统/服务器012仅仅是一个示例,不应对本发明实施例的功能和使用范围带来任何限制。
如图3所示,计算机系统/服务器012以通用计算设备的形式表现。计算机系统/服务器012的组件可以包括但不限于:一个或者多个处理器或者处理单元016,系统存储器028,连接不同系统组件(包括系统存储器028和处理单元016)的总线018。
总线018表示几类总线结构中的一种或多种,包括存储器总线或者存储器控制器,外围总线,图形加速端口,处理器或者使用多种总线结构中的任意总线结构的局域总线。举例来说,这些体系结构包括但不限于工业标准体系结构(ISA)总线,微通道体系结构(MAC)总线,增强型ISA总线、视频电子标准协会(VESA)局域总线以及外围组件互连(PCI)总线。
计算机系统/服务器012典型地包括多种计算机系统可读介质。这些介质可以是任何能够被计算机系统/服务器012访问的可用介质,包括易失性和非易失性介质,可移动的和不可移动的介质。
系统存储器028可以包括易失性存储器形式的计算机系统可读介质,例如随机存取存储器(RAM)030和/或高速缓存存储器032。计算机系统/服务器012可以进一步包括其它可移动/不可移动的、易失性/非易失性计算机系统存储介质。仅作为举例,存储系统034可以用于读写 不可移动的、非易失性磁介质(图3未显示,通常称为“硬盘驱动器”)。尽管图3中未示出,可以提供用于对可移动非易失性磁盘(例如“软盘”)读写的磁盘驱动器,以及对可移动非易失性光盘(例如CD-ROM,DVD-ROM或者其它光介质)读写的光盘驱动器。在这些情况下,每个驱动器可以通过一个或者多个数据介质接口与总线018相连。存储器028可以包括至少一个程序产品,该程序产品具有一组(例如至少一个)程序模块,这些程序模块被配置以执行本发明各实施例的功能。
具有一组(至少一个)程序模块042的程序/实用工具040,可以存储在例如存储器028中,这样的程序模块042包括——但不限于——操作系统、一个或者多个应用程序、其它程序模块以及程序数据,这些示例中的每一个或某种组合中可能包括网络环境的实现。程序模块042通常执行本发明所描述的实施例中的功能和/或方法。
计算机系统/服务器012也可以与一个或多个外部设备014(例如键盘、指向设备、显示器024等)通信,在本发明中,计算机系统/服务器012与外部雷达设备进行通信,还可与一个或者多个使得用户能与该计算机系统/服务器012交互的设备通信,和/或与使得该计算机系统/服务器012能与一个或多个其它计算设备进行通信的任何设备(例如网卡,调制解调器等等)通信。这种通信可以通过输入/输出(I/O)接口022进行。并且,计算机系统/服务器012还可以通过网络适配器020与一个或者多个网络(例如局域网(LAN),广域网(WAN)和/或公共网络,例如因特网)通信。如图3所示,网络适配器020通过总线018与计算机系统/服务器012的其它模块通信。应当明白,尽管图3中未示出,可以结合计算机系统/服务器012使用其它硬件和/或软件模块,包括但不 限于:微代码、设备驱动器、冗余处理单元、外部磁盘驱动阵列、RAID系统、磁带驱动器以及数据备份存储系统等。
处理单元016通过运行存储在系统存储器028中的程序,从而执行本发明所描述的实施例中的功能和/或方法。
上述的计算机程序可以设置于计算机存储介质中,即该计算机存储介质被编码有计算机程序,该程序在被一个或多个计算机执行时,使得一个或多个计算机执行本发明上述实施例中所示的方法流程和/或装置操作。
随着时间、技术的发展,介质含义越来越广泛,计算机程序的传播途径不再受限于有形介质,还可以直接从网络下载等。可以采用一个或多个计算机可读的介质的任意组合。计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子(非穷举的列表)包括:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本文件中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。
计算机可读的信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括——但不限于——电磁信号、光信号或上述的 任意合适的组合。计算机可读的信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。
计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括——但不限于——无线、电线、光缆、RF等等,或者上述的任意合适的组合。
可以以一种或多种程序设计语言或其组合来编写用于执行本发明操作的计算机程序代码,所述程序设计语言包括面向对象的程序设计语言—诸如Java、Smalltalk、C++,还包括常规的过程式程序设计语言—诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络——包括局域网(LAN)或广域网(WAN)连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements for some of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application.

Claims (18)

  1. An information processing method, comprising:
    acquiring obstacle information collected by an ultrasonic radar and obstacle information collected by a lidar, respectively;
    fusing the obstacle information collected by the lidar and the obstacle information collected by the ultrasonic radar.
  2. The method according to claim 1, wherein
    the ultrasonic radar is mounted at the front of the body of a driverless vehicle and is configured to detect obstacle information ahead of and to the side-front of the vehicle;
    the lidar is mounted at the front of the body of the driverless vehicle and is configured to detect obstacle information ahead of and to the side-front of the vehicle.
  3. The method according to claim 1 or 2, wherein fusing the obstacle information collected by the lidar and the obstacle information collected by the ultrasonic radar comprises:
    unifying the lidar coordinates and the ultrasonic radar coordinates into a reference coordinate system;
    superimposing the unified lidar coordinates and ultrasonic radar coordinates in a rasterized detection overlap region;
    fusing the superimposed lidar coordinates and ultrasonic radar coordinates to determine an obstacle recognition result.
  4. The method according to claim 3, wherein the reference coordinate system is a geodetic coordinate system or a vehicle coordinate system.
  5. The method according to claim 3, wherein superimposing the unified lidar coordinates and ultrasonic radar coordinates in the rasterized detection overlap region comprises:
    rasterizing the obstacle recognition results within the detection overlap region, and superimposing the unified lidar coordinates and ultrasonic radar coordinates in the grid.
  6. The method according to claim 5, wherein fusing the superimposed lidar coordinates and ultrasonic radar coordinates comprises:
    determining that a grid cell in which both lidar coordinates and ultrasonic radar coordinates are present is occupied, and determining that a grid cell in which only ultrasonic radar coordinates are present is not occupied.
  7. The method according to claim 3, further comprising:
    outside the detection overlap region, performing obstacle recognition according to the lidar coordinates or the ultrasonic radar coordinates, respectively.
  8. The method according to any one of claims 1 to 7, further comprising:
    determining a vehicle decision according to the fused obstacle information.
  9. An information processing system, comprising:
    an acquisition module configured to acquire obstacle information collected by an ultrasonic radar and obstacle information collected by a lidar, respectively;
    a fusion module configured to fuse the obstacle information collected by the lidar and the obstacle information collected by the ultrasonic radar.
  10. The system according to claim 9, wherein
    the ultrasonic radar is mounted at the front of the body of a driverless vehicle and is configured to detect obstacle information ahead of and to the side-front of the vehicle;
    the lidar is mounted at the front of the body of the driverless vehicle and is configured to detect obstacle information ahead of and to the side-front of the vehicle.
  11. The system according to claim 9 or 10, wherein the fusion module comprises:
    a unification submodule configured to unify the lidar coordinates and the ultrasonic radar coordinates into a reference coordinate system;
    a superimposition submodule configured to superimpose the unified lidar coordinates and ultrasonic radar coordinates in a rasterized detection overlap region;
    a fusion submodule configured to fuse the superimposed lidar coordinates and ultrasonic radar coordinates to determine an obstacle recognition result.
  12. The system according to claim 11, wherein the reference coordinate system is a geodetic coordinate system or a vehicle coordinate system.
  13. The system according to claim 11, wherein the superimposition submodule is specifically configured to:
    rasterize the obstacle recognition results within the detection overlap region, and superimpose the unified lidar coordinates and ultrasonic radar coordinates in the grid.
  14. The system according to claim 13, wherein the fusion submodule is specifically configured to:
    determine that a grid cell in which both lidar coordinates and ultrasonic radar coordinates are present is occupied, and determine that a grid cell in which only ultrasonic radar coordinates are present is not occupied.
  15. The system according to claim 11, wherein the system is further configured to:
    outside the detection overlap region, perform obstacle recognition according to the lidar coordinates or the ultrasonic radar coordinates, respectively.
  16. The system according to any one of claims 9 to 15, wherein the system further comprises a decision module configured to determine a vehicle decision according to the fused obstacle information.
  17. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the method according to any one of claims 1 to 8.
  18. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1 to 8.
PCT/CN2019/126008 2019-01-15 2019-12-17 Information processing method, system and equipment, and computer storage medium WO2020147485A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US17/251,169 US20210263159A1 (en) 2019-01-15 2019-12-17 Information processing method, system, device and computer storage medium
EP19910614.7A EP3812793B1 (en) 2019-01-15 2019-12-17 Information processing method, system and equipment, and computer storage medium
JP2020569997A JP7291158B2 (ja) 2019-01-15 2019-12-17 Information processing method, system, device, program, and computer storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910034417.2 2019-01-15
CN201910034417.2A CN109814112A (zh) 2019-01-15 2019-01-15 Ultrasonic radar and lidar information fusion method and system

Publications (1)

Publication Number Publication Date
WO2020147485A1 true WO2020147485A1 (zh) 2020-07-23

Family

ID=66604373

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/126008 WO2020147485A1 (zh) 2019-01-15 2019-12-17 Information processing method, system and equipment, and computer storage medium

Country Status (5)

Country Link
US (1) US20210263159A1 (zh)
EP (1) EP3812793B1 (zh)
JP (1) JP7291158B2 (zh)
CN (1) CN109814112A (zh)
WO (1) WO2020147485A1 (zh)


Families Citing this family (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109814112A (zh) * 2019-01-15 2019-05-28 北京百度网讯科技有限公司 Ultrasonic radar and lidar information fusion method and system
CN112084810B (zh) * 2019-06-12 2024-03-08 杭州海康威视数字技术股份有限公司 Obstacle detection method and apparatus, electronic device, and storage medium
CN110187410B (zh) * 2019-06-18 2021-05-18 武汉中海庭数据技术有限公司 Human body detection apparatus and method for autonomous driving
CN110471066B (zh) * 2019-07-25 2022-04-05 东软睿驰汽车技术(沈阳)有限公司 Position determination method and apparatus
CN110674853A (zh) * 2019-09-09 2020-01-10 广州小鹏汽车科技有限公司 Ultrasonic data processing method and apparatus, and vehicle
CN110579765B (zh) * 2019-09-19 2021-08-03 中国第一汽车股份有限公司 Obstacle information determination method and apparatus, vehicle, and storage medium
CN111142528B (zh) * 2019-12-31 2023-10-24 天津职业技术师范大学(中国职业培训指导教师进修中心) Method, apparatus, and system for perceiving dangerous vehicle scenarios
CN111257892A (zh) * 2020-01-09 2020-06-09 武汉理工大学 Obstacle detection method for autonomous vehicle driving
CN111273268B (zh) * 2020-01-19 2022-07-19 北京百度网讯科技有限公司 Method, apparatus, and electronic device for recognizing obstacle types in autonomous driving
CN111272183A (zh) * 2020-03-16 2020-06-12 达闼科技成都有限公司 Map creation method and apparatus, electronic device, and storage medium
CN111477010A (зh) * 2020-04-08 2020-07-31 图达通智能科技(苏州)有限公司 Apparatus for holographic intersection perception and control method thereof
CN112639821B (zh) * 2020-05-11 2021-12-28 华为技术有限公司 Method and system for detecting a vehicle's drivable area, and autonomous vehicle employing the system
CN111913183A (zh) * 2020-07-27 2020-11-10 中国第一汽车股份有限公司 Vehicle lateral obstacle avoidance method, apparatus, and device, and vehicle
CN112001287B (zh) * 2020-08-17 2023-09-12 禾多科技(北京)有限公司 Method and apparatus for generating obstacle point cloud information, electronic device, and medium
CN112015178B (zh) * 2020-08-20 2022-10-21 中国第一汽车股份有限公司 Control method, apparatus, and device, and storage medium
CN112505724A (zh) * 2020-11-24 2021-03-16 上海交通大学 Negative road obstacle detection method and system
CN112596050B (zh) * 2020-12-09 2024-04-12 上海商汤临港智能科技有限公司 Vehicle and on-board sensor system, and driving data collection method
CN112731449B (zh) * 2020-12-23 2023-04-14 深圳砺剑天眼科技有限公司 Lidar obstacle recognition method and system
CN112835045A (zh) * 2021-01-05 2021-05-25 北京三快在线科技有限公司 Radar detection method and apparatus, storage medium, and electronic device
CN113111905B (zh) * 2021-02-25 2022-12-16 上海水齐机器人有限公司 Obstacle detection method fusing multi-line lidar and ultrasonic data
CN113077520A (zh) * 2021-03-19 2021-07-06 中移智行网络科技有限公司 Collision prediction method and apparatus, and edge computing server
CN115236672A (zh) * 2021-05-12 2022-10-25 上海仙途智能科技有限公司 Obstacle information generation method, apparatus, and device, and computer-readable storage medium
CN113687337A (zh) * 2021-08-02 2021-11-23 广州小鹏自动驾驶科技有限公司 Parking space recognition performance test method and apparatus, test vehicle, and storage medium
CN113670296B (zh) * 2021-08-18 2023-11-24 北京经纬恒润科技股份有限公司 Ultrasonic-based environment map generation method and apparatus
CN113777622B (zh) * 2021-08-31 2023-10-20 通号城市轨道交通技术有限公司 Rail obstacle identification method and apparatus
CN113920735B (zh) * 2021-10-21 2022-11-15 中国第一汽车股份有限公司 Information fusion method and apparatus, electronic device, and storage medium
CN114274978B (zh) * 2021-12-28 2023-12-22 深圳一清创新科技有限公司 Obstacle avoidance method for a driverless logistics vehicle
CN115523979B (zh) * 2022-09-19 2023-05-23 江西索利得测量仪器有限公司 Radar obstacle-avoidance ranging method for objects inside a tank based on 5G communication
CN116106906B (zh) * 2022-12-01 2023-11-21 山东临工工程机械有限公司 Engineering vehicle obstacle avoidance method and apparatus, electronic device, storage medium, and loader
CN116039620B (zh) * 2022-12-05 2024-04-19 北京斯年智驾科技有限公司 Safety redundancy processing system based on autonomous-driving perception
CN116279454B (zh) * 2023-01-16 2023-12-19 禾多科技(北京)有限公司 Vehicle body device control method and apparatus, electronic device, and computer-readable medium
CN116453087B (zh) * 2023-03-30 2023-10-20 无锡物联网创新中心有限公司 Data closed-loop autonomous driving obstacle detection method


Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102008038731A1 (de) * 2008-08-12 2010-02-18 Continental Automotive Gmbh Method for detecting extended static objects
WO2015192117A1 (en) * 2014-06-14 2015-12-17 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
DE102014220687A1 (de) * 2014-10-13 2016-04-14 Continental Automotive Gmbh Communication device for a vehicle and method for communicating
JP6292097B2 (ja) * 2014-10-22 2018-03-14 株式会社デンソー Lateral ranging sensor diagnostic device
US10229363B2 (en) * 2015-10-19 2019-03-12 Ford Global Technologies, Llc Probabilistic inference using weighted-integrals-and-sums-by-hashing for object tracking
US9964642B2 (en) * 2016-02-04 2018-05-08 Toyota Motor Engineering & Manufacturing North America, Inc. Vehicle with system for detecting arrival at cross road and automatically displaying side-front camera image
DE102016210534A1 (de) * 2016-06-14 2017-12-14 Bayerische Motoren Werke Aktiengesellschaft Method for classifying the surroundings of a vehicle
WO2018042498A1 (ja) * 2016-08-29 2018-03-08 マツダ株式会社 Vehicle control device
US10629079B2 (en) * 2016-12-05 2020-04-21 Ford Global Technologies, Llc Vehicle collision avoidance
CN110036308B (zh) 2016-12-06 2023-04-18 本田技研工业株式会社 Vehicle surroundings information acquisition device and vehicle
CN106767852B (zh) 2016-12-30 2019-10-11 东软集团股份有限公司 Method, apparatus, and device for generating detection target information
US10444759B2 (en) * 2017-06-14 2019-10-15 Zoox, Inc. Voxel based ground plane estimation and object segmentation
CN108831144B (zh) * 2018-07-05 2020-06-26 北京智行者科技有限公司 Collision avoidance processing method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6671582B1 (en) * 2002-08-26 2003-12-30 Brian P. Hanley Flexible agricultural automation
CN105955257A (zh) * 2016-04-29 2016-09-21 大连楼兰科技股份有限公司 Fixed-route-based automatic bus driving system and driving method thereof
CN105974920A (zh) * 2016-06-14 2016-09-28 北京汽车研究总院有限公司 Driverless driving system
CN106394545A (zh) * 2016-10-09 2017-02-15 北京汽车集团有限公司 Driving system, driverless vehicle, and vehicle remote control terminal
CN108037515A (zh) * 2017-12-27 2018-05-15 清华大学苏州汽车研究院(吴江) Lidar and ultrasonic radar information fusion system and method
CN109814112A (zh) * 2019-01-15 2019-05-28 北京百度网讯科技有限公司 Ultrasonic radar and lidar information fusion method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3812793A4

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112802092B (zh) * 2021-01-29 2024-04-09 深圳一清创新科技有限公司 Obstacle perception method and apparatus, and electronic device
CN112802092A (zh) * 2021-01-29 2021-05-14 深圳一清创新科技有限公司 Obstacle perception method and apparatus, and electronic device
CN113109821A (zh) * 2021-04-28 2021-07-13 武汉理工大学 Mapping method, apparatus, and system based on ultrasonic radar and lidar
CN113296118B (zh) * 2021-05-24 2023-11-24 江苏盛海智能科技有限公司 Driverless obstacle-detour method and terminal based on lidar and GPS
CN113296118A (zh) * 2021-05-24 2021-08-24 福建盛海智能科技有限公司 Driverless obstacle-detour method and terminal based on lidar and GPS
CN113589321A (zh) * 2021-06-16 2021-11-02 浙江理工大学 Intelligent navigation assistant for visually impaired people
CN113442915A (zh) * 2021-08-17 2021-09-28 北京理工大学深圳汽车研究院(电动车辆国家工程实验室深圳研究院) Automatic obstacle-avoiding antenna
CN113442915B (zh) * 2021-08-17 2022-07-15 北京理工大学深圳汽车研究院(电动车辆国家工程实验室深圳研究院) Automatic obstacle-avoiding antenna
CN114475603A (зh) * 2021-11-19 2022-05-13 纵目科技(上海)股份有限公司 Automatic reversing method, system, and device, and computer-readable storage medium
CN114179785B (zh) * 2021-11-22 2023-10-13 岚图汽车科技有限公司 Service-oriented fusion parking control system, electronic device, and vehicle
CN114179785A (zh) * 2021-11-22 2022-03-15 岚图汽车科技有限公司 Service-oriented fusion parking control system, electronic device, and vehicle
CN116047537A (zh) * 2022-12-05 2023-05-02 北京中科东信科技有限公司 Lidar-based road information generation method and system
CN116047537B (zh) * 2022-12-05 2023-12-26 北京中科东信科技有限公司 Lidar-based road information generation method and system

Also Published As

Publication number Publication date
CN109814112A (zh) 2019-05-28
US20210263159A1 (en) 2021-08-26
JP7291158B2 (ja) 2023-06-14
JP2021526278A (ja) 2021-09-30
EP3812793B1 (en) 2023-02-01
EP3812793A1 (en) 2021-04-28
EP3812793A4 (en) 2021-08-18

Similar Documents

Publication Publication Date Title
WO2020147485A1 (zh) Information processing method, system and equipment, and computer storage medium
JP6845894B2 (ja) Method for handling sensor failures in autonomous driving vehicles
US20200348408A1 (en) Vehicle Positioning Method and Vehicle Positioning Apparatus
EP3361278B1 (en) Autonomous vehicle localization based on walsh kernel projection technique
RU2694154C2 (ru) Формирование моделированных данных датчиков для обучения и проверки достоверности моделей обнаружения
US10262234B2 (en) Automatically collecting training data for object recognition with 3D lidar and localization
KR102614323B1 (ko) 수동 및 능동 측정을 이용한 장면의 3차원 지도 생성
US20210276589A1 (en) Method, apparatus, device and computer storage medium for vehicle control
EP3629233A1 (en) Method and apparatus for detecting obstacle, electronic device, vehicle and storage medium
US20200125845A1 (en) Systems and methods for automated image labeling for images captured from vehicles
JP2020079781A (ja) Method, apparatus, device, and medium for determining relative position and pose
CN112015178B (zh) Control method, apparatus, and device, and storage medium
JP2020021471A (ja) Patrolling of a patrol car by subsystems of an autonomous driving vehicle (ADV)
CN110606071A (zh) Parking method and apparatus, vehicle, and storage medium
JP2019182402A (ja) Detection assistance for autonomous driving vehicles
US10860868B2 (en) Lane post-processing in an autonomous driving vehicle
CN113561963B (zh) Parking method and apparatus, and vehicle
US11076022B2 (en) Systems and methods for implementing robotics frameworks
RU2769921C2 (ru) Способы и системы для автоматизированного определения присутствия объектов
US20200233418A1 (en) Method to dynamically determine vehicle effective sensor coverage for autonomous driving application
WO2020147518A1 (zh) Ultrasonic radar array, and obstacle detection method and system
RU2767949C2 (ru) Способ (варианты) и система для калибровки нескольких лидарных датчиков
JP2019079397A (ja) 車載装置、情報処理システム、及び情報処理方法
CN110333725B (zh) Method, system, device, and storage medium for avoiding pedestrians in autonomous driving
US11221405B2 (en) Extended perception based on radar communication of autonomous driving vehicles

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19910614

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019910614

Country of ref document: EP

Effective date: 20201209

ENP Entry into the national phase

Ref document number: 2020569997

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE