CN112835029A - Unmanned-vehicle-oriented multi-sensor obstacle detection data fusion method and system

Info

Publication number: CN112835029A
Authority: CN (China)
Prior art keywords: data information; sensor; information; weight; data
Legal status: Pending
Application number: CN201911079214.1A
Other languages: Chinese (zh)
Inventor: 王清品
Current Assignee: Hella Shanghai Electronics Co Ltd
Original Assignee: Hella Shanghai Electronics Co Ltd
Priority date / Filing date: 2019-11-07
Publication date: 2021-05-25

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 - Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86 - Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/865 - Combination of radar systems with lidar systems
    • G01S13/867 - Combination of radar systems with cameras
    • G01S13/87 - Combinations of radar systems, e.g. primary radar and secondary radar
    • G01S13/88 - Radar or analogous systems specially adapted for specific applications
    • G01S13/93 - Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/931 - Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles

Abstract

The invention provides a multi-sensor obstacle detection data fusion method and system for unmanned driving, comprising the steps of: detecting a target object with a plurality of sensors of an unmanned vehicle to obtain multiple sets of data information about the target object; calculating a weight for each set of data information according to the reliability of each sensor; and calculating the position information of the target object from the weights and the data information. In this technical scheme, several sensors detect the obstacle, weights are introduced to quantify sensor reliability, and the data information detected by the sensors is fused according to those weights to obtain accurate position information of the obstacle, including coordinate and contour information, so that the obstacle is detected and avoided accurately and the accuracy of the obstacle coordinates is maximized.

Description

Unmanned-vehicle-oriented multi-sensor obstacle detection data fusion method and system
Technical Field
The invention relates to the field of unmanned driving, in particular to a multi-sensor obstacle detection data fusion method and system for unmanned driving.
Background
In the field of unmanned driving, the vehicle must detect obstacles so that it can avoid them in time. To improve detection accuracy, existing unmanned detection methods often employ several sensors and fuse the data information they detect into the same coordinate system. In such methods, however, the accuracy of data fusion suffers from many interference factors: when many target objects are present and the objects detected by the different sensors are matched into the same coordinate system, large deviations arise and detections are easily mismatched, with one object's data attributed to another. At the same time, the reliability of the individual sensors may degrade under adverse weather such as rain and snow and under other external factors.
Therefore, a more accurate unmanned-oriented multi-sensor obstacle detection method is required. The present invention provides an unmanned-oriented multi-sensor obstacle detection data fusion method and system in which several sensors detect an obstacle and weights are introduced to quantify the reliability of each sensor. The data information detected by the sensors is then fused according to these weights, so that accurate position information of the obstacle, including coordinate and contour information, can be obtained even in rain and snow; the obstacle is thus detected and avoided accurately, and the accuracy of the obstacle coordinates is maximized.
Disclosure of Invention
To solve the above problems, an object of the present invention is to provide an unmanned-oriented multi-sensor obstacle detection data fusion method and system, in which the coordinate position information of an obstacle is detected by a plurality of sensors, the reliability with which each sensor detects the obstacle is quantified by an introduced weight, the data information of the sensors is fused, accurate position information of the obstacle is determined, and the surroundings of the vehicle are thereby detected accurately enough to avoid the obstacle.
Specifically, the invention provides an unmanned-oriented multi-sensor obstacle detection data fusion method comprising the following steps: a plurality of sensors of an unmanned vehicle detect a target object to obtain multiple sets of data information about the target object; the weight of each set of data information is calculated according to the reliability of each sensor; and the position information of the target object is calculated from the weights and the data information.
Preferably, in the above method, the plurality of sensors include a left radar, a right radar, a binocular camera, and a lidar; the multiple groups of data information comprise first data information, second data information, third data information and fourth data information; the data information acquired by the left radar is the first data information, the data information acquired by the right radar is the second data information, the data information acquired by the binocular camera is the third data information, and the data information acquired by the laser radar is the fourth data information.
Preferably, in the above method, the left radar and the right radar are millimeter wave radars, and the first data information and the second data information both include one or more of radar imaging data information and coordinate information.
Preferably, in the above method, the third data information includes one or more of image, distance, angle and height information of the target object.
Preferably, in the above method, the influence factors for calculating the weight of the data information include one or more of sensor characteristics, sensor operating state, and environmental factors.
Preferably, in the above method, when the operating state of the sensor is a fault, the weight of the data information corresponding to the sensor is 0.
Preferably, in the above method, each sensor performs automatic calibration according to the deviation between its corresponding data information and the position information, and adjusts its parameters.
The invention also provides a multi-sensor obstacle detection data fusion system for unmanned driving, which comprises a detection module, a weight calculation module and a fusion module; the detection module comprises a plurality of sensors, is in communication connection with the fusion module, detects and acquires a plurality of groups of data information of the target object, and transmits the data information to the fusion module; the weight calculation module is in communication connection with the fusion module, calculates the weight of each group of data information according to the reliability of each sensor, and transmits the weight to the fusion module; the fusion module is in communication connection with the detection module and the weight calculation module, and calculates the position information of the target object according to the weight and the data information.
Preferably, the detection module in the system comprises a left radar, a right radar, a binocular camera, and a lidar; the influence factors according to which the weight calculation module calculates the weight of the data information include one or more of sensor characteristics, sensor operating state, and environmental factors.
Preferably, the system further includes an automatic calibration module, in which each sensor performs automatic calibration according to the deviation between its corresponding data information and the position information and adjusts its parameters.
With the above technical scheme, compared with the prior art, the invention has the following beneficial effects:
1. the accuracy of detecting the obstacle by the unmanned automobile is improved;
2. when some sensors fail, accurate obstacle information can still be obtained by fusing the data information of the remaining sensors, so the obstacle can still be avoided accurately;
3. an automatic calibration function is provided: each sensor can automatically calibrate its parameters according to the deviation between its detected data information and the accurate obstacle information.
Drawings
FIG. 1 is a schematic flow diagram of a method for unmanned-oriented multi-sensor obstacle detection data fusion in accordance with a preferred embodiment of the present invention;
FIG. 2 is a schematic structural view of an unmanned vehicle according to a preferred embodiment of the present invention;
FIG. 3 is a flow chart of an unmanned-oriented multi-sensor obstacle detection data fusion method according to another preferred embodiment of the invention.
Reference numerals:
1-left radar
2-Right radar
3-binocular camera
4-lidar
5-ECU
6-V-CANFD
7-P-CAN
Detailed Description
The advantages of the invention are further illustrated in the following description of specific embodiments in conjunction with the accompanying drawings.
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "upon", "when", or "in response to determining", depending on the context. In the following description, suffixes such as "module", "component", or "unit" are used to denote elements only to facilitate the explanation of the present invention and have no specific meaning in themselves; thus "module" and "component" may be used interchangeably.
The unmanned vehicle referred to in the present invention need not be fully unmanned; any vehicle having an automatic driving function falls within the scope of the present invention.
Referring to fig. 1, a flow chart of a method for fusing unmanned-oriented multi-sensor obstacle detection data according to a preferred embodiment of the invention is shown.
S1: detecting a target object by a plurality of sensors of the unmanned automobile to acquire a plurality of groups of data information of the target object;
the unmanned vehicle detects the target object through the sensor, and in the embodiment, the unmanned vehicle is provided with a plurality of sensors which respectively detect the target object and acquire a plurality of groups of sensor information of the target object. The sensor in the present embodiment includes various sensors that can be used for obstacle detection, such as a camera, an ultrasonic radar, a millimeter wave radar, a laser radar, or the like. The type or content of the data information detected by the sensor is related to the type of the sensor, for example, the data information detected by the camera may include image information of a target object, and the like, and the data information detected by the radar may include the distance of the target object, and the like. For simplicity of explanation, the obstacle object is also referred to as a target object in the embodiments of the present application. In this embodiment, the plurality of sensors are at least two sensors, which may be three, four or more, the types of the sensors are not limited, and may be the same type or different types of sensors, and when different types of sensors are used, the advantages of the sensors complement each other to achieve a synergistic effect, which is more favorable for improving the detection accuracy.
S2: calculating the weight of each group of data information according to the reliability of each sensor;
Under adverse weather such as rain and snow, and under other external factors, the reliability of individual sensors drops. Therefore, after the sensors of the unmanned vehicle have detected the target object and produced their sets of data information, the current reliability of each sensor must be determined, and the weight of each data set is calculated from it. Different sensors detect the target object with different deviations, and the current environment also affects their accuracy, so reliability must be assessed sensor by sensor. The factors influencing a sensor's reliability include the characteristics of the sensor itself, environmental factors, the operating state of the sensor, and so on. For example, lidar produces good three-dimensional imaging; millimeter-wave radar images less well but is only mildly affected by weather; a camera is strongly affected by weather. Consequently, in good weather the lidar receives a higher weight than the millimeter-wave radar, while in severe weather the weights of the lidar and the camera are reduced. When a sensor's operating state is abnormal, i.e. it has failed, its reliability is 0 and its weight is 0. Because reliability depends on many factors, the weight of each sensor under each condition can be computed by an algorithm such as a deep learning algorithm, so that the weight effectively represents the reliability of a sensor operating under multi-factor interference.
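As one hedged illustration of step S2 (the patent leaves the actual reliability model, e.g. a learned one, unspecified), a sketch in which each sensor's weight is derived from an assumed base reliability, an assumed weather sensitivity, and the fault rule described above.

```python
import numpy as np

# Assumed base reliabilities and weather sensitivities per sensor; all values
# here are invented for illustration.
BASE_RELIABILITY = {"left_radar": 0.8, "right_radar": 0.8,
                    "binocular_camera": 0.7, "lidar": 0.9}
WEATHER_SENSITIVITY = {"left_radar": 0.2, "right_radar": 0.2,
                       "binocular_camera": 0.9, "lidar": 0.6}


def sensor_weight(sensor_id: str, weather_severity: float, faulted: bool) -> float:
    """Step S2: map reliability factors to a weight; a fault forces weight 0."""
    if faulted:
        return 0.0
    w = BASE_RELIABILITY[sensor_id] * (1.0 - WEATHER_SENSITIVITY[sensor_id] * weather_severity)
    return max(w, 0.0)


def normalized_weights(sensor_ids, weather_severity=0.0, faults=()):
    raw = np.array([sensor_weight(s, weather_severity, s in faults) for s in sensor_ids])
    return raw / raw.sum()  # weights of the healthy sensors sum to 1


# In heavy rain the camera's weight collapses while the radars dominate.
print(normalized_weights(["left_radar", "right_radar", "binocular_camera", "lidar"],
                         weather_severity=0.9))
```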
S3: and calculating the position information of the target object according to the weight and the data information.
Once the weights and the data information are obtained, data fusion is performed: the multiple data sets and their corresponding weights are combined, the target information detected by each sensor is brought into a unified coordinate system, an accurate three-dimensional picture in that unified frame is built, and the position information of the target object is calculated, maximizing the accuracy of the target object's coordinates. The position information of the target object comprises coordinate information and contour information; that is, both the outline of the obstacle and its location are obtained accurately, so the obstacle can be avoided reliably.
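A minimal sketch of one plausible fusion rule for step S3, a weighted average of per-sensor position estimates in the unified frame; the patent does not pin down the exact fusion operator, and the numbers below are invented.

```python
import numpy as np

def fuse_positions(positions: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Step S3: weighted combination of per-sensor position estimates.

    positions: (N, 3) array, one unified-frame (x, y, z) estimate per sensor.
    weights:   (N,) normalized reliability weights from step S2.
    """
    return weights @ positions  # convex combination of the N estimates


# Heavy rain: the camera's weight is near zero, the radars dominate.
weights = np.array([0.33, 0.33, 0.04, 0.30])
positions = np.array([[12.3, -1.8, 0.40],
                      [12.5, -1.7, 0.40],
                      [12.1, -1.9, 0.50],
                      [12.4, -1.8, 0.45]])
print(fuse_positions(positions, weights))  # fused obstacle coordinate
```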
In this technical scheme, the target object/obstacle is detected by several sensors acting together. The concept of a weight is introduced to quantify how reliably each sensor detects the target: the reliability of each sensor in each scene or environment is computed, for example by a deep learning method, and expressed as a weight. The accurate position information of the target object is then calculated by combining the different data information detected by the different sensors with each sensor's corresponding weight. In this way the scheme quantifies the reliability of different sensors in different scenes, fuses the data information detected by the multiple sensors into a unified coordinate system, obtains the accurate position of the target object, and enables the unmanned vehicle to avoid the target object reliably.
The invention is not limited to use in unmanned vehicles and may be used in any situation where it is desirable to detect obstacles.
Based on the above embodiments, in a preferred embodiment consistent with the present invention, and referring to FIG. 2, a schematic structural diagram of an unmanned vehicle according to a preferred embodiment of the present invention, the plurality of sensors include a left radar 1, a right radar 2, a binocular camera 3, and a lidar 4; the multiple sets of data information comprise first, second, third, and fourth data information; the data information acquired by the left radar 1 is the first data information, that acquired by the right radar 2 is the second data information, that acquired by the binocular camera 3 is the third data information, and that acquired by the lidar 4 is the fourth data information.
The left radar 1 is mounted at the front left of the vehicle and detects obstacles in the region to the left and ahead; the right radar 2 is mounted at the front right and detects obstacles in the region to the right and ahead. The type of the left and right radars is not restricted, provided they can detect obstacles. The binocular camera 3 is mounted in the middle of the vehicle and can observe the target object over 360 degrees. The third data information it produces includes not only image information but also, computed from that image information, the distance from the obstacle to the camera, the height of the obstacle, and the angle between the obstacle-to-camera line and the camera's central axis; from these, the coordinates and contour of the obstacle can be derived. The lidar 4 acquires radar imaging data information of the surroundings of the vehicle as the fourth data information. Whether they contain imaging information, distance information, or other quantities, the first through fourth data information can all be converted into coordinate information by combining them with the vehicle-body calibration parameters. For example, taking the exact center of the vehicle as the origin of a three-dimensional coordinate system, any point of the vehicle body can be represented by a three-dimensional coordinate, and each data set can be converted into the coordinate information of the target object, i.e. the obstacle, in that system, including the coordinates of any point on the obstacle.
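For illustration, a standard rigid-body transform that maps a sensor-frame detection into the vehicle-centered coordinate system described above; the extrinsic rotation and translation stand in for the vehicle-body calibration parameters and are assumed values.

```python
import numpy as np

def to_vehicle_frame(p_sensor: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Convert a point from a sensor's frame into the vehicle-centered frame.

    R: 3x3 rotation and t: (3,) translation of the sensor relative to the
    vehicle center; these come from the vehicle-body calibration parameters.
    """
    return R @ p_sensor + t


# Hypothetical extrinsics for the left radar: mounted 2.0 m ahead of and
# 0.8 m to the left of the vehicle center, yawed 15 degrees outward.
yaw = np.deg2rad(15.0)
R_left = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                   [np.sin(yaw),  np.cos(yaw), 0.0],
                   [0.0,          0.0,         1.0]])
t_left = np.array([2.0, 0.8, 0.5])
print(to_vehicle_frame(np.array([10.0, 0.0, 0.0]), R_left, t_left))
```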
The reliability of the left radar 1, right radar 2, binocular camera 3, and lidar 4 in the current environment is quantified by a deep learning algorithm, and the weights of the first, second, third, and fourth data information are calculated accordingly. For example, in bad weather the camera's weight for target detection decreases with the weather conditions, reducing the interference its data would otherwise introduce into target detection; in a rainstorm the weight of the binocular camera 3 approaches zero, effectively eliminating the influence of a functionally failed camera on the detection result. Likewise, when the binocular camera 3, left radar 1, right radar 2, or lidar 4 is occluded, or when the binocular camera 3 is blinded by direct light, the sensor's reliability decreases and its weight changes accordingly.
After the weights are obtained, the first through fourth data information detected by the sensors are fused and integrated into a unified coordinate system to obtain the accurate position of the target object, so that the vehicle can reliably avoid the target object, i.e. the obstacle.
The vehicle further comprises an electronic control unit ECU 5 that performs the data fusion. A vehicle-wide data-domain CAN FD bus with switchable bit rate, V-CANFD 6, carries the data communication between the sensors (left radar 1, right radar 2, binocular camera 3, lidar 4) and the ECU 5, and a private CAN bus, P-CAN 7, carries the data communication between the left radar 1 and the right radar 2.
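As a hedged sketch of the bus side (the patent does not disclose a bus matrix), reading one object frame from the V-CANFD bus with the python-can library; the channel name, arbitration IDs, and payload layout are all assumptions.

```python
import struct

import can  # python-can; assumes a SocketCAN interface named "can0"

# Hypothetical arbitration IDs for the per-sensor object frames on V-CANFD.
OBJECT_FRAME_IDS = {0x101: "left_radar", 0x102: "right_radar",
                    0x103: "binocular_camera", 0x104: "lidar"}

with can.interface.Bus(channel="can0", interface="socketcan", fd=True) as bus:
    msg = bus.recv(timeout=1.0)
    if msg is not None and msg.arbitration_id in OBJECT_FRAME_IDS:
        # Hypothetical payload layout: three little-endian float32 x, y, z in metres.
        x, y, z = struct.unpack_from("<fff", msg.data)
        print(OBJECT_FRAME_IDS[msg.arbitration_id], (x, y, z))
```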
The method can also be applied in other directions. For example, a left radar and a right radar may be mounted at the rear of the vehicle and used together with the binocular camera and lidar in the middle of the vehicle to detect a target object behind it, such as a car approaching rapidly from behind. After such a target is detected, the vehicle can accelerate or change lanes to avoid a rear-end collision, in which case it must also detect whether obstacles exist ahead of it or to its front left or front right.
Based on the above embodiment, in a preferred embodiment consistent with the present invention, the left radar 1 and the right radar 2 are millimeter-wave radars, and the first and second data information each include one or more of radar imaging data information and coordinate information. Further, when the first and second data information include both radar imaging data information and coordinate information, the coordinates and contour of the target object can be calculated from them; that is, they can be converted into contour data such as the length and width of the target object and the position of any point on the target object relative to the vehicle.
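A small sketch of how contour data such as length and width might be derived from imaging points already expressed in the vehicle frame; an axis-aligned bounding box is used here for brevity, whereas a real system might fit an oriented box. The points are invented.

```python
import numpy as np

def contour_from_points(points: np.ndarray) -> dict:
    """Derive simple contour data (length, width, height) from imaging points.

    points: (N, 3) vehicle-frame points on the target, e.g. from radar imaging.
    """
    lo, hi = points.min(axis=0), points.max(axis=0)
    return {"length": hi[0] - lo[0], "width": hi[1] - lo[1],
            "height": hi[2] - lo[2], "center": (lo + hi) / 2.0}


pts = np.array([[12.0, -2.8, 0.2], [12.4, -1.0, 0.3],
                [16.2, -2.6, 1.4], [16.1, -1.2, 0.9]])
print(contour_from_points(pts))  # roughly a car-sized box ahead and to the left
```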
Based on the above embodiments, in a preferred embodiment consistent with the present invention, the third data information includes one or more of image, distance, angle, and height information of the target object. The information the binocular camera 3 detects about the target object is the third data information. Because the binocular camera 3 can measure range, the third data information includes the distance from the target object to the camera as well as imaging information. From the distance and imaging information, the angle between the target-to-camera line and the camera's central axis can be calculated, as can contour information such as the length and width of the target object and the position of any point on the target object relative to the camera. Combined with the vehicle-body calibration parameter values and the camera's mounting position on the vehicle, these can be converted into the position of any point on the target object relative to the vehicle and its coordinates in the unified coordinate system.
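For illustration, the standard rectified-stereo triangulation such a binocular camera could use to obtain distance, lateral/height offsets, and the angle off its central axis; the intrinsics and pixel values below are assumed, not taken from the patent.

```python
import numpy as np

def stereo_point(u_l: float, u_r: float, v: float,
                 f_px: float, baseline_m: float, cx: float, cy: float) -> np.ndarray:
    """Triangulate one matched pixel pair from a rectified stereo camera.

    Standard pinhole/stereo relations: depth Z = f * B / disparity, then
    X = (u - cx) * Z / f and Y = (v - cy) * Z / f in the camera frame.
    """
    disparity = u_l - u_r                     # pixels; must be > 0
    z = f_px * baseline_m / disparity         # forward distance, metres
    x = (u_l - cx) * z / f_px                 # lateral offset
    y = (v - cy) * z / f_px                   # vertical offset (height cue)
    return np.array([x, y, z])


# Assumed intrinsics: 1000 px focal length, 0.12 m baseline, principal point (640, 360).
p = stereo_point(u_l=700.0, u_r=690.0, v=300.0,
                 f_px=1000.0, baseline_m=0.12, cx=640.0, cy=360.0)
angle = np.degrees(np.arctan2(p[0], p[2]))    # angle off the camera's central axis
print(p, angle)
```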
Based on the above embodiments, in a preferred embodiment consistent with the present invention, the influence factors for calculating the weight of the data information include one or more of sensor characteristics, sensor operating state, and environmental factors. The weight of a data set depends on the reliability of the corresponding sensor, which in turn relates to various factors such as the sensor's own characteristics, its operating scene, and its operating state. The influence of these various factors on sensor reliability is quantified by deep learning, the reliability of each sensor is thereby quantified, and the weight of each corresponding data set is calculated.
Based on the above embodiments, in a preferred embodiment consistent with the present invention, when a sensor's operating state is a fault, the weight of its data information is 0. Sensor reliability is affected by various factors such as sensor characteristics, operating state, and environmental factors, but a faulted sensor cannot detect the target object accurately at all, so the weight of its data is 0 regardless of the other influencing factors. Therefore, when the weights are calculated, the operating state of each sensor is checked first: if a sensor has failed, the weight of its data information is set directly to 0; if it is operating well, the various influence factors are considered and the reliability of the acquired data information is quantified by a deep learning algorithm to obtain the weight. Data fusion is then performed using the different weight values of the data information from each sensor, and the target object is marked in the three-dimensional coordinate system.
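A short sketch of how the fault rule of this paragraph might be combined with the fusion step: faulted sensors are forced to weight 0 and the remaining weights are renormalized before fusing. This is an illustration under assumed numbers, not the patent's exact procedure.

```python
import numpy as np

def fuse_with_fault_masking(estimates, raw_weights, faulted):
    """Force weight 0 for faulted sensors, renormalize, then fuse.

    estimates:   (N, 3) per-sensor position estimates in the unified frame.
    raw_weights: (N,) reliabilities from the weight model.
    faulted:     (N,) booleans from the sensors' operating-state check.
    """
    w = np.where(faulted, 0.0, raw_weights)
    if w.sum() == 0.0:
        raise RuntimeError("no healthy sensor available")
    w = w / w.sum()
    return w @ estimates


est = np.array([[12.3, -1.8, 0.40], [12.5, -1.7, 0.40],
                [12.1, -1.9, 0.50], [12.4, -1.8, 0.45]])
# Suppose the binocular camera reports a fault: its estimate is ignored.
print(fuse_with_fault_masking(est, np.array([0.8, 0.8, 0.7, 0.9]),
                              np.array([False, False, True, False])))
```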
FIG. 3 is a flow chart of an unmanned-oriented multi-sensor obstacle detection data fusion method according to another preferred embodiment of the invention. The method further comprises a step in which each sensor performs automatic calibration according to the deviation between its own data information and the fused position information, adjusting its parameters. The accurate position information of the target object obtained by data fusion may deviate, more or less, from the data information detected by a single sensor; this embodiment therefore adds an automatic calibration step in which the sensor calibrates itself against that deviation and adjusts its parameters, improving the sensor's operating accuracy and hence the accuracy with which it detects the target object.
This step can be performed while the sensors are in normal operation or when they are fitted to a new vehicle model, preferably the latter. For example, when sensors such as lidar and millimeter-wave radar are installed on a new vehicle model, the mounting position and the material and shape of the cover behind which they sit all affect radar performance, so the radar performance must be recalibrated. In the unified coordinate system, while a millimeter-wave radar is being calibrated its weight is set low (an uncalibrated lidar likewise receives a low weight); the radar is then calibrated automatically against the data fusion result, i.e. against the deviation between its own data information and the fused position information, and its parameter values are adjusted. During this process the camera can record and annotate video of the moments when the sensors detect the target, allowing an engineer to assess the correctness of the final automatic calibration, so that the radar ultimately settles on parameter values with better performance.
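As a hedged illustration of the automatic calibration idea (the patent does not specify the update rule), a sketch that estimates a constant measurement bias for one sensor from its deviations against the fused positions; subtracting the bias from future measurements is the parameter adjustment.

```python
from typing import Optional

import numpy as np


def estimate_bias(sensor_estimates: np.ndarray, fused_positions: np.ndarray,
                  lr: float = 0.1, bias: Optional[np.ndarray] = None) -> np.ndarray:
    """Running estimate of one sensor's constant offset from the fused positions.

    Each call nudges the bias toward the mean residual by a step of size lr.
    """
    if bias is None:
        bias = np.zeros(3)
    residuals = sensor_estimates - fused_positions   # (N, 3) deviations
    return bias + lr * residuals.mean(axis=0)


fused = np.array([[12.4, -1.8, 0.43], [20.1, 3.2, 0.50]])
meas = np.array([[12.7, -1.8, 0.43], [20.4, 3.1, 0.52]])   # this radar reads ~0.3 m long
bias = estimate_bias(meas, fused, lr=1.0)                  # one-shot: mean residual
print(bias)          # ~ [0.30, -0.05, 0.01]
print(meas - bias)   # calibrated measurements move onto the fused positions
```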
The invention also provides a multi-sensor obstacle detection data fusion system for unmanned driving, which comprises a detection module, a weight calculation module and a fusion module; the detection module comprises a plurality of sensors, is in communication connection with the fusion module, detects and acquires a plurality of groups of data information of the target object, and transmits the data information to the fusion module; the weight calculation module is in communication connection with the fusion module, calculates the weight of each group of data information according to the reliability of each sensor, and transmits the weight to the fusion module; and the fusion module is used for calculating the position information of the target object according to the weight and the data information.
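For illustration, a skeleton of the three modules as they might look in code; the sensor protocol assumed here (measure(), faulted, reliability()) is hypothetical and not part of the disclosure.

```python
from typing import Sequence

import numpy as np


class DetectionModule:
    """Wraps the physical sensors; returns one data set per sensor."""
    def __init__(self, sensors: Sequence):
        self.sensors = sensors

    def detect(self) -> list:
        return [s.measure() for s in self.sensors]  # each sensor's data set


class WeightModule:
    """Computes one reliability weight per sensor (stand-in for the real model)."""
    def weights(self, sensors: Sequence, environment: dict) -> np.ndarray:
        raw = np.array([0.0 if s.faulted else s.reliability(environment)
                        for s in sensors])
        return raw / raw.sum()


class FusionModule:
    """Combines the per-sensor estimates and weights into the target's position."""
    def fuse(self, estimates: np.ndarray, weights: np.ndarray) -> np.ndarray:
        return weights @ estimates
```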
Based on the above embodiment, in a preferred embodiment consistent with the present invention, the detection module includes a left radar, a right radar, a binocular camera, and a lidar; the influence factors according to which the weight calculation module calculates the weight of the data information include one or more of sensor characteristics, sensor operating state, and environmental factors.
With this technical scheme, the combination of millimeter-wave radar imaging, lidar imaging, and binocular camera ranging makes it possible to build an accurate unified three-dimensional coordinate system and to reduce deviations in the data fusion process. Under different environments and influencing factors, a convolutional neural network considers the several influence factors acting on the millimeter-wave radar, the lidar, and the binocular camera, and outputs a weight value that represents each sensor's current operating state and target detection reliability. With the millimeter-wave radar imaging, the lidar imaging, and the binocular camera each carrying its own such weight, unified target object information is synthesized and displayed in the three-dimensional coordinate system. This improves the accuracy of target object or obstacle detection, lets the unmanned vehicle detect and avoid obstacles precisely, and safeguards the vehicle.
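As a stand-in for the network mentioned above (whose architecture the patent does not specify), a tiny PyTorch sketch that maps a per-sensor feature vector describing operating state and environment to a reliability weight in (0, 1); the patent mentions a convolutional network, but a small MLP keeps the sketch short, and the feature count is assumed.

```python
import torch
import torch.nn as nn

# Hypothetical per-sensor input: 8 assumed features (weather severity,
# occlusion, glare, operating status, ...); output: reliability in (0, 1).
weight_net = nn.Sequential(
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Linear(16, 1),
    nn.Sigmoid(),       # squashes the output into a (0, 1) reliability weight
)

features = torch.randn(4, 8)           # one feature row per sensor (4 sensors)
raw = weight_net(features).squeeze(1)  # per-sensor reliability in (0, 1)
weights = raw / raw.sum()              # normalized fusion weights
print(weights)
```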
Based on the above embodiment, in a preferred embodiment consistent with the present invention, the system further includes an automatic calibration module, in which each sensor performs automatic calibration according to the deviation between its corresponding data information and the position information and adjusts its parameters.
With this technical scheme, the data parameters of the millimeter-wave radar and the lidar are automatically calibrated in the unified coordinate system using the data fusion result, and the radar parameters are adjusted according to actual road detection results. Meanwhile, the camera annotates video of the moments when the sensors detect the target, so that an engineer can evaluate the accuracy of the final automatic calibration and help the radar settle on parameter values with better performance, improving the accuracy of target object detection.
In the technical scheme of the invention, the coordinate position information of obstacles is detected by several sensors, the reliability with which the sensors detect the obstacles is quantified by introduced weights, the data information of the sensors is fused, the accurate position information of the obstacles is determined, and the surroundings of the vehicle are detected accurately enough to avoid them. Furthermore, combined with automatic calibration, the accuracy of the sensors themselves is effectively improved, raising the accuracy of target object detection and further ensuring the safety of the unmanned system.
It should be noted that the embodiments of the present invention have been described in terms of preferred embodiments, and not by way of limitation, and that those skilled in the art can make modifications and variations of the embodiments described above without departing from the spirit of the invention.

Claims (10)

1. An unmanned-driving-oriented multi-sensor obstacle detection data fusion method, characterized by comprising the following steps: detecting a target object with a plurality of sensors of an unmanned vehicle to obtain a plurality of groups of data information of the target object;
calculating the weight of each group of data information according to the reliability of each sensor;
and calculating the position information of the target object according to the weight and the data information.
2. The multi-sensor obstacle detection data fusion method according to claim 1, wherein the plurality of sensors includes a left radar, a right radar, a binocular camera, and a lidar;
the multiple groups of data information comprise first data information, second data information, third data information and fourth data information;
the data information acquired by the left radar is the first data information, the data information acquired by the right radar is the second data information, the data information acquired by the binocular camera is the third data information, and the data information acquired by the laser radar is the fourth data information.
3. The multi-sensor obstacle detection data fusion method of claim 2, wherein the left radar and the right radar are millimeter wave radars, and the first data information and the second data information each include one or more of radar imaging data information and coordinate information.
4. The multi-sensor obstacle detection data fusion method of claim 2, wherein the third data information includes one or more of image, distance, angle, and height information of the target object.
5. The multi-sensor obstacle detection data fusion method of claim 1, wherein the influence factors for calculating the weight of the data information include one or more of sensor characteristics, sensor operating state, and environmental factors.
6. The multi-sensor obstacle detection data fusion method according to claim 5, wherein when the operation state of the sensor is a failure, the weight of the data information corresponding to the sensor is 0.
7. The multi-sensor obstacle detection data fusion method of claim 1, further comprising automatically calibrating the sensors to adjust parameters of the sensors based on deviations of their corresponding data information from the position information.
8. An unmanned-driving-oriented multi-sensor obstacle detection data fusion system, characterized by comprising a detection module, a weight calculation module, and a fusion module;
the detection module comprises a plurality of sensors, is in communication connection with the fusion module, detects and acquires a plurality of groups of data information of the target object, and transmits the data information to the fusion module;
the weight calculation module is in communication connection with the fusion module, calculates the weight of each group of data information according to the reliability of each sensor, and transmits the weight to the fusion module;
the fusion module is in communication connection with the detection module and the weight calculation module, and calculates the position information of the target object according to the weight and the data information.
9. The multi-sensor obstacle detection data fusion system of claim 8, wherein the detection modules include a left radar, a right radar, a binocular camera, and a lidar;
the influence factors according to which the weight calculation module calculates the weight of the data information include one or more of sensor characteristics, sensor operating state, and environmental factors.
10. The multi-sensor obstacle detection data fusion system of claim 8, further comprising an automatic calibration module, wherein the sensors are automatically calibrated to adjust parameters of the sensors based on deviations of their corresponding data information from the position information.
CN201911079214.1A 2019-11-07 2019-11-07 Unmanned-vehicle-oriented multi-sensor obstacle detection data fusion method and system Pending CN112835029A (en)

Priority Applications (1)

Application Number: CN201911079214.1A; Priority / Filing Date: 2019-11-07
Title: Unmanned-vehicle-oriented multi-sensor obstacle detection data fusion method and system

Publications (1)

Publication Number: CN112835029A; Publication Date: 2021-05-25

Family

ID: 75921347

Country Status (1)

CN: CN112835029A

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113920782A (en) * 2021-10-08 2022-01-11 安徽江淮汽车集团股份有限公司 Multi-sensor fusion method applied to parking space detection
CN115469277A (en) * 2022-03-16 2022-12-13 北京罗克维尔斯科技有限公司 Vehicle radar detection information display method and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107042824A (en) * 2015-10-23 2017-08-15 哈曼国际工业有限公司 System and method for detecting the accident in vehicle
CN107202983A (en) * 2017-05-19 2017-09-26 深圳佑驾创新科技有限公司 The self-actuating brake method and system merged based on image recognition and millimetre-wave radar
CN107703935A (en) * 2017-09-12 2018-02-16 安徽胜佳和电子科技有限公司 Multiple data weighting fusions carry out method, storage device and the mobile terminal of avoidance
CN108008412A (en) * 2017-10-18 2018-05-08 南京安捷智造科技有限公司 A kind of more curvature fusion protecting against collision of driving vehicles systems
WO2018196001A1 (en) * 2017-04-28 2018-11-01 SZ DJI Technology Co., Ltd. Sensing assembly for autonomous driving
US20190057263A1 (en) * 2017-08-21 2019-02-21 2236008 Ontario Inc. Automated driving system that merges heterogenous sensor data
US20190173196A1 (en) * 2017-12-02 2019-06-06 Metawave Corporation Method and apparatus for object detection with integrated environmental information



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination