CN109829386B - Intelligent vehicle passable area detection method based on multi-source information fusion - Google Patents

Intelligent vehicle passable area detection method based on multi-source information fusion

Info

Publication number
CN109829386B
CN109829386B
Authority
CN
China
Prior art keywords
target
frame
obstacle
information
wave radar
Prior art date
Legal status
Active
Application number
CN201910007212.5A
Other languages
Chinese (zh)
Other versions
CN109829386A
Inventor
李克强
熊辉
余大蒙
王建强
王礼坤
许庆
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Priority date
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201910007212.5A
Publication of CN109829386A
Application granted
Publication of CN109829386B

Abstract

The invention discloses an intelligent vehicle passable area detection method based on multi-source information fusion, which comprises the following steps: S100, acquiring obstacle target information around a vehicle detected by vehicle-mounted sensors, and outputting a static obstacle target library; S200, receiving the obstacle target information around the vehicle, performing space-time synchronization on the obstacle target information detected by the vehicle-mounted sensors, performing single-frame target fusion on all the detected obstacle information around the vehicle, performing multi-target tracking between consecutive frames by using motion prediction and multi-frame target association, and outputting a dynamic obstacle target library; and S300, receiving the static obstacle target library output in S100 and the dynamic obstacle target library output in S200, updating the dynamic obstacle target library according to the information of the static obstacle target library to form real-time obstacle target information, and generating a passable area. The method can accurately acquire the position, scale, category and motion information of the obstacles around the vehicle and the binary rasterized map during driving, track the motion trajectories of multiple targets, and form a real-time updated intelligent vehicle passable area comprising the binary rasterized map and the dynamic obstacle information.

Description

Intelligent vehicle passable area detection method based on multi-source information fusion
Technical Field
The invention relates to the technical field of automatic driving, in particular to an intelligent vehicle passable area detection method based on multi-source information fusion.
Background
The intelligent vehicle realizes automatic driving in various traffic scenes through technical means such as environment perception, map navigation, trajectory planning and decision control. The popularization and application of intelligent vehicles play an active role in relieving traffic congestion, improving traffic safety, reducing fuel consumption and reducing environmental pollution. Governments, related enterprises, scientific research institutions and universities have all invested a large amount of manpower and material resources in academic theoretical research and engineering practice on automatic-driving-related technologies, in the hope that automatic driving vehicles will enter people's daily lives early and let people share the benefits brought by the automatic driving technology. The intelligent vehicle realizes perception of the surrounding environment and positioning and navigation of the vehicle itself through vehicle-mounted sensors such as a camera, a millimeter wave radar, a laser radar, a Global Positioning System (GPS) and an Inertial Measurement Unit (IMU); real-time trajectory planning is carried out based on map information and obstacle information; decision information such as the longitudinal and lateral speed and the steering wheel angle is issued to the vehicle bottom-layer control unit through the Controller Area Network (CAN) bus; and specific operations such as acceleration and deceleration, braking and steering are realized.
The detection of the passable area of the intelligent vehicle comprises road boundary detection and obstacle detection; both are based on the results obtained after environment sensing and fusion by the vehicle-mounted sensors, and both are the basis for the trajectory planning of the intelligent vehicle. The detection of the passable area is an important component of the automatic driving environment perception technology and can also provide a basis for the real-time trajectory planning of the intelligent vehicle. Accurate detection of the passable area helps improve the intelligence level of the trajectory planning subsystem and has important guiding significance for the subsequent decision control of the intelligent vehicle, thereby improving the overall intelligence level of the intelligent vehicle, realizing orderly passage among various traffic users, and guaranteeing a safe and orderly traffic environment. Therefore, research on the detection method of the passable area of the intelligent vehicle can provide real-time trajectory planning information for the intelligent vehicle, so that the intelligent vehicle can run safely and orderly in the passable area, collision accidents are prevented, and the traffic safety of various traffic participants is guaranteed.
At present, there is considerable research on detection methods for the passable area of the intelligent vehicle. The sensor types adopted include monocular cameras, binocular cameras and laser radar; the road types involved include structured roads with clear or fuzzy lane lines and unstructured roads without lane lines; and the detected targets are lane boundaries, or road surfaces and obstacles. On structured roads, camera-based detection methods adopt traditional nonparametric learning, machine learning or deep learning to simultaneously perform feature extraction and classification of targets such as lane lines, pedestrians and vehicles and obtain the position and category information of road boundaries and obstacle targets, but lack the dynamic motion information of targets such as pedestrians and vehicles; methods based on the laser radar sensor first use the reflection intensity of the lane lines and the height information of the road surface to segment the road boundary, then screen out the obstacles through a clustering method, and finally fuse the road boundary and the obstacles to output the passable area. On unstructured roads, particularly roads lacking lane marks, road pixel segmentation methods based on deep learning are more common, but the images need to be labeled at the pixel level in advance.
In general, the current detection of passable areas for intelligent vehicles has several problems: 1) the methods are not universally applicable to both structured and unstructured roads; 2) the motion characteristics of different types of obstacle targets are not fully considered, target motion is simply predicted with a linear or nonlinear model, and obstacle information cannot be effectively updated in real time; 3) multi-target orientation and trajectory tracking functions are lacking, as is the tracking and management of multi-target trajectories in real road scenes; 4) the vehicle-mounted sensors commonly installed on intelligent vehicles are not fully utilized, such as the use of the laser radar depth map in target detection and the use of the millimeter wave radar return width and speed information; 5) on unstructured roads, road surface pixel segmentation methods based on deep learning need pixel-level labeling of images, and the labor cost is high.
It is therefore desirable to have a solution that overcomes or at least alleviates at least one of the above-mentioned drawbacks of the prior art.
Disclosure of Invention
It is an object of the present invention to provide an intelligent vehicle passable area detection method based on multi-source information fusion to overcome or at least alleviate at least one of the above-mentioned drawbacks of the prior art.
In order to achieve the above object, the present invention provides a method for detecting a passable area of an intelligent vehicle based on multi-source information fusion, wherein the method for detecting a passable area of an intelligent vehicle based on multi-source information fusion comprises:
S100, acquiring obstacle target information around a vehicle detected by a vehicle-mounted sensor, and outputting a static obstacle target library;
S200, receiving the obstacle target information around the vehicle collected in S100, performing space-time synchronization on the obstacle target information detected by the vehicle-mounted sensor, performing single-frame target fusion on all the detected obstacle information around the vehicle, performing continuous inter-frame multi-target tracking by utilizing motion prediction and multi-frame target association, and outputting a dynamic obstacle target library; and
S300, receiving the static obstacle target library output in S100 and the dynamic obstacle target library output in S200, updating the dynamic obstacle target library according to the information of the static obstacle target library to form real-time obstacle target information, and generating a passable area.
Further, S100 specifically includes:
collecting and analyzing a three-dimensional point cloud image output by a laser radar to generate a two-dimensional overlook point cloud image;
obtaining an obstacle target detection frame and a binary rasterized map comprising road boundary point information according to the two-dimensional overlook point cloud picture; and
and updating the binary rasterized map by combining with obstacle target information generated by a YOLOv3_LiDAR target detection model.
Further, the method for obtaining the obstacle target detection frame specifically includes:
S1141a, performing parameter learning on the YOLOv3 model according to the point cloud target frame truth database DB1 to generate a YOLOv3_LiDAR target detection model;
S1141b, using the YOLOv3_LiDAR target detection model obtained in S1141a to perform obstacle target detection on the two-dimensional overhead point cloud image, and outputting obstacle target information, where the obstacle target information includes the position and the large category of the obstacle target.
Further, the method for acquiring the binary rasterized map specifically comprises the following steps:
S1142a, performing binarized obstacle target detection in the two-dimensional overhead point cloud image by using a Euclidean point clustering method, and outputting an initial binarized rasterized map formed by the areas where the obstacle targets are located;
S1142b, finding out possible road boundary points according to the height information and the reflection intensity of the three-dimensional point cloud scanning points obtained through analysis, fitting the local road boundary with a quadratic curve, and generating a binary rasterized map comprising road boundary point information.
Further, S100 specifically includes:
S122, analyzing the CAN-format obstacle target information received in S121 by using the dedicated DBC file to obtain M pieces of millimeter wave radar target data;
S123, acquiring an initialized millimeter wave radar target frame according to the following formulas (1) to (3) by using the M pieces of millimeter wave radar target data output in S122, wherein (x_j, y_j) is the position of the central point of the millimeter wave radar target frame corresponding to any obstacle target, v_j is the speed of the obstacle target, and pi is a constant:
x_j = range_j * sin(angle_rad_j * pi / 180.0)    (1)
y_j = range_j * cos(angle_rad_j * pi / 180.0)    (2)
v_j = range_rate_j    (3)
if the millimeter wave radar does not return the width information width_j, width_j is assumed to be 1 meter; the length of the millimeter wave radar target is taken as l_j = width_j, i.e. l_j = w_j, completing the initialization of the millimeter wave radar target frame;
S124, collecting coordinates of K points in the shared area of the millimeter wave radar coordinate system and the image coordinate system, and obtaining the millimeter wave radar-camera calibration parameters;
S125, converting the M pieces of millimeter wave radar target data output in S122 from the millimeter wave radar coordinate system to the image coordinate system according to the millimeter wave radar-camera calibration parameters obtained in S124, and forming M image target frames.
Further, S125 specifically includes:
S125a, calculating the image target frames marked in the image target frame truth database DB2 by using formula (7), for learning the position mapping relationship {λ_x, λ_y, λ_w, λ_h, b_x, b_y} between the millimeter wave radar target output frame and the image target frame converted from the millimeter wave radar coordinate system to the image coordinate system;
[Formula (7) is available only as an image in the original publication.]
In formula (7), {λ_x, λ_y, λ_w, λ_h, b_x, b_y} are learning parameters; the coordinate point of the real obstacle target in the image corresponding to the obstacle target detected by the millimeter wave radar is expressed as (x_gt, y_gt, w_gt, h_gt), where x_gt is the abscissa of the center of the millimeter wave radar target frame in the millimeter wave radar coordinate system, y_gt is the ordinate of the center of the millimeter wave radar target frame in the millimeter wave radar coordinate system, w_gt is the width of the millimeter wave radar target frame in the millimeter wave radar coordinate system, and h_gt is the height of the millimeter wave radar target frame in the millimeter wave radar coordinate system; the coordinate point of the obstacle target detected by the millimeter wave radar converted from the millimeter wave radar coordinate system into the image coordinate system is expressed as (x_cam, y_cam, w_cam, h_cam), where x_cam is the abscissa of the center of the image target frame in the image coordinate system, y_cam is the ordinate of the center of the image target frame in the image coordinate system, w_cam is the width of the image target frame in the image coordinate system, and h_cam is the height of the image target frame in the image coordinate system;
S125b, by drawing on the RPN in the Faster R-CNN target detection model, using the length and width distribution of the image target frames marked in the image target frame truth database DB2, designing the lengths and widths of the target candidate frames adapted to the image target frame truth database DB2 with a k-means clustering algorithm, performing extension learning of the millimeter wave radar target output frame, and outputting as many and as accurate millimeter wave radar target extension frames containing the real obstacle targets as possible.
Further, S100 specifically includes:
S131, collecting image data returned by the camera;
S132, analyzing the image data received in S131 to obtain a PNG image with BGR three channels;
S133, acquiring laser radar-camera calibration parameters;
S134, converting the binary rasterized map comprising road boundary point information from the laser radar coordinate system to the public area in the image coordinate system according to the laser radar-camera calibration parameters obtained in S133, and generating a region of interest;
S135, performing parameter learning on the YOLOv3 model according to the image target frame truth database DB2 to generate a YOLOv3_Camera target detection model for performing multi-target detection on the image; and
S136, performing multi-target detection on the image plane indicated by the region of interest generated in S134 by using the YOLOv3_Camera target detection model obtained in S135, and outputting image data, wherein the information of each obstacle target in the image data is recorded as {x, y, w, h, c, o}, (x, y) is the coordinate point of the upper left corner of the image target frame in the image coordinate system, w is the width of the image target frame, h is the height of the image target frame, c is the large category and the small category of the obstacle target, and o is the orientation information of the obstacle target.
Further, the "single-frame target fusion of the obstacle information around all the detected vehicles" in S200 includes:
acquiring camera-vehicle calibration parameters, and converting a target frame in an image coordinate system into a target frame of a vehicle coordinate system;
according to the millimeter wave radar-camera calibration parameters and the laser radar-camera calibration parameters, after space synchronization is carried out on obstacle target information around the vehicle detected by a vehicle-mounted sensor of a single-frame image under the same timestamp, the obstacle target information is sequentially converted into an image coordinate system and a vehicle coordinate system; and
and matching corresponding millimeter wave radar and laser radar information based on a global nearest neighbor algorithm by taking a camera detection result as a reference to obtain the same obstacle target information, wherein the information comprises the position, the distance, the category and the speed of the obstacle target.
Further, the "performing multi-target tracking between consecutive frames using motion prediction and multi-frame target association" in S200 includes:
aiming at Car, Pedestrian and Rider among the obstacle targets in S221, respectively designing three independent long short-term memory networks for motion prediction, involving the position information (x, y) and size information (w, h) of the target;
training with the long short-term memory network designed in S222 according to the category c ∈ {Car, Pedestrian, Rider}, where the first N frames are input data and the (N+1)th frame is the prediction/output data, forming an LSTM motion prediction model;
among the three determined types of obstacle targets, matching the data (x, y, w, h)_{i-N+1~i+1} of the same obstacle target in consecutive N+1 frames in the image target frame truth database DB2 according to the different tracking IDs, where (x, y) is the position information of the predicted target frame and (w, h) is the size information of the predicted target frame;
testing the motion data (x, y, w, h)_{i-N+1~i} of the same obstacle target in N consecutive frames with the trained LSTM model, and predicting the motion information (x, y, w, h)_{i+1} of the obstacle target in the next frame;
taking the position and scale information of the obstacle target and the speed, category, distance and orientation attributes corresponding to the fused obstacle target as association attributes, performing association matching of multiple targets between consecutive frames with the Hungarian algorithm, giving the same tracking ID number to the same obstacle target, and outputting the associated dynamic obstacle target library {x, y, w, h, c, id, v, o}; wherein N is the number of frames input to the LSTM motion prediction model and i is the frame number.
Further, S300 specifically includes:
S310, receiving the updated binary rasterized map output by the laser radar detection unit 21 in the multi-source multi-target detection module 2 and the dynamic obstacle target library formed by the multi-frame target association unit 33;
S320, updating the dynamic obstacle target library by using the information of the updated binary rasterized map; and
S330, updating the real-time obstacle target position and motion information according to the dynamic obstacle target library updated in S320, and outputting the passable area of the vehicle.
The method can accurately acquire the position, the scale, the category and the motion information of the obstacles around the vehicle and the binary rasterized map in the driving process of the vehicle, track the motion trail of multiple targets and form the intelligent vehicle passable area which comprises the binary rasterized map and the dynamic obstacle information and is updated in real time.
Drawings
FIG. 1 is a schematic block diagram of an intelligent vehicle passable area detection method based on multi-source information fusion according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the target box categories in the offline database unit shown in FIG. 1;
FIG. 3 is a functional block diagram of the multi-target tracking module shown in FIG. 1.
Detailed Description
In the drawings, the same or similar reference numerals are used to denote the same or similar elements or elements having the same or similar functions. Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
In this context, "front" is to be understood as corresponding to the direction pointing towards the vehicle head, and "rear" is opposite to "front". "Right" may be understood as the right-hand direction with the driver facing forward, and "left" is opposite to "right". "Upper" is to be understood as corresponding to the direction pointing towards the roof of the vehicle, and "lower" is opposite to "upper".
The method for detecting the passable area of the intelligent vehicle based on multi-source information fusion is suitable for sensor combinations with different configurations; for example, the vehicle-mounted sensors related to this embodiment include a camera, a laser radar and a millimeter wave radar. The laser radar may be a Velodyne VLP-16 16-line laser radar, and the obstacle target information it detects is located in the laser radar coordinate system, specifically comprising the target frame and its coordinates, the large category of the obstacle target (such as Car, Pedestrian, Rider and the like mentioned below), and the relative distance between the obstacle target and the vehicle. The millimeter wave radar may be a Delphi ESR millimeter wave radar (the number of returned radar targets M is 64), and the obstacle target information it detects is located in the millimeter wave radar coordinate system, specifically comprising the target frame and its coordinates and the speed of the target relative to the vehicle. The camera is an IDS UI-5250CP-C-HQ monocular camera, and the obstacle target information it detects is located in the image coordinate system, specifically comprising the target frame and its coordinates, the large category and small category of the obstacle target, and the orientation of the obstacle target. Preferably, the entire intelligent vehicle passable area detection method shown in fig. 1 may be implemented on the Robot Operating System (ROS) development platform, with different modules composed of different packages and multiple sub-functions within a module composed of corresponding nodes.
As shown in fig. 1, the apparatus corresponding to the intelligent vehicle passable area detection method based on multi-source information fusion provided by this embodiment includes a basic function module 1, a multi-source multi-target detection module 2, a multi-target tracking module 3, and a passable area generation module 4.
The basic function module 1 is configured to perform spatial synchronization between multiple vehicle-mounted sensors (such as vehicle-mounted sensors such as a camera, a laser radar, and a millimeter wave radar in the above embodiments) and a coordinate system corresponding to a vehicle, perform time synchronization on obstacle target information, and generate an image database. "between each other" is to be understood as between the on-board sensors and the vehicle.
The multi-source multi-target detection module 2 is configured to collect obstacle target information around the vehicle detected by the vehicle-mounted sensors, and output a static obstacle target library and obstacle detection information output by the three vehicle-mounted sensors (e.g., an input block shown in fig. 3). The static obstacle target library is a binary rasterized map which is detected by a laser radar and comprises road boundary information.
The multi-target tracking module 3 is used for receiving the obstacle target information collected by the multi-source multi-target detection module 2, combining the space-time synchronization function of the basic function module 1, performing single-frame target fusion on the obstacle target information detected by different vehicle-mounted sensors, performing continuous inter-frame multi-target tracking by utilizing motion prediction and multi-frame target association, and outputting a dynamic obstacle target library. The dynamic obstacle target library comprises the position, the size, the category and the tracking ID of the obstacle target, and the movement speed and the direction of the movement speed.
The passable area generating module 4 is used for receiving the static obstacle target library output by the multi-source multi-target detection module 2 and the dynamic obstacle target library output by the multi-target tracking module 3, updating the dynamic obstacle target library according to the information of the static obstacle target library, forming real-time obstacle target information and generating a passable area.
The method for detecting the passable area of the intelligent vehicle based on multi-source information fusion can provide real-time updated passable area information for the intelligent vehicle; because it can also output the motion trajectories of the multiple targets around the vehicle, it can further be used for collision early warning or active collision avoidance of the intelligent vehicle and provide a basis for intelligent vehicle decision making.
The respective modules in the above embodiments will be explained in detail below.
The basic function module 1 includes a spatiotemporal synchronization unit 11, a sensor driving unit 12, and an offline database unit 13.
The space-time synchronization unit 11 is used to perform spatial calibration and data space-time synchronization between the multiple vehicle-mounted sensors and the vehicle. That is, the space-time synchronization unit 11 has a laser radar-camera calibration function, a millimeter wave radar-camera calibration function, a camera-vehicle calibration function, and a data space-time synchronization function. The spatial calibration between the multiple vehicle-mounted sensors and the vehicle is carried out between different coordinate systems through the rotation and translation mapping matrix relation between corresponding points of the different coordinate systems. The "different coordinate systems" include the laser radar coordinate system, the millimeter wave radar coordinate system, the image coordinate system and the vehicle coordinate system. Time synchronization between the data collected by each vehicle-mounted sensor is realized by using the timestamps and the frame rates.
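For illustration only (this sketch is not part of the original disclosure), the following minimal Python example shows the kind of operations the space-time synchronization unit performs: a rotation-translation mapping between two sensor coordinate systems and nearest-timestamp pairing between data streams. The function names, the 50 ms gate and the example extrinsics are assumptions.

    import numpy as np

    def transform_points(points, R, t):
        """Map Nx3 points from a source sensor frame to a target frame: p' = R @ p + t."""
        return points @ R.T + t

    def time_align(stamps_a, stamps_b, max_dt=0.05):
        """Pair each timestamp in stream A with the nearest timestamp in stream B
        (both in seconds); pairs farther apart than max_dt are dropped."""
        pairs = []
        for ia, ta in enumerate(stamps_a):
            ib = int(np.argmin(np.abs(np.asarray(stamps_b) - ta)))
            if abs(stamps_b[ib] - ta) <= max_dt:
                pairs.append((ia, ib))
        return pairs

    # Example: assumed lidar-to-vehicle extrinsics and two 10 Hz streams with a small offset.
    R = np.eye(3); t = np.array([1.2, 0.0, 1.5])
    pts_vehicle = transform_points(np.random.rand(5, 3), R, t)
    print(time_align([0.0, 0.1, 0.2], [0.01, 0.11, 0.22]))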
The sensor driving unit 12 is used for driving analysis and data distribution of the in-vehicle sensor. In this embodiment, the sensor driving unit 12 establishes a driving analysis and data topic (topic) publishing node (node) of the laser radar, the millimeter wave radar, and the camera based on the ROS robot development platform, and has a laser radar driving function, a millimeter wave radar driving function, and a camera driving function.
The offline database unit 13 is used to generate an offline database, which includes a point cloud target box truth database DB1 and an image target box truth database DB 2. Wherein:
the point cloud target box truth database DB1 is used to mark a two-dimensional target box on a two-dimensional overhead point cloud map generated from lidar data. The point cloud target box true value database DB1 is obtained by the following steps: using the existing marking software, three types of point cloud target frames including Car, Pedstrian and Rider are marked in the two-dimensional overlook point cloud picture obtained in the following step S113, and a point cloud target frame truth value database DB1 is formed.
The image target box truth database DB2 is used to mark two-dimensional target frames on the image plane of the image data. The image target frame truth database DB2 is obtained as follows: three types of two-dimensional target frames, namely Car, Pedestrian and Rider, are marked on the image plane of the image data, and each two-dimensional target frame is marked with the movement orientation and tracking ID of each obstacle target, forming the image target frame truth database DB2. As shown in fig. 2, Car = {car, bus, van, truck, otherCar}, where in the braces: car refers to a common passenger car, bus refers to a bus or coach, van refers to a van or light truck, truck refers to a truck, and otherCar refers to other types of motor vehicles. Pedestrian = {pedestrian, dummy}, where in the braces: pedestrian refers to a pedestrian and dummy refers to a dummy.
Rider = {cycle, moped, scooter, tricycle, motorcycle, otherRider}, where in the braces: cycle refers to a bicycle, moped refers to a motor-and-pedal dual-purpose electric vehicle with pedals, scooter refers to an electric vehicle without pedals, tricycle refers to an express delivery tricycle, motorcycle refers to a motorcycle, and otherRider refers to other types of riding tools.
Because the two-dimensional overhead point cloud images generated from the laser radar data and the images collected by the camera are both two-dimensional images, the same database marking tool and method can be used. Meanwhile, because the two-dimensional overhead point cloud image and the image acquired by the camera contain the same categories of obstacle targets to be detected, the same deep-learning YOLOv3 target detection framework can be adopted to pre-train the target detection models; different target learning categories are designed for the different databases (DB1 and DB2) (the two-dimensional overhead point cloud image comprises only the three large categories, while the monocular image comprises the three large categories and 13 subclasses, as shown in fig. 2), and different model parameters are learned to obtain YOLOv3 target detection models for the two-dimensional overhead point cloud image and the monocular image, wherein: the YOLOv3 target detection model for the two-dimensional overhead point cloud image is hereinafter referred to as the YOLOv3_LiDAR target detection model, and the YOLOv3 target detection model for the monocular image is hereinafter referred to as the YOLOv3_Camera target detection model.
The multi-source multi-target detection module 2 includes a laser radar detection unit 21, a millimeter wave radar detection unit 22, and an image detection unit 23.
The laser radar detection unit 21 is configured to collect a three-dimensional point cloud image output by a laser radar, analyze the three-dimensional point cloud image, generate a two-dimensional overhead point cloud image, perform target detection through a pre-trained target detection model, and generate an obstacle target detection frame with position, category, and depth information and a binary rasterized map.
In one embodiment, the specific operation process of the lidar detection unit 21 includes the following steps S111 to S115:
S111, collecting data returned by the laser radar: after the laser radar is driven by the sensor driving unit 12 in the basic function module 1, the three-dimensional point cloud image returned by the laser radar is obtained from the Ethernet interface.
S112, analyzing the three-dimensional point cloud image received in S111 to obtain three-dimensional point cloud scanning points. Each three-dimensional point cloud scanning point is expressed as a vector L_i = {X_i, Y_i, Z_i, r_i}, wherein: X_i represents the lateral offset of the ith scanning point relative to the origin of the laser radar coordinate system, positive to the right; Y_i represents the longitudinal offset of the ith scanning point relative to the origin of the laser radar coordinate system, positive to the front; Z_i represents the vertical offset of the ith scanning point relative to the origin of the laser radar coordinate system, positive upward; r_i represents the reflection intensity of the ith scanning point, which to a certain extent reflects the laser radar pulse echo intensity at that point.
S113, converting the three-dimensional point cloud scanning points obtained by the analysis in S112 into a two-dimensional overhead point cloud image: in order to guarantee the real-time performance of passable area detection, to share the YOLOv3 target detection framework with the image plane acquired by the camera, and to facilitate coordinate conversion between the laser radar and the camera, the three-dimensional point cloud scanning points (in the OXYZ three-dimensional coordinate system) obtained by the analysis in S112 are projected onto the OXY two-dimensional plane, realizing planarization of the three-dimensional point cloud scanning points and generating the two-dimensional overhead point cloud image {X_i, Y_i}.
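As a hedged illustration of the planarization in S113 (grid resolution and detection range are assumed values, not taken from the patent), the Python sketch below rasterizes the (X_i, Y_i) scan points into a top-view image.

    import numpy as np

    def point_cloud_to_topview(points_xy, x_range=(-20.0, 20.0), y_range=(0.0, 40.0), res=0.1):
        """Rasterize 2-D (X, Y) scan points into a top-view occupancy image.
        Cells hit by at least one point are set to 1, all others to 0."""
        w = int((x_range[1] - x_range[0]) / res)
        h = int((y_range[1] - y_range[0]) / res)
        img = np.zeros((h, w), dtype=np.uint8)
        cols = ((points_xy[:, 0] - x_range[0]) / res).astype(int)
        rows = ((points_xy[:, 1] - y_range[0]) / res).astype(int)
        keep = (cols >= 0) & (cols < w) & (rows >= 0) & (rows < h)
        img[rows[keep], cols[keep]] = 1
        return img

    # Example with random points in front of the vehicle.
    topview = point_cloud_to_topview(np.random.uniform([-10, 0], [10, 30], size=(1000, 2)))
    print(topview.shape, topview.sum())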
And S114, obtaining an obstacle target detection frame and a binary rasterized map comprising road boundary point information according to the two-dimensional overhead point cloud map obtained in the S113.
S115, updating the binarized rasterized map obtained in S114 in combination with the obstacle target information generated by the YOLOv3_LiDAR target detection model.
In one embodiment, the obtaining method of the obstacle target detection box in S114 specifically includes S1141a and S1141 b:
S1141a, training and generating the YOLOv3_LiDAR target detection model: perform parameter learning on the YOLOv3 model according to the point cloud target frame truth database DB1 to generate the YOLOv3_LiDAR target detection model.
S1141b, detecting obstacle targets: detect obstacle targets on the two-dimensional overhead point cloud image by using the YOLOv3_LiDAR target detection model obtained in S1141a, and output obstacle target information, wherein the obstacle target information comprises the position and the large category of the obstacle target.
In one embodiment, the acquiring method of the binarized rasterized map in S114 specifically includes S1142a and S1142 b:
S1142a, detecting obstacle targets: 0/1 binarized obstacle target detection is carried out in the two-dimensional overhead point cloud image obtained in S113 by using a Euclidean point clustering method, and an initial binarized rasterized map composed of the areas where the obstacle targets are located is output.
S1142b, generating a binarized rasterized map including road boundary point information: according to the height information Z_i and the reflection intensity r_i of the three-dimensional point cloud scanning points obtained by the analysis in S112, possible road boundary points are found, the local road boundary is fitted with a quadratic curve, and a binarized rasterized map including road boundary point information is generated.
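A minimal sketch of S1142b under assumed thresholds (the height and reflection-intensity cutoffs and the synthetic example are illustrative only): candidate boundary points are filtered by Z_i and r_i and a quadratic curve is fitted.

    import numpy as np

    def fit_road_boundary(points, z_max=0.3, r_min=20.0):
        """Select candidate boundary points by height Z_i and reflection intensity r_i
        (assumed thresholds) and fit a quadratic curve x = a*y^2 + b*y + c."""
        X, Y, Z, R = points[:, 0], points[:, 1], points[:, 2], points[:, 3]
        mask = (Z < z_max) & (R > r_min)
        if mask.sum() < 3:
            return None
        a, b, c = np.polyfit(Y[mask], X[mask], deg=2)
        return a, b, c

    # Example: noisy points scattered around the boundary x = 0.01*y^2 + 0.1*y + 3.
    y = np.linspace(0, 40, 200)
    pts = np.stack([0.01 * y**2 + 0.1 * y + 3 + np.random.normal(0, 0.05, y.size),
                    y, np.full_like(y, 0.1), np.full_like(y, 30.0)], axis=1)
    print(fit_road_boundary(pts))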
The millimeter wave radar detection unit 22 is configured to collect target information in a CAN format output by the millimeter wave radar, and perform target point analysis, target frame initialization, millimeter wave radar-camera calibration, and mapping parameter self-learning (DB2) on the target information to obtain extension of the target frame detected by the millimeter wave radar.
In one embodiment, the specific operation process of the millimeter wave radar detection unit 22 includes the following steps S121 to S126:
S121, collecting data returned by the millimeter wave radar: after the millimeter wave radar is driven by the sensor driving unit 12 in the basic function module 1, the obstacle target information in CAN format returned by the millimeter wave radar is obtained from the CAN-Ethernet interface; the obstacle target information is presented as millimeter wave radar target frames, and each millimeter wave radar target frame comprises the position and the speed of the target frame.
S122, analyzing the radar targets: analyze the CAN-format obstacle target information received in S121 by using the dedicated DBC file to obtain M pieces of millimeter wave radar target data (M is 64), wherein each piece of millimeter wave radar target data is expressed as a vector R_j = {range_j, angle_rad_j, range_rate_j, lat_rate_j, id_j, width_j}, wherein: range_j represents the relative distance between the center of the jth millimeter wave radar target frame and the origin of the millimeter wave radar coordinate system; angle_rad_j represents the relative angle between the line connecting the center of the jth millimeter wave radar target frame with the origin of the millimeter wave radar coordinate system and the longitudinal direction (the forward direction of the millimeter wave radar); range_rate_j represents the relative speed between the jth millimeter wave radar target frame and the origin of the millimeter wave radar coordinate system; lat_rate_j represents the lateral speed of the jth millimeter wave radar target frame relative to the origin of the millimeter wave radar coordinate system; id_j represents the ID number of the jth millimeter wave radar target frame; width_j represents the width of the jth millimeter wave radar target frame.
S123, initializing a millimeter wave radar target frame: acquire the initialized millimeter wave radar target frame by using the M pieces of millimeter wave radar target data output in S122. This embodiment takes the jth millimeter wave radar target frame (x_j, y_j, v_j) as an example to describe the method for acquiring the initialized millimeter wave radar target frame:
the position (x_j, y_j) and the velocity v_j of the millimeter wave radar target with respect to the origin of the millimeter wave radar coordinate system are obtained from the following equations (1) to (3), wherein (x_j, y_j) is the central point position of the millimeter wave radar target frame and pi is a constant with a value such as 3.1415926:
x_j = range_j * sin(angle_rad_j * pi / 180.0)    (1)
y_j = range_j * cos(angle_rad_j * pi / 180.0)    (2)
v_j = range_rate_j    (3)
if the millimeter wave radar does not return the width information width_j, width_j is assumed to be 1 meter; the length of the millimeter wave radar target is taken as l_j = width_j, i.e. l_j = w_j, and the initialization of the millimeter wave radar target frame is completed.
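The following Python sketch restates formulas (1) to (3) and the width default; the assumption that angle_rad_j is delivered in degrees (hence the pi/180 factor) follows the formulas above, and the function name is illustrative.

    import math

    def init_radar_target(range_j, angle_rad_j, range_rate_j, width_j=None):
        """Initialize a millimeter wave radar target frame from polar measurements.
        angle_rad_j is the parsed angle in degrees (assumption); formulas (1)-(3)
        convert it with pi/180 to obtain the Cartesian position and the speed."""
        x_j = range_j * math.sin(angle_rad_j * math.pi / 180.0)   # formula (1)
        y_j = range_j * math.cos(angle_rad_j * math.pi / 180.0)   # formula (2)
        v_j = range_rate_j                                        # formula (3)
        w_j = 1.0 if width_j is None else width_j                 # default width of 1 m
        l_j = w_j                                                 # length taken equal to width
        return {"x": x_j, "y": y_j, "v": v_j, "w": w_j, "l": l_j}

    print(init_radar_target(range_j=25.0, angle_rad_j=10.0, range_rate_j=-1.5))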
S124, calibrating the millimeter wave radar-camera: collect K points in the shared area of the millimeter wave radar coordinate system and the image coordinate system; for a given point, the coordinate point returned by the millimeter wave radar is (x_rad, y_rad) and the coordinate point returned by the camera is (x_cam, y_cam). Obtain the millimeter wave radar-camera calibration parameters (A_rad2cam, L_rad2cam), where A_rad2cam is a 2 x 3 transformation (rotation) matrix and L_rad2cam is a 2 x 1 translation matrix.
For example: establish the equation for converting points of the millimeter wave radar coordinate system into the image coordinate system by using the perspective transformation relation (shown in the following formula (4)), and solve for the optimal parameters by the least square method to obtain the millimeter wave radar-camera calibration parameters (A_rad2cam, L_rad2cam). Since expressions (5) and (6) have 8 parameters in total, the value of K is not less than 8; in practice, K is 64, and A_rad2cam and L_rad2cam can be obtained by calculation by combining expressions (4) to (6).
[Formulas (4), (5) and (6) are available only as images in the original publication; they express the perspective transformation from the millimeter wave radar coordinate system to the image coordinate system with the parameters (A_rad2cam, L_rad2cam).]
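Because formulas (4) to (6) are available only as images, the Python sketch below uses an illustrative linear homogeneous form of the radar-to-image mapping, solved by least squares from K = 64 correspondences as described above; the exact perspective formulation of the patent may differ, and folding the translation into the 2 x 3 matrix is an assumption.

    import numpy as np

    def calibrate_radar_to_camera(radar_pts, image_pts):
        """Fit the radar-to-image mapping from K >= 8 point correspondences by
        least squares. Assumed form: [x_cam, y_cam]^T = M @ [x_rad, y_rad, 1]^T,
        with M a 2x3 matrix whose last column plays the role of L_rad2cam."""
        K = radar_pts.shape[0]
        X = np.hstack([radar_pts, np.ones((K, 1))])         # K x 3 design matrix
        M_T, *_ = np.linalg.lstsq(X, image_pts, rcond=None)  # 3 x 2 solution
        return M_T.T                                         # 2 x 3 mapping

    # Synthetic correspondences: a ground-truth mapping plus a little pixel noise.
    rng = np.random.default_rng(0)
    radar = rng.uniform(-20, 20, size=(64, 2))
    M_true = np.array([[12.0, 0.5, 320.0], [0.2, -9.0, 240.0]])
    image = (np.hstack([radar, np.ones((64, 1))]) @ M_true.T) + rng.normal(0, 0.5, (64, 2))
    print(np.round(calibrate_radar_to_camera(radar, image), 2))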
And S125, converting the M millimeter wave radar target data output by the S122 from a millimeter wave radar coordinate system to an image coordinate system according to the obtained millimeter wave radar-camera calibration parameters obtained in the S124, and forming M image target frames. It specifically includes the following S125a and S125 b:
S125a, self-learning of mapping parameters: use the image target frames marked in the image target frame truth database DB2 to learn the position mapping relationship {λ_x, λ_y, λ_w, λ_h, b_x, b_y} between the millimeter wave radar target output frame and the image target frame converted from the millimeter wave radar coordinate system to the image coordinate system, as shown in formula (7); further update the information of the millimeter wave radar target output frame, correct the conversion deviation between the millimeter wave radar coordinate system and the image coordinate system as well as the errors in the position and width information detected by the millimeter wave radar itself, and estimate the lengths of the multiple targets.
[Formula (7) is available only as an image in the original publication.]
In formula (7), {λ_x, λ_y, λ_w, λ_h, b_x, b_y} are learning parameters; the coordinate point of the real obstacle target in the image corresponding to the obstacle target detected by the millimeter wave radar is expressed as (x_gt, y_gt, w_gt, h_gt), where x_gt is the abscissa of the center of the millimeter wave radar target frame in the millimeter wave radar coordinate system, y_gt is the ordinate of the center of the millimeter wave radar target frame in the millimeter wave radar coordinate system, w_gt is the width of the millimeter wave radar target frame in the millimeter wave radar coordinate system, and h_gt is the height of the millimeter wave radar target frame in the millimeter wave radar coordinate system; the coordinate point of the obstacle target detected by the millimeter wave radar converted from the millimeter wave radar coordinate system into the image coordinate system is expressed as (x_cam, y_cam, w_cam, h_cam), where x_cam is the abscissa of the center of the image target frame in the image coordinate system, y_cam is the ordinate of the center of the image target frame in the image coordinate system, w_cam is the width of the image target frame in the image coordinate system, and h_cam is the height of the image target frame in the image coordinate system.
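Formula (7) itself is available only as an image; a plausible reading, given the six learning parameters, is a per-dimension scale-and-offset mapping from (x_cam, y_cam, w_cam, h_cam) to (x_gt, y_gt, w_gt, h_gt). The Python sketch below fits such parameters by least squares; this form is an assumption for illustration only.

    import numpy as np

    def learn_mapping_params(cam_boxes, gt_boxes):
        """Assumed form of formula (7):
            x_gt = lambda_x * x_cam + b_x,   y_gt = lambda_y * y_cam + b_y,
            w_gt = lambda_w * w_cam,         h_gt = lambda_h * h_cam.
        cam_boxes and gt_boxes are N x 4 arrays of (x, y, w, h)."""
        params = {}
        for i, (s, b) in enumerate([("lambda_x", "b_x"), ("lambda_y", "b_y")]):
            A = np.stack([cam_boxes[:, i], np.ones(len(cam_boxes))], axis=1)
            sol, *_ = np.linalg.lstsq(A, gt_boxes[:, i], rcond=None)
            params[s], params[b] = sol
        for i, s in [(2, "lambda_w"), (3, "lambda_h")]:
            params[s] = float(np.dot(cam_boxes[:, i], gt_boxes[:, i]) /
                              np.dot(cam_boxes[:, i], cam_boxes[:, i]))
        return params

    cam = np.array([[100, 50, 40, 80], [200, 60, 30, 60], [150, 55, 50, 100.0]])
    gt = cam * [1.05, 0.98, 1.1, 1.2] + [3, -2, 0, 0]
    print(learn_mapping_params(cam, gt))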
S125b, target frame expansion: by drawing on the RPN network in the Faster R-CNN target detection model, using the length and width distribution of the image target frames marked in the image target frame truth database DB2, design the lengths and widths of the target candidate frames adapted to the image target frame truth database DB2 with a k-means clustering algorithm (referring to the three sizes and three aspect ratios of the RPN network in Faster R-CNN, k is set to 9), perform extension learning of the millimeter wave radar target output frame, and output as many and as accurate millimeter wave radar target extension frames containing the real obstacle targets as possible.
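As an illustration of the anchor-design step (k = 9 follows the text above; the use of plain Euclidean k-means over width/height pairs, rather than an IoU-based distance, is an assumption), a minimal Python sketch:

    import numpy as np

    def kmeans_anchors(wh, k=9, iters=50, seed=0):
        """Cluster N x 2 (width, height) pairs from the labeled image target frames
        into k candidate-frame sizes with plain k-means (Euclidean distance)."""
        rng = np.random.default_rng(seed)
        centers = wh[rng.choice(len(wh), size=k, replace=False)].astype(float)
        for _ in range(iters):
            d = np.linalg.norm(wh[:, None, :] - centers[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = wh[labels == j].mean(axis=0)
        return centers[np.argsort(centers.prod(axis=1))]

    boxes_wh = np.random.default_rng(1).uniform([10, 20], [200, 300], size=(500, 2))
    print(np.round(kmeans_anchors(boxes_wh), 1))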
The image detection unit 23 is configured to collect the image data captured by the camera, perform laser radar-camera calibration on the image data via the binarized rasterized map output by the laser radar detection unit 21 to generate a region of interest, perform target detection by means of the YOLOv3 model trained with the image target frame truth database DB2 in the basic function module 1, and output the information of the obstacle targets in the image data, including the position, category and orientation information of each target.
In one embodiment, the specific operation of the image detection unit 23 includes the following steps S131 to S136:
S131, collecting data returned by the camera: after the camera is driven by the sensor driving unit 12 in the basic function module 1, the image data returned by the camera is obtained from the Ethernet interface.
And S132, analyzing the image data received in the S131 to acquire a PNG image of the BGR three channels.
S133, calibrating the laser radar-camera: obtain the laser radar-camera calibration parameters (A_lid2cam, L_lid2cam) using a method similar to that described above in step S124.
S134, generating a region of interest: and converting the binarized rasterized map output by the laser radar detection unit 21 in the step S114 from a laser radar coordinate system to a common area in an image coordinate system according to the laser radar-camera calibration parameters obtained in the step S133, and generating an area of interest.
S135, training the YOLOv3 target detection model: according to the image target frame truth database DB2 of the offline database unit 13 in the basic function module 1, perform parameter learning on the YOLOv3 model and generate the YOLOv3_Camera target detection model for performing multi-target detection on the image.
S136, detecting obstacle targets: the YOLOv3_Camera target detection model obtained in S135 is used to perform multi-target detection in the image plane indicated by the region of interest generated in S134, and image data is output. Each obstacle target in the image data is represented in the form of an image target frame (target rectangular position frame), and the information of each obstacle target is expressed as {x, y, w, h, c, o}, where (x, y) is the coordinate point of the upper left corner of the image target frame in the image coordinate system, w is the width of the image target frame, h is the height of the image target frame, c (category) is the large category and small category of the obstacle target, and o (orientation) is the orientation information of the obstacle target.
The multi-target tracking module 3 includes a single-frame target fusion unit 31, a target motion prediction unit 32, and a multi-frame target association unit 33.
The single-frame target fusion unit 31 is used for performing space-time synchronization on the different vehicle-mounted sensors and fusing the obstacle target information in the current frame image (the inputs shown in fig. 3).
In one embodiment, the specific working process of the single-frame target fusion unit 31 includes the following steps S211 to S213:
and S211, receiving the multi-source information output by the multi-source multi-target detection module 2.
S212, calibrating camera-vehicle: obtain the camera-vehicle calibration parameters (A_cam2veh, L_cam2veh) in the same manner as in step S124 above, and convert the target frame in the image coordinate system into a target frame in the vehicle coordinate system.
S213, converting coordinate systems: according to the millimeter wave radar-camera calibration parameters (A_rad2cam, L_rad2cam) obtained in S124 and the laser radar-camera calibration parameters (A_lid2cam, L_lid2cam) obtained in S133, after the obstacle target information around the vehicle detected by the vehicle-mounted sensors of the single-frame image under the same timestamp is spatially synchronized, the obstacle target information is sequentially converted into the image coordinate system and the vehicle coordinate system (according to the ISO standard, the longitudinal direction is x, the lateral direction is y and the vertical direction is z, following the right-hand rule). In the coordinate system conversion process, taking the camera detection result as the reference, the corresponding millimeter wave radar and laser radar information is matched based on the Global Nearest Neighbor (GNN) algorithm to obtain the information of the same obstacle target, including the position, distance, category and speed of the obstacle target.
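A hedged Python sketch of the single-frame fusion matching (the 2 m gate and the Euclidean cost are assumptions): camera detections serve as the reference and a global nearest neighbor assignment pairs them with the detections of another sensor.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def gnn_match(camera_xy, sensor_xy, gate=2.0):
        """Global nearest neighbor association in the vehicle coordinate system:
        minimize the total Euclidean distance between camera targets (reference)
        and targets of another sensor, then drop pairs outside the assumed gate (m)."""
        if len(camera_xy) == 0 or len(sensor_xy) == 0:
            return []
        cost = np.linalg.norm(camera_xy[:, None, :] - sensor_xy[None, :, :], axis=2)
        rows, cols = linear_sum_assignment(cost)
        return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= gate]

    cam = np.array([[10.0, 2.0], [25.0, -1.5]])
    radar = np.array([[24.6, -1.4], [10.3, 1.8], [40.0, 5.0]])
    print(gnn_match(cam, radar))   # -> [(0, 1), (1, 0)]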
The target motion prediction unit 32 performs motion prediction on the obstacle target based on the historical N-frame image data of the obstacle target fused by the single-frame target fusion unit 31.
In one embodiment, the specific operation of the target motion prediction unit 32 includes the following steps S221 to S225:
S221, receiving the obstacle target information in the vehicle coordinate system output by the single-frame target fusion unit 31.
S222, aiming at the three categories Car, Pedestrian and Rider in the obstacle target information of S221, respectively designing three separate long short-term memory (LSTM) networks for motion prediction, involving the position (x, y) and the size (w, h) of the obstacle target.
S223, according to the category c ∈ {Car, Pedestrian, Rider}, dividing the data samples in the image target frame truth database DB2 into the three categories Car, Pedestrian and Rider, and training with the long short-term memory (LSTM) network designed in S222, where the first N frames are input data and the (N+1)th frame is the prediction/output data, forming an LSTM motion prediction model.
S224, among the three types of obstacle targets determined in S223, matching the data (x, y, w, h)_{i-N+1~i+1} of the same obstacle target in consecutive N+1 frames in the image target frame truth database DB2 according to the different tracking IDs. Wherein: N is the number of frames input to the LSTM motion prediction model (the data of the next frame, i.e. frame i+1, is predicted with N frames of historical data up to and including frame i); i is the frame number (the ith frame image), an integer not less than N (since otherwise the number of historical frames is less than N). The frame numbers of the historical N frames of images are: i-N+1, i-N+2, ..., i-1, i. For example: if i is the 12th frame and N is 10, the next frame, i.e. the (i+1)th frame 13, can be predicted by using the ten consecutive frames 3, 4, 5, 6, 7, 8, 9, 10, 11, 12. (x, y) is the position information of the predicted target frame, and (w, h) is the size information of the predicted target frame. The current frame is the ith frame; the previous N frames including the current frame are used as input data, the (i+1)th frame is used as the prediction/output data, and the LSTM motion prediction model is trained to form a single-step LSTM motion prediction model (only one frame is predicted forward, because only the association between the targets of the current frame i and the next frame i+1 is processed). Because the lowest frame rate of the three vehicle-mounted sensors (laser radar, millimeter wave radar and camera) is 10 Hz, the time span of the historical data learned by the LSTM model is 1 s, i.e. 10 frames, and N is 10.
S225, testing the motion data (x, y, w, h)_{i-N+1~i} of the same obstacle target in N consecutive frames with the LSTM model trained in S224, and predicting the motion information (x, y, w, h)_{i+1} of the obstacle target in the next frame.
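A minimal PyTorch sketch of the single-step LSTM motion prediction with N = 10 input frames and one predicted frame; the hidden size, optimizer, toy training data and training loop are assumptions not specified in the patent.

    import torch
    import torch.nn as nn

    class BoxLSTM(nn.Module):
        """Predicts the next-frame box (x, y, w, h) from the previous N frames.
        One such model is trained per class (Car / Pedestrian / Rider)."""
        def __init__(self, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(input_size=4, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 4)

        def forward(self, seq):            # seq: (batch, N, 4)
            out, _ = self.lstm(seq)
            return self.head(out[:, -1])   # (batch, 4)

    N = 10
    model = BoxLSTM()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Toy training data: boxes drifting with constant velocity (stand-in for DB2 tracks).
    t = torch.arange(N + 1, dtype=torch.float32)
    seqs = torch.stack([torch.stack([5 + 0.5 * t, 20 - 0.3 * t,
                                     2 + 0 * t, 1.5 + 0 * t], dim=1)
                        for _ in range(32)])           # (32, N+1, 4)
    x, y = seqs[:, :N], seqs[:, N]

    for _ in range(200):                               # brief training loop
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

    print(model(x[:1]).detach(), y[:1])                # predicted vs. true next frame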
The multi-frame target associating unit 33 is configured to associate the obstacle target detection information of the current frame determined by the target motion predicting unit 32, provide associated multi-obstacle target information { x, y, w, h, c, id, v, o }, and output a dynamic obstacle target library with continuous frame obstacle target motion information.
In one embodiment, the specific working process of the multi-frame target associating unit 33 is as follows:
receiving the motion information (x, y, w, h) of the obstacle target of the current frame output by the target motion prediction unit 32, using the attributes such as the speed, the category, the distance, the orientation and the like corresponding to the obstacle target after fusion output by the single frame target fusion unit 31 as the associated attributes, performing association matching on multiple targets between successive frames by using Hungarian algorithm (Hungarian), giving the same tracking ID number to the same obstacle target, and outputting the information of the multiple targets after association, namely, a dynamic obstacle target library { x, y, w, h, c, ID, v, o }.
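For illustration of the inter-frame association (the IoU-based cost and the matching threshold are assumptions; the patent only names the Hungarian algorithm), a minimal Python sketch using scipy's linear_sum_assignment:

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def iou(a, b):
        """IoU of two boxes given as (x, y, w, h) with (x, y) the top-left corner."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[0] + a[2], b[0] + b[2]), min(a[1] + a[3], b[1] + b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        union = a[2] * a[3] + b[2] * b[3] - inter
        return inter / union if union > 0 else 0.0

    def associate(predicted, detected, min_iou=0.3):
        """Hungarian matching between predicted track boxes and new-frame detections.
        Returns (track_index, detection_index) pairs; unmatched items start/end tracks."""
        cost = np.array([[1.0 - iou(p, d) for d in detected] for p in predicted])
        rows, cols = linear_sum_assignment(cost)
        return [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= min_iou]

    tracks = [(100, 50, 40, 80), (300, 60, 30, 60)]
    dets = [(302, 63, 31, 58), (98, 52, 42, 78)]
    print(associate(tracks, dets))   # matched pairs keep their tracking IDs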
The passable area generating module 4 is used for receiving the static target library or the binary rasterized map output by the multi-source multi-target detection module 2 and the dynamic barrier target library output by the multi-target tracking module, updating the dynamic barrier target library according to the static target library information, forming real-time barrier information and generating a passable area of the vehicle.
The passable area generating module 4 is configured to use the binarized rasterized map output by the laser radar detecting unit 21 as a static target library, use the target with the real-time motion trajectory output by the multi-frame target associating unit 33 as a dynamic obstacle target library, update the dynamic obstacle target library according to the static target library information, and generate a real-time vehicle passable area.
In one embodiment, the passable area generation module 4 specifically works as follows, including the following steps S310 to S330:
S310, receiving the updated binary rasterized map output by the laser radar detection unit 21 in the multi-source multi-target detection module 2 and the dynamic obstacle target library formed by the multi-frame target association unit 33;
and S320, updating the dynamic obstacle target library by using the updated information of the binary rasterized map.
And S330, updating the position and the motion information of the real-time obstacle target according to the updated dynamic obstacle target library of S320, outputting a passable area of the vehicle, wherein the pixel point of the image area with the obstacle target in the passable area is marked as 1, and the pixel point of the image area without the obstacle target is marked as 0, and forming the updated binary rasterized map.
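A minimal Python sketch of the map update in S330 (the grid size, resolution and box-to-cell conversion are assumptions): dynamic obstacle boxes are stamped into a copy of the static binary rasterized map, with 1 marking obstacle cells and 0 marking passable cells.

    import numpy as np

    def update_passable_area(static_map, dynamic_targets, res=0.1, origin=(-20.0, 0.0)):
        """Overlay dynamic obstacle boxes (x, y, w, h in vehicle coordinates, metres)
        onto a copy of the static binary rasterized map; 1 = obstacle, 0 = passable."""
        grid = static_map.copy()
        h_cells, w_cells = grid.shape
        for (x, y, w, h) in dynamic_targets:
            c0 = int((x - origin[0]) / res); c1 = int((x + w - origin[0]) / res)
            r0 = int((y - origin[1]) / res); r1 = int((y + h - origin[1]) / res)
            c0, c1 = max(c0, 0), min(c1, w_cells)
            r0, r1 = max(r0, 0), min(r1, h_cells)
            grid[r0:r1, c0:c1] = 1
        return grid

    static_map = np.zeros((400, 400), dtype=np.uint8)     # 40 m x 40 m at 0.1 m per cell
    updated = update_passable_area(static_map, [(2.0, 10.0, 1.8, 4.5)])
    print(updated.sum(), "cells occupied")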
Finally, it should be pointed out that: the above examples are only for illustrating the technical solutions of the present invention, and are not limited thereto. Those of ordinary skill in the art will understand that: modifications can be made to the technical solutions described in the foregoing embodiments, or some technical features may be equivalently replaced; such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (8)

1. A method for detecting a passable area of an intelligent vehicle based on multi-source information fusion, characterized in that the method comprises the following steps:
S100, acquiring obstacle target information around a vehicle detected by a vehicle-mounted sensor, and outputting a static obstacle target library, wherein the static obstacle target library is a binary rasterized map which is detected by a laser radar and comprises road boundary information;
S200, receiving obstacle target information around the vehicle collected in S100, performing space-time synchronization on the obstacle target information detected by the vehicle-mounted sensor, performing single-frame target fusion on all the detected obstacle information around the vehicle, performing continuous inter-frame multi-target tracking by using motion prediction and multi-frame target association, and outputting a dynamic obstacle target library, wherein the dynamic obstacle target library comprises the position, the size, the category and the tracking ID of an obstacle target, the motion speed and the direction of the obstacle target, and the space-time synchronization comprises a laser radar-camera calibration function, a millimeter wave radar-camera calibration function, a camera-vehicle calibration function and data space-time synchronization; and
S300, receiving the static obstacle target library output in S100 and the dynamic obstacle target library output in S200, updating the dynamic obstacle target library according to the information of the static obstacle target library to form real-time obstacle target information and generate a passable area;
the "single frame target fusion of the obstacle information around all the detected vehicles" in S200 includes:
acquiring camera-vehicle calibration parameters, and converting a target frame in an image coordinate system into a target frame of a vehicle coordinate system;
according to the millimeter wave radar-camera calibration parameters and the laser radar-camera calibration parameters, performing spatial synchronization on the obstacle target information around the vehicle detected by the vehicle-mounted sensors for a single frame under the same timestamp, and then sequentially converting the obstacle target information into the image coordinate system and the vehicle coordinate system; and
matching corresponding millimeter wave radar and laser radar information based on a global nearest neighbor algorithm by taking a camera detection result as a reference to obtain the same obstacle target information, wherein the information comprises the position, the distance, the category and the speed of the obstacle target;
the step S200 of "performing multi-target tracking between consecutive frames using motion prediction and multi-frame target association" includes:
aiming at the Car, Pedestrian and Rider categories among the obstacle targets, respectively designing three independent long short-term memory (LSTM) networks for motion prediction, which relate to the position information and size information of the targets, wherein Car denotes a vehicle, Pedestrian denotes a pedestrian, and Rider denotes a rider;
for each category c ∈ {Car, Pedestrian, Rider}, training a long short-term memory network in which the first N frames are the input data and the (N+1)-th frame is the prediction/output data, so as to form a long short-term memory network motion prediction model;
matching the data (x, y, w, h)_{i-N+1 ~ i+1} of the same obstacle target over N+1 consecutive frames in the image target frame truth value database DB2 according to the different tracking IDs within the three determined categories of obstacle targets, wherein (x, y) is the position information of the predicted target frame, (w, h) is the size information of the predicted target frame, (x, y) is the coordinate of the upper left corner of the target frame in the image coordinate system, w is the width of the target frame, h is the height of the target frame, and the image target frame truth value database DB2 is used to mark two-dimensional target frames on the image plane of the image data;
testing, with the trained long short-term memory network motion prediction model, the motion data (x, y, w, h)_{i-N+1 ~ i} of the same obstacle target over N consecutive frames, and predicting the motion information (x, y, w, h)_{i+1} of the obstacle target in the next frame; and
using the position and scale information of the obstacle targets and the speed, category, distance and orientation attributes of the corresponding fused obstacle targets as association attributes, performing association matching of the multiple targets between consecutive frames with the Hungarian algorithm, giving the same tracking ID number to the same obstacle target, and outputting the associated dynamic obstacle target library {x, y, w, h, c, ID, v, o};
wherein N is the number of frames input to the long short-term memory network motion prediction model, i is the frame index, ID represents the ID number of the millimeter wave radar target frame, v represents the speed of the millimeter wave radar target relative to the origin of the millimeter wave radar coordinate system, c is the major and minor category of the obstacle target, and o is the orientation information of the obstacle target.
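As a hedged illustration of the motion-prediction and association steps of claim 1, the sketch below pairs a per-category LSTM that maps N frames of (x, y, w, h) to the (N+1)-th frame with Hungarian matching via SciPy; the class name BoxLSTM, the hidden size and the center-distance cost are assumptions not fixed by the claim.

```python
import torch
import torch.nn as nn
import numpy as np
from scipy.optimize import linear_sum_assignment

class BoxLSTM(nn.Module):
    """Per-category motion predictor: N frames of (x, y, w, h) in, frame N+1 out."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=4, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 4)

    def forward(self, boxes):               # boxes: (batch, N, 4)
        out, _ = self.lstm(boxes)
        return self.head(out[:, -1, :])     # predicted (x, y, w, h) for frame N+1

def associate(predicted, detected):
    """Hungarian matching between predicted boxes of tracked targets and
    current-frame fused detections, using center distance as the cost."""
    cost = np.linalg.norm(predicted[:, None, :2] - detected[None, :, :2], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))            # (track index, detection index) pairs

# One predictor per category, matching the three independent networks in the claim.
models = {c: BoxLSTM() for c in ("Car", "Pedestrian", "Rider")}
```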
2. The method for detecting the passable area of the intelligent vehicle based on multi-source information fusion of claim 1, wherein S100 specifically comprises:
collecting and parsing a three-dimensional point cloud output by the laser radar to generate a two-dimensional overhead point cloud map;
obtaining an obstacle target detection frame and a binary rasterized map comprising road boundary point information according to the two-dimensional overhead point cloud map; and
updating the binary rasterized map in combination with the obstacle target information generated by a YOLOv3_LiDAR target detection model, wherein the YOLOv3_LiDAR target detection model is generated by performing parameter learning on a YOLOv3 model according to a point cloud target frame truth value database DB1, the point cloud target frame truth value database DB1 is used to mark two-dimensional target frames on the two-dimensional overhead point cloud maps generated from the laser radar data, the YOLOv3 model pre-trains a target detection model by adopting the deep learning target detection YOLOv3 framework, and different target learning categories are designed and different model parameters are learned for different databases, so as to obtain YOLOv3 target detection models for the two-dimensional overhead point cloud map and the monocular image.
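A minimal sketch of the projection from a three-dimensional point cloud to the two-dimensional overhead (bird's-eye-view) map described in claim 2 is given below, assuming a height-encoded grid; the ranges, cell size and height encoding are illustrative choices, since the claim does not fix them.

```python
import numpy as np

def point_cloud_to_bev(points, x_range=(0.0, 60.0), y_range=(-30.0, 30.0), cell=0.1):
    """Project a lidar point cloud (N x 4: x, y, z, intensity) onto a
    two-dimensional overhead (bird's-eye-view) grid image.

    Each cell stores the maximum point height, scaled to 0-255; empty cells
    stay at 0, which is a simplification of whatever encoding the patent uses."""
    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[mask]
    h = int((x_range[1] - x_range[0]) / cell)
    w = int((y_range[1] - y_range[0]) / cell)
    bev = np.zeros((h, w), dtype=np.float32)
    rows = ((pts[:, 0] - x_range[0]) / cell).astype(int)
    cols = ((pts[:, 1] - y_range[0]) / cell).astype(int)
    np.maximum.at(bev, (rows, cols), pts[:, 2])        # keep the highest point per cell
    z_min, z_max = bev.min(), bev.max()
    return ((bev - z_min) / max(z_max - z_min, 1e-6) * 255).astype(np.uint8)
```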
3. The method for detecting the passable area of the intelligent vehicle based on the multi-source information fusion as claimed in claim 2, wherein the method for obtaining the obstacle target detection frame further comprises:
detecting the obstacle targets on the two-dimensional overhead point cloud map by using the YOLOv3_LiDAR target detection model, and outputting obstacle target information, wherein the obstacle target information comprises the position and the major category of each obstacle target.
4. The method for detecting the passable area of the intelligent vehicle based on the multi-source information fusion as claimed in claim 2, wherein the method for acquiring the binary rasterized map specifically comprises the following steps:
S1142a, performing binarized obstacle target detection in the two-dimensional overhead point cloud map by using a Euclidean point clustering method, and outputting an initial binary rasterized map formed by the areas where the obstacle targets are located; and
S1142b, finding candidate road boundary points according to the height information and reflection intensity of the parsed three-dimensional point cloud scanning points, fitting the local road boundary with a quadratic curve, and generating the binary rasterized map comprising the road boundary point information.
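The quadratic road-boundary fitting of S1142b can be sketched as follows, assuming the candidate boundary points have already been selected by height and reflection intensity; the grid geometry used for rasterization is an illustrative assumption.

```python
import numpy as np

def fit_road_boundary(boundary_points):
    """Fit a local road boundary with a quadratic curve y = a*x^2 + b*x + c
    from candidate boundary points (N x 2 array of (x, y) in the vehicle frame)."""
    x, y = boundary_points[:, 0], boundary_points[:, 1]
    a, b, c = np.polyfit(x, y, deg=2)       # highest-degree coefficient first
    return a, b, c

def rasterize_boundary(grid, coeffs, resolution=0.2):
    """Mark the fitted boundary in the binary rasterized map (1 = occupied).
    Mapping rows to longitudinal distance is a simplifying assumption."""
    a, b, c = coeffs
    rows, cols = grid.shape
    for r in range(rows):
        x = r * resolution
        col = int((a * x * x + b * x + c) / resolution)
        if 0 <= col < cols:
            grid[r, col] = 1
    return grid
```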
5. The intelligent vehicle passable area detection method based on multi-source information fusion according to any one of claims 2 to 4, wherein S100 further comprises:
S122, parsing the obstacle target information received in S121 by using a dedicated DBC file to obtain M millimeter wave radar target data;
S123, acquiring an initialized millimeter wave radar target frame according to the following formulas (1) to (3) by using the M millimeter wave radar target data output in S122, wherein (x_j, y_j) is the position of the center point of the millimeter wave radar target frame corresponding to any obstacle target, v_j is the speed of the obstacle target, and pi is a constant:
x_j = range_j * sin(angle_rad * pi / 180.0)   (1)
y_j = range_j * cos(angle_rad * pi / 180.0)   (2)
v_j = range_rate_j   (3)
if the millimeter wave radar does not return the width information width_j, width_j is assumed to be 1 meter and the length of the millimeter wave radar target is set as length_j = width_j, completing the initialization of the millimeter wave radar target frame; range_j represents the relative distance between the center of the j-th millimeter wave radar target frame and the origin of the millimeter wave radar coordinate system, angle_rad represents the relative angle between the line connecting the center of the millimeter wave radar target frame with the origin of the millimeter wave radar coordinate system and the longitudinal direction, and range_rate_j represents the relative speed of the j-th millimeter wave radar target frame with respect to the origin of the millimeter wave radar coordinate system;
S124, collecting the coordinates of K points in the shared area of the millimeter wave radar coordinate system and the image coordinate system, and obtaining the millimeter wave radar-camera calibration parameters; and
S125, converting the M millimeter wave radar target data output in S122 from the millimeter wave radar coordinate system to the image coordinate system according to the millimeter wave radar-camera calibration parameters obtained in S124, forming M image target frames.
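Formulas (1) to (3) of claim 5 translate directly into code; the sketch below initializes one millimeter wave radar target frame, with the 1-meter default width taken from the claim and the dictionary layout chosen only for illustration.

```python
import math

def init_radar_target(range_j, angle_deg, range_rate_j, width_j=None):
    """Initialize a millimeter wave radar target frame from the parsed DBC fields:
    polar range/angle to Cartesian position (formulas (1) and (2)) and the
    radial range rate as speed (formula (3))."""
    x_j = range_j * math.sin(math.radians(angle_deg))   # formula (1)
    y_j = range_j * math.cos(math.radians(angle_deg))   # formula (2)
    v_j = range_rate_j                                  # formula (3)
    if width_j is None:
        width_j = 1.0                                   # default width of 1 meter, per the claim
    length_j = width_j                                  # length set equal to the width
    return {"x": x_j, "y": y_j, "v": v_j, "width": width_j, "length": length_j}
```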
6. The intelligent vehicle passable area detection method based on multi-source information fusion of claim 5, wherein S125 specifically comprises:
S125a, calculating, using formula (7), the position mapping relation {λ_x, λ_y, λ_w, λ_h, b_x, b_y} between the millimeter wave radar target output frame, converted from the millimeter wave radar coordinate system into the image coordinate system, and the image target frame marked in the image target frame truth value database DB2;
(7): [formula image FDA0002751649770000041 — the position mapping between (x_gt, y_gt, w_gt, h_gt) and (x_cam, y_cam, w_cam, h_cam) parameterized by the learning parameters {λ_x, λ_y, λ_w, λ_h, b_x, b_y}]
in formula (7), {λ_x, λ_y, λ_w, λ_h, b_x, b_y} are learning parameters; the coordinate point of the obstacle target detected by the millimeter wave radar corresponding to the real obstacle target in the image is expressed as (x_gt, y_gt, w_gt, h_gt), where x_gt is the abscissa of the center of the millimeter wave radar target frame in the millimeter wave radar coordinate system, y_gt is the ordinate of the center of the millimeter wave radar target frame in the millimeter wave radar coordinate system, w_gt is the width of the millimeter wave radar target frame in the millimeter wave radar coordinate system, and h_gt is the height of the millimeter wave radar target frame in the millimeter wave radar coordinate system; the coordinate point obtained by converting the obstacle target detected by the millimeter wave radar from the millimeter wave radar coordinate system into the image coordinate system is expressed as (x_cam, y_cam, w_cam, h_cam), where x_cam is the abscissa of the center of the image target frame in the image coordinate system, y_cam is the ordinate of the center of the image target frame in the image coordinate system, w_cam is the width of the image target frame in the image coordinate system, and h_cam is the height of the image target frame in the image coordinate system;
S125b, drawing on the RPN in the Faster R-CNN target detection model, using the length and width distribution of the image target frames marked in the image target frame truth value database DB2 and adopting a k-means clustering algorithm to design the lengths and widths of target candidate frames adapted to the image target frame truth value database DB2, performing extension learning on the millimeter wave radar target output frame, and outputting millimeter wave radar target extension frames that contain the real obstacle targets as completely and accurately as possible.
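The k-means design of candidate frame sizes in S125b can be sketched as below, clustering the (width, height) pairs of the DB2 target frames in the spirit of RPN/YOLO anchor design; the number of clusters and the use of scikit-learn are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def design_candidate_boxes(gt_boxes, k=9, seed=0):
    """Cluster the (width, height) pairs of the target frames marked in the
    image target frame truth value database to obtain k candidate-frame sizes.

    gt_boxes: iterable of (x, y, w, h) rows; k and the clustering backend are
    illustrative choices, not fixed by the claim."""
    wh = np.asarray(gt_boxes)[:, 2:4]            # take (w, h) from (x, y, w, h) rows
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(wh)
    centers = km.cluster_centers_
    # Sort the cluster centers by area so small candidate frames come first.
    return centers[np.argsort(centers[:, 0] * centers[:, 1])]
```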
7. The method for detecting the passable area of the intelligent vehicle based on multi-source information fusion of claim 6, wherein S100 further comprises:
S131, collecting the image data returned by the camera;
S132, parsing the image data received in S131 to obtain a three-channel BGR PNG image;
S133, acquiring the laser radar-camera calibration parameters;
S134, converting the binary rasterized map comprising the road boundary point information from the laser radar coordinate system to the common area of the image coordinate system according to the laser radar-camera calibration parameters obtained in S133, and generating a region of interest;
S135, performing parameter learning on the YOLOv3 model according to the image target frame truth value database DB2 to generate a YOLOv3_Camera target detection model for multi-target detection on images; and
S136, performing multi-target detection on the image plane indicated by the region of interest generated in S134 by using the YOLOv3_Camera target detection model obtained in S135, and outputting image data, wherein the information of each obstacle target in the image data is recorded as {x, y, w, h, c, o}.
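As a hedged sketch of S134 (generating the region of interest from the laser radar-camera calibration), the code below projects road-boundary points into the image with a pinhole model and takes their bounding rectangle as the ROI passed to YOLOv3_Camera; the matrix conventions and the rectangular ROI are simplifying assumptions.

```python
import numpy as np

def region_of_interest(boundary_points_lidar, T_cam_lidar, K, image_shape):
    """Project lidar road-boundary points (N x 3) into the image using the
    lidar-camera extrinsics T_cam_lidar (4x4) and camera intrinsics K (3x3),
    and return the bounding rectangle of the projected points in pixels."""
    pts = np.hstack([boundary_points_lidar, np.ones((len(boundary_points_lidar), 1))])
    cam = (T_cam_lidar @ pts.T)[:3]                  # lidar frame -> camera frame
    cam = cam[:, cam[2] > 0]                         # keep points in front of the camera
    uv = K @ cam
    uv = (uv[:2] / uv[2]).T                          # perspective division -> pixel coordinates
    h, w = image_shape[:2]
    uv = uv[(uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)]
    u0, v0 = uv.min(axis=0).astype(int)
    u1, v1 = uv.max(axis=0).astype(int)
    return u0, v0, u1, v1                            # ROI rectangle in pixel coordinates
```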
8. The method for detecting the passable area of the intelligent vehicle based on the multi-source information fusion of claim 1, wherein the step S300 specifically comprises the following steps:
S310, receiving the updated binary rasterized map output by the laser radar detection unit 21 in the multi-source multi-target detection module 2 and the dynamic obstacle target library formed by the multi-frame target association unit 33;
S320, updating the dynamic obstacle target library by using the updated information of the binary rasterized map; and
S330, updating the real-time obstacle target positions and motion information according to the dynamic obstacle target library updated in S320, and outputting the passable area of the vehicle.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910007212.5A CN109829386B (en) 2019-01-04 2019-01-04 Intelligent vehicle passable area detection method based on multi-source information fusion

Publications (2)

Publication Number Publication Date
CN109829386A CN109829386A (en) 2019-05-31
CN109829386B true CN109829386B (en) 2020-12-11

Family

ID=66860082

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910007212.5A Active CN109829386B (en) 2019-01-04 2019-01-04 Intelligent vehicle passable area detection method based on multi-source information fusion

Country Status (1)

Country Link
CN (1) CN109829386B (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant