CN109829386A - Traversable area detection method for intelligent vehicles based on multi-source information fusion - Google Patents

Traversable area detection method for intelligent vehicles based on multi-source information fusion

Info

Publication number
CN109829386A
CN109829386A CN201910007212.5A
Authority
CN
China
Prior art keywords
target
frame
information
obstacle
millimetre
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910007212.5A
Other languages
Chinese (zh)
Other versions
CN109829386B (en)
Inventor
李克强
熊辉
余大蒙
王建强
王礼坤
许庆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201910007212.5A priority Critical patent/CN109829386B/en
Publication of CN109829386A publication Critical patent/CN109829386A/en
Application granted granted Critical
Publication of CN109829386B publication Critical patent/CN109829386B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The invention discloses a traversable area detection method for intelligent vehicles based on multi-source information fusion. The method comprises: S100, collecting the obstacle target information around the vehicle detected by onboard sensors, and outputting a static obstacle target library; S200, receiving the obstacle target information around the vehicle, performing space-time synchronization on the obstacle target information detected by the onboard sensors, then performing single-frame target fusion on all the detected obstacle information around the vehicle, carrying out multi-target tracking across consecutive frames using motion prediction and multi-frame target association, and outputting a dynamic obstacle target library; S300, receiving the static obstacle target library and the dynamic obstacle target library output by S200, updating the dynamic obstacle target library according to the information of the static obstacle target library, forming real-time obstacle target information, and generating the traversable area. During vehicle travel, the invention can accurately obtain the position, scale, category and motion information of obstacles around the vehicle together with a binarized rasterized map, track the motion trajectories of multiple targets, and form a traversable area for the intelligent vehicle, updated in real time, that includes the binarized rasterized map and dynamic obstacle information.

Description

Traversable area detection method for intelligent vehicles based on multi-source information fusion
Technical field
The present invention relates to the technical field of automatic driving, and more particularly to a traversable area detection method for intelligent vehicles based on multi-source information fusion.
Background art
Intelligent vehicles realize automatic driving under various traffic scenarios through technical means such as environment perception, map navigation, trajectory planning and decision control. The popularization of intelligent vehicles plays a positive role in alleviating traffic congestion, improving traffic safety, reducing fuel consumption and reducing environmental pollution. Governments, enterprises, research institutions and universities have invested large amounts of manpower and material resources in both the theoretical research and the practical development of automatic driving technologies, in the hope that automatic driving vehicles will enter daily life as early as possible and let people share the benefits that automatic driving brings. An intelligent vehicle perceives and localizes its surrounding environment through onboard sensors such as cameras, millimeter-wave radar, lidar, GPS (Global Positioning System) and IMU (Inertial Measurement Unit); based on map information, obstacle information, and the positioning and navigation information of the ego vehicle, it performs real-time trajectory planning and distributes decision information such as longitudinal/lateral speed and steering wheel angle to the vehicle's low-level control units through the CAN bus, realizing concrete operations such as acceleration, deceleration, braking and steering.
Traversable area detection for intelligent vehicles includes road edge recognition and obstacle detection, both of which are results of environment perception and fusion based on the onboard sensors, and it forms the basis on which the intelligent vehicle performs trajectory planning. Traversable area detection is an important component of the environment perception technology for automatic driving and provides the basis for the real-time trajectory planning of intelligent vehicles. The accuracy of traversable area detection helps improve the intelligence level of the trajectory planning subsystem and is of great importance to the subsequent decision and control of the intelligent vehicle, thereby improving the overall intelligence level of the vehicle, letting different traffic participants keep to their own paths, and ensuring a safe and orderly traffic environment. Research on traversable area detection methods can therefore provide the information needed for real-time trajectory planning, enable the intelligent vehicle to travel safely and orderly within the traversable area, prevent collision accidents, and safeguard the traffic safety of all traffic participants.
At present there is considerable research on traversable area detection methods for intelligent vehicles. The sensor types used include monocular cameras, binocular cameras and lidar; the road types involved include structured roads with clear or blurred lane lines and unstructured roads without lane lines; the detection targets are lane boundaries, or the road surface and obstacles. On structured roads, camera-based methods use traditional non-parametric learning, machine learning or deep learning to extract and classify features such as lane lines, pedestrians and vehicles, obtaining the position of road boundaries and the position and category of obstacle targets, but they lack the dynamic motion information of targets such as pedestrians and vehicles. Lidar-based methods first segment the road boundary using lane-line reflection intensity and road-surface height information, then filter out obstacles by clustering, and finally fuse the road boundary and obstacles to output the traversable area; such methods have limited precision, lack obstacle category information, and do not make use of obstacle target trajectory tracking. On unstructured roads, especially road surfaces lacking lane markings, road-surface pixel segmentation methods based on deep learning are common, but they require pixel-level labeling of images in advance.
On the whole, traversable area detection for intelligent vehicles at this stage has problems in the following aspects: 1) methods are generally not applicable to both structured and unstructured roads; 2) the motion characteristics of different categories of obstacle targets are not fully considered, and simply predicting target motion with linear or nonlinear models cannot effectively achieve real-time updating of obstacle information; 3) multi-target orientation and trajectory tracking functions are lacking, as is the tracking and management of multi-target trajectories in real road scenes; 4) the onboard sensors commonly equipped on intelligent vehicles are not fully utilized, for example the use of lidar depth maps in target detection, and the use of the width and velocity information returned by millimeter-wave radar; 5) on unstructured roads, deep-learning road-surface pixel segmentation methods require pixel-level labeling of images, at high labor cost.
Thus, it is desirable to have a technical solution that overcomes or at least mitigates at least one of the above drawbacks of the prior art.
Summary of the invention
The purpose of the present invention is to provide a traversable area detection method for intelligent vehicles based on multi-source information fusion, so as to overcome or at least mitigate at least one of the above drawbacks of the prior art.
To achieve the above purpose, the present invention provides a traversable area detection method for intelligent vehicles based on multi-source information fusion, the method comprising:
S100, collecting the obstacle target information around the vehicle detected by onboard sensors, and outputting a static obstacle target library;
S200, receiving the obstacle target information around the vehicle collected in S100, performing space-time synchronization on the obstacle target information detected by the onboard sensors, then performing single-frame target fusion on all the detected obstacle information around the vehicle, and finally carrying out multi-target tracking across consecutive frames using motion prediction and multi-frame target association, outputting a dynamic obstacle target library; and
S300, receiving the static obstacle target library output by S100 and the dynamic obstacle target library output by S200, updating the dynamic obstacle target library according to the information of the static obstacle target library, forming real-time obstacle target information, and generating the traversable area.
Further, S100 specifically includes:
collecting and parsing the three-dimensional point cloud image output by the lidar, and generating a two-dimensional top-view point cloud map;
obtaining obstacle target detection frames and a binarized rasterized map including road boundary point information according to the two-dimensional top-view point cloud map; and
updating the binarized rasterized map by combining the obstacle target information generated by the YOLOv3_LiDAR target detection model.
Further, the method for obtaining the obstacle target detection frames specifically includes:
S1141a, performing parameter learning on the YOLOv3 model according to the point cloud target frame ground-truth database DB1, and generating the YOLOv3_LiDAR target detection model;
S1141b, performing obstacle target detection on the two-dimensional top-view point cloud map using the YOLOv3_LiDAR target detection model obtained in S1141a, and outputting obstacle target information, the obstacle target information including the position and major category of each obstacle target.
Further, the method for obtaining the binarized rasterized map specifically includes:
S1142a, performing binarized obstacle target detection in the two-dimensional top-view point cloud map using a Euclidean clustering method, and outputting an initial binarized rasterized map composed of obstacle target regions;
S1142b, finding possible road boundary points according to the elevation information and reflection intensity of the three-dimensional point cloud scanning points obtained by parsing, fitting the local road boundary with a quadratic curve, and generating the binarized rasterized map including road boundary point information.
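As an illustrative sketch of the quadratic-curve boundary fitting in S1142b, candidate boundary points can be pre-selected by elevation and reflection intensity and then fitted with numpy; the thresholds `z_max` and `r_min` below are assumed values for the sketch, not the patent's parameters.

```python
import numpy as np

def fit_road_boundary(points, z_max=0.3, r_min=40.0):
    """Fit a quadratic curve x = a*y^2 + b*y + c to candidate
    road-boundary points.

    points: (N, 4) array of lidar scan points [x, y, z, reflectance].
    z_max, r_min: assumed elevation / reflection-intensity thresholds
    used to pre-select boundary candidates.
    """
    x, y, z, r = points.T
    mask = (z < z_max) & (r > r_min)     # low, strongly reflective points
    if mask.sum() < 3:
        return None                      # not enough candidates to fit
    coeffs = np.polyfit(y[mask], x[mask], deg=2)  # quadratic in y
    return np.poly1d(coeffs)

# Synthetic boundary x = 0.01*y^2 + 3 with compliant z and reflectance
ys = np.linspace(0.0, 40.0, 50)
pts = np.stack([0.01 * ys**2 + 3.0, ys, np.full_like(ys, 0.1),
                np.full_like(ys, 80.0)], axis=1)
curve = fit_road_boundary(pts)
print(round(float(curve(20.0)), 2))  # ≈ 7.0
```

The fitted polynomial can then be sampled per grid row to write boundary points into the binarized rasterized map.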
Further, S100 specifically further includes:
S122, parsing the CAN-format obstacle target information received in S121 using a dedicated DBC file, to obtain M millimeter-wave radar target data;
S123, obtaining the initialized millimeter-wave radar target frames from the M millimeter-wave radar target data output by S122 according to the following formulas (1) to (3), where (x_j, y_j) is the center position of the millimeter-wave radar target frame corresponding to any obstacle target, v_j is the speed of that obstacle target, and pi is the circular constant:
x_j = range_j * sin(angle_rad * pi / 180.0) (1)
y_j = range_j * cos(angle_rad * pi / 180.0) (2)
v_j = range_rate_j (3)
If the millimeter-wave radar does not return the width information width_j, the width width_j is assumed to be 1 meter, and the length of the millimeter-wave radar target is length_j = width_j, denoted l_j = w_j, completing the initialization of the millimeter-wave radar target frames;
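Formulas (1) to (3) and the 1 m width assumption can be sketched as follows; the function and field names are illustrative, not the patent's DBC signal names.

```python
import math

def init_radar_target(range_m, angle_deg, range_rate, width=None):
    """Initialize a millimeter-wave radar target frame from one return,
    following formulas (1)-(3): polar (range, angle) -> Cartesian (x, y),
    with the range rate taken as the target speed. If the radar returns
    no width, a 1 m square frame is assumed, as in the text above."""
    angle_rad = math.radians(angle_deg)      # angle_deg * pi / 180.0
    x = range_m * math.sin(angle_rad)        # lateral offset,      (1)
    y = range_m * math.cos(angle_rad)        # longitudinal offset, (2)
    v = range_rate                           # target speed,        (3)
    w = 1.0 if width is None else width      # assumed 1 m width
    l = w                                    # length = width (l_j = w_j)
    return {"x": x, "y": y, "v": v, "w": w, "l": l}

t = init_radar_target(range_m=10.0, angle_deg=30.0, range_rate=-2.5)
print(round(t["x"], 3), round(t["y"], 3), t["v"], t["w"])  # 5.0 8.66 -2.5 1.0
```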
S124, collecting the coordinates of K points in the shared region of the millimeter-wave radar coordinate system and the image coordinate system, to obtain the millimeter-wave radar-camera calibration parameters;
S125, transforming the M millimeter-wave radar target data output by S122 from the millimeter-wave radar coordinate system into the image coordinate system according to the millimeter-wave radar-camera calibration parameters obtained in S124, forming M image target frames.
Further, S125 specifically includes:
S125a, using formula (7) and the image target frames marked in the image target frame ground-truth database DB2, learning the position mapping relationship {λ_x, λ_y, λ_w, λ_h, b_x, b_y} between the millimeter-wave radar target output boxes transformed from the millimeter-wave radar coordinate system into the image coordinate system and the image target frames;
In formula (7), {λ_x, λ_y, λ_w, λ_h, b_x, b_y} are the learning parameters. The coordinates of the real obstacle target in the image corresponding to an obstacle target detected by the millimeter-wave radar are expressed as (x_gt, y_gt, w_gt, h_gt), where x_gt is the abscissa of the center of the millimeter-wave radar target frame in the millimeter-wave radar coordinate system, y_gt is the ordinate of that center in the millimeter-wave radar coordinate system, w_gt is the width of the millimeter-wave radar target frame in the millimeter-wave radar coordinate system, and h_gt is its height. The coordinates of the obstacle target detected by the millimeter-wave radar after transformation from the millimeter-wave radar coordinate system into the image coordinate system are expressed as (x_cam, y_cam, w_cam, h_cam), where x_cam is the abscissa of the center of the image target frame in the image coordinate system, y_cam is the ordinate of that center in the image coordinate system, w_cam is the width of the image target frame in the image coordinate system, and h_cam is its height;
S125b, drawing on the RPN network in the Faster R-CNN target detection model, using the length-width distribution of the image target frames marked in the image target frame ground-truth database DB2 together with a k-means clustering algorithm to design target candidate frame lengths and widths adapted to DB2, and performing extension learning of the millimeter-wave radar target output boxes, so as to output as many accurate millimeter-wave radar target extension frames containing the real obstacle targets as possible.
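A minimal sketch of clustering the marked target frames' width-height distribution with k-means to design candidate frame sizes, as in S125b. Plain Euclidean k-means with deterministic initialization is used here for simplicity; YOLO-style anchor selection often uses an IoU-based distance instead, and the sample boxes are synthetic.

```python
def kmeans_box_sizes(boxes, k=3, iters=20):
    """Cluster the (w, h) pairs of marked target frames with plain
    k-means to obtain k candidate frame sizes. Initialization is
    deterministic (evenly spaced samples of the sorted boxes)."""
    srt = sorted(boxes)
    centers = [srt[i * len(srt) // k] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for w, h in boxes:
            nearest = min(range(k),
                          key=lambda c: (w - centers[c][0]) ** 2 +
                                        (h - centers[c][1]) ** 2)
            clusters[nearest].append((w, h))
        centers = [
            (sum(w for w, _ in cl) / len(cl), sum(h for _, h in cl) / len(cl))
            if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return sorted(centers)

# Three clearly separated size groups: small, medium, large frames
boxes = [(10, 12), (11, 13), (40, 42), (41, 40), (90, 95), (92, 93)]
centers = kmeans_box_sizes(boxes, k=3)
print(centers)  # [(10.5, 12.5), (40.5, 41.0), (91.0, 94.0)]
```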
Further, S100 specifically further includes:
S131, collecting the image data returned by the camera;
S132, parsing the image data received in S131 to obtain a BGR three-channel PNG image;
S133, obtaining the lidar-camera calibration parameters;
S134, transforming the binarized rasterized map including road boundary point information from the lidar coordinate system into the common region of the image coordinate system according to the lidar-camera calibration parameters obtained in S133, and generating a region of interest;
S135, performing parameter learning on the YOLOv3 model according to the image target frame ground-truth database DB2, and generating the YOLOv3_Camera target detection model for multi-target detection on images;
S136, performing multi-target detection in the image plane within the region of interest generated in S134 using the YOLOv3_Camera target detection model obtained in S135, and outputting image data, where the information of each obstacle target in the image data is denoted {x, y, w, h, c, o}: (x, y) is the coordinate of the upper-left corner of the image target frame in the image coordinate system, w is the width of the image target frame, h is the height of the image target frame, c is the major and minor category of the obstacle target, and o is the orientation information of the obstacle target.
Further, "performing single-frame target fusion on all the detected obstacle information around the vehicle" in S200 includes:
obtaining the camera-vehicle calibration parameters, and converting the target frames in the image coordinate system into target frames in the vehicle coordinate system;
according to the millimeter-wave radar-camera calibration parameters and the lidar-camera calibration parameters, spatially synchronizing the obstacle target information around the vehicle detected by the onboard sensors in the single-frame images under the same timestamp, then converting it successively into the image coordinate system and the vehicle coordinate system; and
taking the camera detection result as the reference and matching the corresponding millimeter-wave radar and lidar information based on a global nearest-neighbor method, to obtain unified obstacle target information including the position, distance, category and speed of each obstacle target.
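The global nearest-neighbor matching step can be sketched as below, using only center distance in the vehicle frame as the matching cost; the 2 m gate and the greedy closest-pair-first order are assumptions for illustration, not the patent's parameters.

```python
import math

def global_nearest_neighbor(cam_targets, radar_targets, gate=2.0):
    """Greedy global nearest-neighbor association: repeatedly match the
    globally closest unmatched camera/radar pair whose center distance
    (vehicle frame) is within the gate. A simplified stand-in for the
    global nearest-neighbor matching named above."""
    pairs = sorted(
        (math.hypot(c["x"] - r["x"], c["y"] - r["y"]), i, j)
        for i, c in enumerate(cam_targets)
        for j, r in enumerate(radar_targets)
    )
    matches, used_c, used_r = {}, set(), set()
    for dist, i, j in pairs:
        if dist > gate:
            break                     # remaining pairs are even farther
        if i not in used_c and j not in used_r:
            matches[i] = j            # camera index -> radar index
            used_c.add(i)
            used_r.add(j)
    return matches

cams = [{"x": 0.0, "y": 10.0}, {"x": 3.0, "y": 20.0}]
radars = [{"x": 3.2, "y": 20.5}, {"x": 0.1, "y": 9.8}, {"x": -8.0, "y": 5.0}]
print(global_nearest_neighbor(cams, radars))  # {0: 1, 1: 0}
```

The matched radar entry then contributes distance and speed to the fused obstacle target record.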
Further, "carrying out multi-target tracking across consecutive frames using motion prediction and multi-frame target association" in S200 includes:
S221, for the Car, Pedestrian and Rider obstacle targets, separately designing three individual long short-term memory (LSTM) networks for motion prediction, involving the position information (x, y) and scale information (w, h) of each target;
S222, according to the category o ∈ {Car, Pedestrian, Rider}, training the long short-term memory networks designed in S221, with the preceding N frames as input data and frame N+1 as the prediction/output data, forming the LSTM motion prediction models;
among the three determined classes of obstacle targets, matching the data (x, y, w, h)_{i-N+1 ~ i+1} of the same obstacle target over N+1 consecutive frames in the image target frame ground-truth database DB2 according to the different tracking IDs, where (x, y) is the position information of the predicted target frame and (w, h) is the scale information of the predicted target frame;
using the trained LSTM models to test the motion data (x, y, w, h)_{i-N+1 ~ i} of the same obstacle target over N consecutive frames, and predicting the motion information (x, y, w, h)_{i+1} of the obstacle target in the next frame;
using the position and scale information of the obstacle targets together with attributes such as the speed, category, distance and orientation of the fused obstacle targets as association attributes, performing multi-target association matching across consecutive frames with the Hungarian algorithm, assigning the same tracking ID number to the same obstacle target, and outputting the associated dynamic obstacle target library {x, y, w, h, c, id, v, o}; where N is the number of frames input to the LSTM motion prediction model and i is the frame index.
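The inter-frame association step can be sketched as a minimum-cost one-to-one assignment. For the tiny example below an exhaustive search returns the same matching the Hungarian algorithm would; only center distance is used as the cost here, rather than the full attribute set (speed, category, distance, orientation) named above.

```python
from itertools import permutations
import math

def associate(prev_boxes, curr_boxes):
    """Minimum-cost one-to-one association of predicted target frames
    (frame i) with detections (frame i+1) on a center-distance cost.
    For small sizes an exhaustive search matches the Hungarian
    algorithm's result; a real tracker would also fold speed, category,
    distance and orientation into the cost."""
    n = len(curr_boxes)
    best, best_cost = None, math.inf
    for perm in permutations(range(n), len(prev_boxes)):
        cost = sum(
            math.hypot(p[0] - curr_boxes[j][0], p[1] - curr_boxes[j][1])
            for p, j in zip(prev_boxes, perm)
        )
        if cost < best_cost:
            best, best_cost = perm, cost
    return [(i, j) for i, j in enumerate(best)]

prev_b = [(0.0, 0.0, 2.0, 1.5), (10.0, 5.0, 2.0, 1.5)]   # (x, y, w, h)
curr_b = [(10.5, 5.2, 2.0, 1.5), (0.3, -0.1, 2.0, 1.5)]
print(associate(prev_b, curr_b))  # [(0, 1), (1, 0)]
```

Matched pairs inherit the existing tracking ID; unmatched detections would start new tracks.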
Further, S300 specifically includes:
S310, receiving the updated binarized rasterized map output by the lidar detection unit 21 in the multi-source multi-target detection module 2 and the dynamic target library formed by the multi-frame target association unit 33;
S320, updating the dynamic obstacle target library using the information of the updated binarized rasterized map;
S330, updating the real-time obstacle target position and motion information according to the dynamic obstacle target library updated in S320, and outputting the traversable area of the vehicle.
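A sketch of deriving the traversable area from the two inputs of S310: dynamic obstacle target frames are overlaid onto the binarized rasterized map and the remaining free cells form the traversable area. The 0.5 m cell size and the grid origin at (0, 0) are assumptions for the sketch.

```python
def traversable_area(grid, dynamic_targets, cell=0.5):
    """Overlay dynamic obstacle target frames onto a binarized rasterized
    map and return the traversable cells (value 0). grid[r][c] == 1 marks
    static obstacles / road boundary."""
    rows, cols = len(grid), len(grid[0])
    occ = [row[:] for row in grid]               # copy the static map
    for t in dynamic_targets:                    # t = {x, y, w, h, ...}
        c0 = max(0, int((t["x"] - t["w"] / 2) / cell))
        c1 = min(cols - 1, int((t["x"] + t["w"] / 2) / cell))
        r0 = max(0, int((t["y"] - t["h"] / 2) / cell))
        r1 = min(rows - 1, int((t["y"] + t["h"] / 2) / cell))
        for r in range(r0, r1 + 1):
            for c in range(c0, c1 + 1):
                occ[r][c] = 1                    # mark dynamic obstacle
    return [(r, c) for r in range(rows) for c in range(cols)
            if occ[r][c] == 0]

static = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]       # one static obstacle cell
free = traversable_area(static, [{"x": 0.25, "y": 0.25, "w": 0.4, "h": 0.4}])
print(len(free))  # 7 of 9 cells remain traversable
```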
During vehicle travel, the present invention can accurately obtain the position, scale, category and motion information of obstacles around the vehicle together with a binarized rasterized map, track the motion trajectories of multiple targets, and form a traversable area for the intelligent vehicle, updated in real time, that includes the binarized rasterized map and dynamic obstacle information.
Brief description of the drawings
Fig. 1 is a functional block diagram of the traversable area detection method for intelligent vehicles based on multi-source information fusion provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of the target frame categories in the offline database unit shown in Fig. 1;
Fig. 3 is a functional block diagram of the multi-target tracking module shown in Fig. 1.
Specific embodiment
In the drawings, the same or similar reference numerals are used to indicate the same or similar elements or elements having the same or similar functions. Embodiments of the present invention are described in detail below with reference to the drawings.
Herein, "front" is understood as the direction pointing toward the vehicle head, and "rear" is opposite to "front"; "right" is understood as the rightward direction when the driver faces forward, and "left" is opposite to "right"; "up" is understood as the direction pointing toward the vehicle roof, and "down" is opposite to "up".
The traversable area detection method for intelligent vehicles based on multi-source information fusion provided by this embodiment is applicable to sensor combinations of different configurations; for example, the onboard sensors involved in this embodiment are a camera, a lidar and a millimeter-wave radar. The lidar may be a Velodyne VLP-16 16-line lidar; the obstacle target information it detects lies in the lidar coordinate system and specifically includes the target frame and its coordinates, the major category of the obstacle target (the three major target categories Car (vehicle), Pedestrian (person) and Rider (cyclist) mentioned below), and the relative distance between the obstacle target and the ego vehicle. The millimeter-wave radar may be a Delphi ESR millimeter-wave radar (number of returned radar targets M = 64); the obstacle target information it detects lies in the millimeter-wave radar coordinate system and specifically includes the target frame and its coordinates, and the speed relative to the ego vehicle. The camera is an IDS UI-5250CP-C-HQ monocular camera; the obstacle target information it detects lies in the image coordinate system and specifically includes the target frame and its coordinates, the major and minor category of the obstacle target, and the orientation of the obstacle target. Preferably, the entire traversable area detection method shown in Fig. 1 can be implemented on the Robot Operating System (ROS) development platform, with different modules composed of different packages and the multiple sub-functions within a module composed of corresponding nodes.
As shown in Fig. 1, the device corresponding to the traversable area detection method for intelligent vehicles based on multi-source information fusion provided by this embodiment includes a basic function module 1, a multi-source multi-target detection module 2, a multi-target tracking module 3 and a traversable area generation module 4.
The basic function module 1 is used for spatial synchronization between the mutually corresponding coordinate systems of the multiple onboard sensors (such as the camera, lidar and millimeter-wave radar provided in the above embodiment) and the vehicle, for time synchronization of the obstacle target information, and for generating the offline image databases. "Mutually corresponding" is understood as between the onboard sensors themselves and between the onboard sensors and the vehicle.
The multi-source multi-target detection module 2 is used to collect the obstacle target information around the vehicle detected by the onboard sensors and to output the static obstacle target library and the obstacle detection information of the three kinds of onboard sensors (the input boxes shown in Fig. 3). The static obstacle target library is the binarized rasterized map, including road boundary information, detected by the lidar.
The multi-target tracking module 3 is used to receive the obstacle target information collected by the multi-source multi-target detection module 2, perform single-frame target fusion of the obstacle target information detected by the different onboard sensors in combination with the space-time synchronization function of the basic function module 1, then carry out multi-target tracking across consecutive frames using motion prediction and multi-frame target association, and output the dynamic obstacle target library. The dynamic obstacle target library includes the position, size, category, tracking ID, motion speed and orientation of each obstacle target.
The traversable area generation module 4 is used to receive the static obstacle target library output by the multi-source multi-target detection module 2 and the dynamic obstacle target library output by the multi-target tracking module 3, update the dynamic obstacle target library according to the information of the static obstacle target library, form real-time obstacle target information, and generate the traversable area.
The traversable area detection method for intelligent vehicles based on multi-source information fusion provided by this embodiment can provide the intelligent vehicle with traversable area information updated in real time; since it can also output the motion trajectories of multiple targets around the vehicle, it can further be used for collision warning or active collision avoidance, providing a basis for the decision-making of the intelligent vehicle.
The modules of the above embodiment are explained in detail below.
The basic function module 1 includes a space-time synchronization unit 11, a sensor driving unit 12 and an offline database unit 13.
The space-time synchronization unit 11 is used for spatial calibration between the multiple onboard sensors and the vehicle and for space-time synchronization of the data. That is, the space-time synchronization unit 11 has a lidar-camera calibration function, a millimeter-wave radar-camera calibration function, a camera-vehicle calibration function and a data space-time synchronization function. Here, "spatial calibration between the multiple onboard sensors and the vehicle" means calibrating the different coordinate systems against each other through the rotation and translation mapping matrix relationships between corresponding points in the different coordinate systems. The "different coordinate systems" include the lidar coordinate system, the millimeter-wave radar coordinate system, the image coordinate system and the vehicle coordinate system. Time synchronization between the data collected by the onboard sensors is realized using timestamps and frame rates.
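The rotation-and-translation mapping between coordinate systems can be sketched with a homogeneous transform; the sample rotation and translation below are illustrative values, not real calibration parameters.

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation R and a
    3-vector translation t, the form used for spatial calibration
    between sensor coordinate systems."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Illustrative lidar -> vehicle transform: 90 deg about Z, 1.2 m offset
c, s = 0.0, 1.0  # cos(90 deg), sin(90 deg)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
T = make_transform(R, np.array([1.2, 0.0, 0.0]))

p_lidar = np.array([2.0, 0.0, 0.5, 1.0])  # homogeneous point, lidar frame
p_vehicle = T @ p_lidar
print(p_vehicle[:3])  # [1.2 2.  0.5]
```

Chaining such transforms (lidar to camera to vehicle) gives the successive coordinate conversions used in the fusion steps.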
The sensor driving unit 12 is used for the driver parsing and data publication of the onboard sensors. In this embodiment, the sensor driving unit 12 is based on the ROS robot development platform and establishes the driver parsing and data topic publisher nodes for the lidar, millimeter-wave radar and camera, having a lidar driving function, a millimeter-wave radar driving function and a camera driving function.
The offline database unit 13 is used to generate the offline databases, which include the point cloud target frame ground-truth database DB1 and the image target frame ground-truth database DB2. Wherein:
The point cloud target frame ground-truth database DB1 is used to mark two-dimensional target frames on the two-dimensional top-view point cloud maps generated from the lidar data. DB1 is obtained as follows: using existing marking software, the three classes of point cloud target frames Car (vehicle), Pedestrian (person) and Rider (cyclist) are marked on the two-dimensional top-view point cloud maps obtained in step S113 below, forming the point cloud target frame ground-truth database DB1.
The image target frame ground-truth database DB2 marks two-dimensional target frames on the image plane of the image data. DB2 is obtained as follows: the three classes of two-dimensional target frames Car (vehicle), Pedestrian (person) and Rider (cyclist) are marked on the image plane of the image data, and each two-dimensional target frame is also marked with the movement orientation and tracking ID of the corresponding obstacle target, forming the image target frame ground-truth database DB2. As shown in Fig. 2, Car = {car, bus, van, truck, otherCar}, where car refers to an ordinary passenger car, bus refers to buses and coaches, van refers to vans and box trucks, truck refers to trucks, and otherCar refers to other kinds of motor vehicles. Pedestrian = {pedestrian, dummy}, where pedestrian refers to a pedestrian and dummy refers to a dummy.
Rider = {cyclist, moped, scooter, tricycle, motorcycle, otherRider}, where cyclist refers to a bicycle rider, moped refers to an electric vehicle with pedals, scooter refers to an electric vehicle without pedals, tricycle refers to a delivery tricycle, motorcycle refers to a motorcycle, and otherRider refers to other kinds of riding devices.
Since both the two-dimensional top-view point cloud maps generated from the lidar data and the images collected by the camera are two-dimensional, the same set of database marking tools and marking methods can be used. At the same time, since the categories of obstacle targets to be detected in the two-dimensional top-view point cloud maps and in the camera images are consistent, the same deep-learning YOLOv3 target detection framework can be used to pre-train target detection models: for the different databases (DB1 and DB2), different target learning categories are designed (the two-dimensional top-view point cloud maps include only the three major categories, while the monocular images include the three major categories and their 13 sub-groups, as in Fig. 2) and different model parameters are learned, yielding YOLOv3 target detection models for the two-dimensional top-view point cloud maps and for the monocular images, wherein: the YOLOv3 target detection model for the two-dimensional top-view point cloud maps is hereinafter called the YOLOv3_LiDAR target detection model, and the YOLOv3 target detection model for the monocular images is hereinafter called the YOLOv3_Camera target detection model.
The multi-source multi-target detection module 2 comprises a lidar detection unit 21, a millimeter-wave radar detection unit 22 and an image detection unit 23.
The lidar detection unit 21 acquires the three-dimensional point cloud output by the lidar, parses it to generate a two-dimensional top-view point cloud image, and performs target detection with the pre-trained detection model, producing obstacle target detection boxes carrying position, category and depth information, together with a binarized grid map.
In one embodiment, the specific workflow of the lidar detection unit 21 comprises the following steps S111 to S115:
S111, acquiring the data returned by the lidar: after the lidar is driven by the sensor driving unit 22 in the basic function module 1, the three-dimensional point cloud returned by the lidar is obtained from the Ethernet interface.
S112, parsing the three-dimensional point cloud received in S111 to obtain three-dimensional scan points. Each scan point is expressed as a vector Li = {Xi, Yi, Zi, ri}, where: Xi is the lateral offset of the i-th scan point relative to the origin of the lidar coordinate system, positive to the right; Yi is its longitudinal offset relative to the origin, positive to the front; Zi is its vertical offset relative to the origin, positive upward; and ri is the reflection intensity of the i-th scan point, which to some extent reflects the echo strength of the lidar pulse at that point.
S113, converting the three-dimensional scan points parsed in S112 into a two-dimensional top-view point cloud image: to ensure real-time traversable-area detection, to share the YOLOv3 detection model with the image plane of the camera, and to simplify coordinate conversion between the lidar and the camera, the three-dimensional scan points (in the OXYZ coordinate system) parsed in S112 are projected onto the expandable OXY plane, flattening the three-dimensional point cloud and generating the two-dimensional top-view point cloud image {Xi, Yi}.
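The flattening step above can be sketched in a few lines. The `ScanPoint` type, the function name and the 0.1 m grid cell are illustrative assumptions, not details from the patent:

```python
# Sketch of S113: projecting 3-D lidar scan points Li = {Xi, Yi, Zi, ri} onto
# the OXY plane to form the 2-D top-view point cloud {Xi, Yi}.
from dataclasses import dataclass

@dataclass
class ScanPoint:
    x: float  # Xi: lateral offset, positive to the right (m)
    y: float  # Yi: longitudinal offset, positive to the front (m)
    z: float  # Zi: vertical offset, positive upward (m)
    r: float  # ri: reflection intensity

def to_top_view(points, cell=0.1):
    """Drop Zi, keep {Xi, Yi}, and bucket each point into a grid cell."""
    flat = [(p.x, p.y) for p in points]                       # 2-D top view
    occupied = {(int(p.x // cell), int(p.y // cell)) for p in points}
    return flat, occupied

pts = [ScanPoint(1.23, 4.56, -0.2, 35.0), ScanPoint(1.27, 4.58, 0.9, 12.0)]
flat, occupied = to_top_view(pts)   # two points falling in one shared cell
```

The grid-cell set is what a later rasterization step would consume; the reflection intensity is carried along for the road-boundary search of S1142b.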
S114, obtaining, from the two-dimensional top-view point cloud image produced in S113, the obstacle target detection boxes and a binarized grid map that includes road boundary point information.
S115, updating the binarized grid map of S114 with the obstacle target information generated by the YOLOv3_LiDAR target detection model.
In one embodiment, the method for obtaining the obstacle target detection boxes in S114 specifically comprises S1141a and S1141b:
S1141a, training and generating the YOLOv3_LiDAR target detection model: parameter learning is performed on the YOLOv3 model according to the point cloud target box ground-truth database DB1, generating the YOLOv3_LiDAR target detection model.
S1141b, detecting obstacle targets: obstacle target detection is performed on the two-dimensional top-view point cloud image with the YOLOv3_LiDAR target detection model obtained in S1141a, and the obstacle target information is output, comprising the position and major category of each obstacle target.
In one embodiment, the method for obtaining the binarized grid map in S114 specifically comprises S1142a and S1142b:
S1142a, detecting obstacle targets: 0/1 binarized obstacle target detection is performed on the two-dimensional top-view point cloud image obtained in S113 using the Euclidean clustering method, and an initial binarized grid map composed of the obstacle target regions is output.
S1142b, generating the binarized grid map including road boundary point information: candidate road boundary points are found from the height information Zi and reflection intensity ri of the three-dimensional scan points parsed in S112, the local road boundary is fitted with a quadratic curve, and the binarized grid map including road boundary point information is generated.
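The quadratic boundary fit of S1142b amounts to an ordinary least-squares fit of x = a·y² + b·y + c through the candidate boundary points. The sketch below solves the 3×3 normal equations directly; the sample points are made-up assumptions, chosen to lie exactly on a known quadratic so the fit is recoverable:

```python
# Sketch of S1142b: least-squares quadratic fit of a local road boundary.
def fit_quadratic(points):
    """points: list of (y, x); least-squares fit of x = a*y**2 + b*y + c."""
    S = [sum(y ** k for y, _ in points) for k in range(5)]      # sums y^0..y^4
    T = [sum(x * y ** k for y, x in points) for k in range(3)]  # sums x*y^0..y^2
    G = [[S[4], S[3], S[2]],
         [S[3], S[2], S[1]],
         [S[2], S[1], S[0]]]
    d = [T[2], T[1], T[0]]
    for i in range(3):                     # Gauss-Jordan elimination, 3x3
        piv = G[i][i]
        G[i] = [v / piv for v in G[i]]
        d[i] /= piv
        for j in range(3):
            if j != i:
                f = G[j][i]
                G[j] = [gj - f * gi for gj, gi in zip(G[j], G[i])]
                d[j] -= f * d[i]
    return tuple(d)                        # (a, b, c)

# Boundary points sampled from x = 0.02*y**2 + 0.1*y + 3 (exact recovery)
pts = [(y, 0.02 * y ** 2 + 0.1 * y + 3.0) for y in range(0, 30, 3)]
a, b, c = fit_quadratic(pts)
```

In the unit itself the input points would be those scan points whose Zi and ri profiles mark them as likely curb or boundary returns.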
The millimeter-wave radar detection unit 22 acquires the CAN-format target information output by the millimeter-wave radar and performs, on that information, target point parsing, target box initialization, millimeter-wave radar-camera calibration and mapping-parameter self-learning (using DB2), so as to obtain the expansion of the target boxes detected by the millimeter-wave radar.
In one embodiment, the specific workflow of the millimeter-wave radar detection unit 22 comprises the following steps S121 to S126:
S121, acquiring the data returned by the millimeter-wave radar: after the millimeter-wave radar is driven by the sensor driving unit 22 in the basic function module 1, the CAN-format obstacle target information returned by the radar is obtained from the CAN-Ethernet interface. The obstacle target information is presented as millimeter-wave radar target boxes, each comprising the position and velocity of the target box.
S122, parsing the radar targets: the CAN-format obstacle target information received in S121 is parsed with a dedicated DBC file, yielding M millimeter-wave radar target data items (M = 64), where each item is expressed as a vector Rj = {range_j, angle_rad_j, range_rate_j, lat_rate_j, id_j, width_j}, in which: range_j is the relative distance between the center of the j-th radar target box and the origin of the radar coordinate system; angle_rad_j is the relative angle between the line connecting that center to the origin and the longitudinal axis (the radar boresight); range_rate_j is the relative longitudinal velocity of the j-th target box with respect to the origin; lat_rate_j is its lateral velocity; id_j is its ID number; and width_j is its width.
S123, initializing the millimeter-wave radar target boxes: the initialized target boxes are obtained from the M radar target data items output by S122. Taking the j-th target box (xj, yj, vj) as an example, the initialization proceeds as follows:
According to formulas (1) to (3) below, the position (xj, yj) and velocity vj of the radar target relative to the origin of the radar coordinate system are obtained, where (xj, yj) is the center position of the target box and pi is the circle constant, e.g. 3.1415926:
xj = range_j * sin(angle_rad_j * pi/180.0) (1)
yj = range_j * cos(angle_rad_j * pi/180.0) (2)
vj = range_rate_j (3)
If the millimeter-wave radar does not return the width information width_j, the width width_j is assumed to be 1 meter, and the length of the radar target is set to length_j = width_j (denoted lj = wj), completing the initialization of the millimeter-wave radar target box.
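The initialization of formulas (1)–(3), plus the 1 m width default, can be sketched as follows. The function name and sample values are illustrative assumptions:

```python
# Sketch of S123: initializing a millimeter-wave radar target box from a parsed
# CAN target (formulas (1)-(3)); angle in degrees from the radar boresight.
import math

def init_target_box(range_m, angle_deg, range_rate, width=None):
    """Polar radar measurement -> (x, y, v, w, l) in the radar frame."""
    x = range_m * math.sin(angle_deg * math.pi / 180.0)   # formula (1)
    y = range_m * math.cos(angle_deg * math.pi / 180.0)   # formula (2)
    v = range_rate                                        # formula (3)
    w = 1.0 if width is None else width                   # default width: 1 m
    return x, y, v, w, w                                  # length_j = width_j

x, y, v, w, l = init_target_box(range_m=10.0, angle_deg=30.0, range_rate=-2.5)
```

A target 10 m away at 30° off boresight lands at x = 5 m lateral, y ≈ 8.66 m longitudinal, consistent with the sine/cosine split of formulas (1) and (2).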
S124, calibrating the millimeter-wave radar and the camera: K points in the region shared by the millimeter-wave radar coordinate system and the image coordinate system are collected. For any such point, the coordinates returned by the radar are (x_rad, y_rad) and the coordinates returned by the camera are (x_cam, y_cam); from these, the millimeter-wave radar-camera calibration parameters (A_rad2cam, L_rad2cam) are obtained, where A_rad2cam is a 2×3 transformation (rotation) matrix and L_rad2cam is a 2×1 translation matrix.
For example, using the perspective-transform relationship, the equation mapping a point in the radar coordinate system into the image coordinate system is established, as in formula (4), and the optimal parameters are solved by least squares, yielding the calibration parameters (A_rad2cam, L_rad2cam):
(x_cam, y_cam)^T = A_rad2cam · (x_rad, y_rad, 1)^T + L_rad2cam (4)
Written component-wise with A_rad2cam = [a11 a12 a13; a21 a22 a23] and L_rad2cam = (l1, l2)^T:
x_cam = a11·x_rad + a12·y_rad + a13 + l1 (5)
y_cam = a21·x_rad + a22·y_rad + a23 + l2 (6)
Since formulas (5) and (6) share 8 parameters in total, K ≥ 8 is required; K = 64 is taken in the implementation, and A_rad2cam and L_rad2cam are computed from formulas (4) to (6).
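A minimal least-squares solver for this calibration is sketched below. Since the constant column of A_rad2cam and the translation L_rad2cam add up to one offset per image axis, the sketch folds them into a single intercept (3 unknowns per axis); the synthetic point pairs and ground-truth mapping are assumptions for illustration:

```python
# Sketch of S124: solving the radar->camera mapping of formulas (4)-(6) by
# least squares (normal equations) from K point correspondences.
def solve_row(pairs, axis):
    """Fit cam = p0*x_rad + p1*y_rad + p2 over pairs ((x_rad, y_rad), (xc, yc))."""
    sxx = sxy = sx = syy = sy = s1 = bx = by = b1 = 0.0
    for (x, y), cam in pairs:                 # accumulate M^T M and M^T b
        c = cam[axis]
        sxx += x * x; sxy += x * y; sx += x
        syy += y * y; sy += y; s1 += 1.0
        bx += x * c; by += y * c; b1 += c
    G = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, s1]]
    d = [bx, by, b1]
    for i in range(3):                        # Gauss-Jordan elimination
        piv = G[i][i]
        G[i] = [v / piv for v in G[i]]
        d[i] /= piv
        for j in range(3):
            if j != i:
                f = G[j][i]
                G[j] = [a - f * b for a, b in zip(G[j], G[i])]
                d[j] -= f * d[i]
    return d

# Synthetic truth: x_cam = 2*x + 0.5*y + 10, y_cam = -0.5*x + 2*y + 20; K = 64
pairs = [((x, y), (2 * x + 0.5 * y + 10, -0.5 * x + 2 * y + 20))
         for x in range(8) for y in range(8)]
row_x = solve_row(pairs, 0)
row_y = solve_row(pairs, 1)
```

With K = 64 noise-free correspondences the 8 effective parameters are over-determined and recovered exactly, matching the K ≥ 8 requirement in the text.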
S125, according to the millimeter-wave radar-camera calibration parameters obtained in S124, transforming the M millimeter-wave radar target data items output by S122 from the radar coordinate system into the image coordinate system, forming M image target boxes. This specifically comprises the following S125a and S125b:
S125a, mapping-parameter self-learning: the image target boxes annotated in the image target box ground-truth database DB2 are used to learn the position mapping relationship {λx, λy, λw, λh, bx, by} between the millimeter-wave radar target output boxes transformed from the radar coordinate system into the image coordinate system and the annotated image target boxes, as in formula (7); the radar target output boxes are then updated accordingly, correcting the conversion deviation between the radar and image coordinate systems and the position and width detection errors of the radar itself, and estimating the lengths of multiple targets.
In formula (7), {λx, λy, λw, λh, bx, by} are the learned parameters. The coordinates of the real obstacle target in the image corresponding to an obstacle detected by the radar are expressed as (x_gt, y_gt, w_gt, h_gt), where x_gt and y_gt are the horizontal and vertical coordinates of the box center and w_gt and h_gt its width and height. The coordinates of that obstacle transformed from the radar coordinate system into the image coordinate system are expressed as (x_cam, y_cam, w_cam, h_cam), where x_cam and y_cam are the horizontal and vertical coordinates of the center of the image target box in the image coordinate system, and w_cam and h_cam its width and height. Formula (7) then reads:
x_gt = λx·x_cam + bx, y_gt = λy·y_cam + by, w_gt = λw·w_cam, h_gt = λh·h_cam (7)
S125b, expanding the target boxes: borrowing the RPN network from the Faster R-CNN target detection model, the length-width distribution of the image target boxes annotated in the image target box ground-truth database DB2 is used with the k-means clustering algorithm to design target candidate box sizes adapted to DB2 (with reference to the three scales and three aspect ratios of the RPN network in Faster R-CNN, k is set to 9); expansion learning of the millimeter-wave radar target output boxes is then performed, outputting as many accurate millimeter-wave radar target expansion boxes containing the real obstacle targets as possible.
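The anchor-design step above is a plain k-means over annotated (width, height) pairs. A minimal sketch follows; the patent uses k = 9 (after Faster R-CNN's 3 scales × 3 aspect ratios), while k = 2, the naive initialization and the box list here are illustrative assumptions:

```python
# Sketch of S125b's anchor design: k-means on the (w, h) pairs of annotated
# boxes to pick k candidate box sizes for expanding radar output boxes.
def kmeans_wh(boxes, k, iters=20):
    centers = boxes[:k]                       # naive init: first k boxes
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for w, h in boxes:                    # assign to nearest center
            i = min(range(k), key=lambda c: (w - centers[c][0]) ** 2
                                            + (h - centers[c][1]) ** 2)
            groups[i].append((w, h))
        centers = [(sum(w for w, _ in g) / len(g), sum(h for _, h in g) / len(g))
                   if g else centers[i] for i, g in enumerate(groups)]
    return centers

# Two obvious size clusters: small ~(10, 20) and large ~(100, 50) pixels
boxes = [(10, 20), (12, 22), (9, 18), (100, 50), (98, 52), (103, 49)]
anchors = kmeans_wh(boxes, k=2)
```

Each resulting center is one candidate box size; a radar output box would be expanded toward the nearest of these learned sizes.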
The image detection unit 23 acquires the image data captured by the camera, performs lidar-camera calibration against the binarized grid map output by the lidar detection unit 21 to generate a region of interest, and performs target detection with the YOLOv3 model trained on the image target box ground-truth database DB2 in the basic function module 1, outputting the information of the obstacle targets in the image data, which comprises the position, type and orientation of each target.
In one embodiment, the specific workflow of the image detection unit 23 comprises the following steps S131 to S136:
S131, acquiring the data returned by the camera: after the camera is driven by the sensor driving unit 22 in the basic function module 1, the image data returned by the camera is obtained from the Ethernet interface.
S132, parsing the image data received in S131 to obtain a three-channel BGR PNG image.
S133, calibrating the lidar and the camera: using a method similar to step S124 above, the lidar-camera calibration parameters (A_lid2cam, L_lid2cam) are obtained.
S134, generating the region of interest: according to the lidar-camera calibration parameters obtained in S133, the binarized grid map output by the lidar detection unit 21 in S114 is transformed from the lidar coordinate system into the shared region of the image coordinate system, generating the region of interest.
S135, training the YOLOv3 target detection model: parameter learning is performed on the YOLOv3 model according to the image target box ground-truth database DB2 of the offline database unit 23 in the basic function module 1, generating the YOLOv3_Camera target detection model for multi-target detection on images.
S136, detecting obstacle targets: multi-target detection is performed with the YOLOv3_Camera target detection model obtained in S135 within the region of interest of the image plane generated in S134, and the image data is output. Each obstacle target in the image data is presented as an image target box (a rectangular position box), and the information of each obstacle target is denoted {x, y, w, h, c, o}, where (x, y) is the coordinate of the top-left corner of the image target box in the image coordinate system, w is the width of the box, h is its height, c (category) is the major and minor category of the obstacle target, and o (orientation) is its orientation information.
The multiple target tracking module 3 comprises a single-frame target fusion unit 31, a target motion prediction unit 32 and a multi-frame target association unit 33.
The single-frame target fusion unit 31 synchronizes the different onboard sensors in time and space and fuses the obstacle target information in the current frame (with inputs as shown in Fig. 3).
In one embodiment, the specific workflow of the single-frame target fusion unit 31 comprises the following steps S211 to S213:
S211, receiving the multi-source information output by the multi-source multi-target detection module 2.
S212, calibrating the camera and the vehicle: using the same method as step S124 above, the camera-vehicle calibration parameters (A_cam2veh, L_cam2veh) are obtained, and the target boxes in the image coordinate system are converted into target boxes in the vehicle coordinate system.
S213, coordinate system conversion: according to the millimeter-wave radar-camera calibration parameters (A_rad2cam, L_rad2cam) obtained in S124 and the lidar-camera calibration parameters (A_lid2cam, L_lid2cam) obtained in S133, the obstacle target information of the vehicle surroundings detected by the onboard sensors in the single frame under the same timestamp is spatially synchronized and transformed successively into the image coordinate system and then the vehicle coordinate system (ISO standard: longitudinal is x, lateral is y, vertical is z, following the right-hand rule). During the conversion, taking the camera detection results as the reference, the corresponding millimeter-wave radar and lidar information is matched based on the global nearest neighbor (GNN) algorithm, yielding unified obstacle target information comprising the position, distance, category and velocity of each obstacle target.
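The GNN step can be sketched as follows: with the camera detections as reference, each radar (or lidar) detection is assigned to a camera box so that the total center distance over all one-to-one assignments is minimal. Brute force over permutations is used here for clarity, which is acceptable for the small per-frame target counts assumed; the sample centers are made up:

```python
# Sketch of S213's global nearest neighbor (GNN) matching between camera
# reference boxes and detections from another sensor.
from itertools import permutations
import math

def gnn_match(camera_boxes, sensor_boxes):
    """Return tuple m where sensor_boxes[j] pairs with camera_boxes[m[j]]."""
    def cost(assign):
        return sum(math.dist(sensor_boxes[j], camera_boxes[i])
                   for j, i in enumerate(assign))
    n = len(camera_boxes)
    return min(permutations(range(n), len(sensor_boxes)), key=cost)

cam = [(5.0, 20.0), (-2.0, 12.0), (0.5, 40.0)]   # (x, y) centers, camera
rad = [(-1.8, 12.4), (5.2, 19.5)]                # radar detections
match = gnn_match(cam, rad)   # radar 0 -> camera 1, radar 1 -> camera 0
```

Unlike greedy per-target nearest neighbor, the global formulation cannot assign two sensor detections to the same camera box.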
The target motion prediction unit 32 performs motion prediction on the obstacle targets based on the last N frames of image data of the obstacle targets fused by the single-frame target fusion unit 31.
In one embodiment, the specific workflow of the target motion prediction unit 32 comprises the following steps S221 to S225:
S221, receiving the obstacle target information in the vehicle coordinate system output by the single-frame target fusion unit 31.
S222, for the three major categories in the obstacle target information of S221, i.e. Car (vehicle), Pedestrian (person) and Rider (cyclist), separately designing three individual long short-term memory (LSTM) networks for motion prediction, covering the position (x, y) and size (w, h) of each obstacle target.
S223, dividing the data samples in the image target box ground-truth database DB2 into the three categories Car (vehicle), Pedestrian (person) and Rider (cyclist) according to the category o ∈ {Car, Pedestrian, Rider}, and training the LSTM networks designed in S222, with the first N frames as input data and the (N+1)-th frame as the predicted output, forming the LSTM motion prediction models.
S224, for the three categories of obstacle targets determined in S223, matching by tracking ID the data (x, y, w, h)_{i-N+1~i+1} of the same obstacle target over N+1 consecutive frames of the image target boxes in the ground-truth database DB2, where: N is the number of input frames of the LSTM motion prediction model (the next frame, i.e. frame i+1, is predicted from the N historical frames up to and including frame i); i is the frame index, an integer no less than N (for i < N there are fewer than N historical frames). The N historical frames are numbered i-N+1, i-N+2, …, i-1, i. For example, with i = 12 and N = 10, the ten consecutive frames 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 are used to predict the next frame, i.e. frame i+1 = 13. Here (x, y) is the position of the predicted target box and (w, h) its size. With the current frame being frame i, the N frames up to and including the current frame serve as input and frame i+1 as the predicted output, training a single-step LSTM motion prediction model (one frame ahead, because the current frame i must be associated with the next frame i+1). Since the minimum frame rate of the three onboard sensors (lidar, millimeter-wave radar and camera) is 10 Hz, the LSTM learns 1 s of historical data, i.e. 10 frames in total, so N = 10.
S225, using the trained LSTM model of S224 to test the motion data (x, y, w, h)_{i-N+1~i} of the same obstacle target over N consecutive frames and predict the motion information (x, y, w, h)_{i+1} of the obstacle target in the next frame.
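The data preparation behind S223/S224 is a sliding window over each per-target track: N consecutive (x, y, w, h) states form the input and the (N+1)-th state the label. A sketch follows, with an illustrative toy track (the patent's N = 10, i.e. 1 s at 10 Hz, is kept):

```python
# Sketch of the S223/S224 sample construction for the single-step LSTM
# motion prediction models.
def make_lstm_samples(track, n):
    """track: list of (x, y, w, h) per frame; returns [(history, label), ...]."""
    samples = []
    for i in range(n, len(track)):          # frame i is predicted from i-n..i-1
        history = track[i - n:i]            # N historical frames (input)
        samples.append((history, track[i])) # label: the next frame's state
    return samples

# A toy Car track of 13 frames moving 1 m per frame longitudinally
track = [(0.0, float(t), 1.8, 4.5) for t in range(13)]
samples = make_lstm_samples(track, n=10)    # 3 samples: predict frames 10..12
```

Each (history, label) pair is one training example for the category-specific LSTM; at test time only the history half is fed in, and the network's output plays the role of (x, y, w, h)_{i+1}.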
The multi-frame target association unit 33 associates the obstacle target detection information of the current frame determined by the target motion prediction unit 32, provides the associated multi-obstacle target information {x, y, w, h, c, id, v, o}, and outputs a dynamic target library carrying the motion information of the obstacle targets over successive frames.
In one embodiment, the specific workflow of the multi-frame target association unit 33 is as follows:
The motion information (x, y, w, h) of the obstacle targets of the current frame output by the target motion prediction unit 32 is received; the velocity, category, distance, orientation and other attributes of the fused obstacle targets output by the single-frame target fusion unit 31 are used as association attributes; the multiple targets across consecutive frames are matched by the Hungarian algorithm; the same tracking ID number is assigned to the same obstacle target; and the associated multi-target information, i.e. the dynamic target library {x, y, w, h, c, id, v, o}, is output.
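The output of this association can be sketched as follows: after matching (the Hungarian algorithm in the text; a greedy nearest-center match stands in here for brevity), each current-frame detection inherits the tracking ID of its matched predicted track and is emitted as a {x, y, w, h, c, id, v, o} record. All sample values are illustrative assumptions:

```python
# Sketch of the dynamic target library update in unit 33.
def build_dynamic_library(tracks, detections):
    """tracks: {id: (x, y)} predicted centers; detections: list of dicts."""
    records, free = [], dict(tracks)
    for d in detections:
        tid = min(free, key=lambda t: (free[t][0] - d["x"]) ** 2
                                      + (free[t][1] - d["y"]) ** 2)
        free.pop(tid)                          # enforce one-to-one assignment
        records.append({"x": d["x"], "y": d["y"], "w": d["w"], "h": d["h"],
                        "c": d["c"], "id": tid, "v": d["v"], "o": d["o"]})
    return records

tracks = {7: (10.0, 30.0), 8: (-3.0, 15.0)}    # predicted centers per track ID
dets = [{"x": -2.7, "y": 15.2, "w": 0.6, "h": 1.7, "c": "Pedestrian",
         "v": 1.2, "o": 90.0},
        {"x": 10.4, "y": 29.8, "w": 1.8, "h": 1.5, "c": "Car",
         "v": -5.0, "o": 0.0}]
lib = build_dynamic_library(tracks, dets)
```

On well-separated targets the greedy match and the Hungarian algorithm agree; the Hungarian algorithm additionally guarantees the globally minimal total cost in crowded scenes.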
The traversable-area generation module 4 receives the static target library (the binarized grid map) output by the multi-source multi-target detection module 2 and the dynamic target library output by the multiple target tracking module, updates the dynamic target library according to the static target library information to form real-time obstacle information, and generates the traversable area of the vehicle.
Specifically, the traversable-area generation module 4 takes the binarized grid map output by the lidar detection unit 21 as the static target library and the targets with real-time motion trajectories output by the multi-frame target association unit 33 as the dynamic target library, updates the dynamic target library according to the static target library information, and generates the real-time traversable area of the vehicle.
In one embodiment, the specific workflow of the traversable-area generation module 4 comprises the following steps S310 to S330:
S310, receiving the updated binarized grid map output by the lidar detection unit 21 in the multi-source multi-target detection module 2 and the dynamic target library formed by the multi-frame target association unit 33;
S320, updating the dynamic obstacle target library with the information of the updated binarized grid map.
S330, updating the real-time positions and motion information of the obstacle targets according to the dynamic obstacle target library updated in S320, and outputting the traversable area of the vehicle. In the traversable area, pixels of image regions containing an obstacle target are marked 1 and pixels of regions free of obstacle targets are marked 0, forming the updated binarized grid map.
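The 1/0 marking rule of S330 can be sketched directly on a small grid. The grid size, resolution and obstacle boxes below are illustrative assumptions:

```python
# Sketch of the S330 output: a binarized grid map where cells covered by an
# obstacle box are marked 1 (not traversable) and free cells 0 (traversable).
def binarize(width, height, boxes):
    """boxes: (x, y, w, h) with (x, y) the top-left cell of the box."""
    grid = [[0] * width for _ in range(height)]
    for x, y, w, h in boxes:
        for r in range(y, min(y + h, height)):
            for c in range(x, min(x + w, width)):
                grid[r][c] = 1                 # occupied by an obstacle target
    return grid

# One static obstacle and one dynamic target projected onto an 8x6 grid
grid = binarize(8, 6, [(1, 1, 2, 2), (5, 3, 2, 1)])
free_cells = sum(row.count(0) for row in grid)  # 48 - 4 - 2 = 42 traversable
```

In the module itself, the boxes would come from both the static library (S1142a/b clusters and boundaries) and the dynamic library of unit 33, re-rasterized every frame.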
Finally, it should be noted that the above embodiments are merely illustrative of the technical solutions of the present invention and do not limit it. Those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may be modified, or some of their technical features may be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A traversable-area detection method for an intelligent vehicle based on multi-source information fusion, characterized by comprising:
S100, acquiring obstacle target information of the vehicle surroundings detected by onboard sensors, and outputting a static obstacle target library;
S200, receiving the obstacle target information of the vehicle surroundings acquired in S100, performing spatio-temporal synchronization on the obstacle target information detected by the onboard sensors, then performing single-frame target fusion on all the detected obstacle information of the vehicle surroundings, and finally performing multiple target tracking across consecutive frames using motion prediction and multi-frame target association, outputting a dynamic obstacle target library; and
S300, receiving the static obstacle target library output by S100 and the dynamic obstacle target library output by S200, updating the dynamic obstacle target library according to the information of the static obstacle target library, forming real-time obstacle target information, and generating a traversable area.
2. The traversable-area detection method for an intelligent vehicle based on multi-source information fusion according to claim 1, characterized in that S100 specifically comprises:
acquiring and parsing the three-dimensional point cloud output by a lidar, and generating a two-dimensional top-view point cloud image;
obtaining, from the two-dimensional top-view point cloud image, obstacle target detection boxes and a binarized grid map including road boundary point information; and
updating the binarized grid map with the obstacle target information generated by a YOLOv3_LiDAR target detection model.
3. The traversable-area detection method for an intelligent vehicle based on multi-source information fusion according to claim 2, characterized in that the method for obtaining the obstacle target detection boxes specifically comprises:
S1141a, performing parameter learning on a YOLOv3 model according to a point cloud target box ground-truth database DB1, and generating the YOLOv3_LiDAR target detection model;
S1141b, performing obstacle target detection on the two-dimensional top-view point cloud image with the YOLOv3_LiDAR target detection model obtained in S1141a, and outputting obstacle target information comprising the position and major category of each obstacle target.
4. The traversable-area detection method for an intelligent vehicle based on multi-source information fusion according to claim 2, characterized in that the method for obtaining the binarized grid map specifically comprises:
S1142a, performing binarized obstacle target detection on the two-dimensional top-view point cloud image using the Euclidean clustering method, and outputting an initial binarized grid map composed of obstacle target regions;
S1142b, finding candidate road boundary points according to the height information and reflection intensity of the parsed three-dimensional scan points, fitting the local road boundary with a quadratic curve, and generating the binarized grid map including road boundary point information.
5. The traversable-area detection method for an intelligent vehicle based on multi-source information fusion according to any one of claims 2 to 4, characterized in that S100 further comprises:
S122, parsing the CAN-format obstacle target information received in S121 with a dedicated DBC file, obtaining M millimeter-wave radar target data items;
S123, obtaining initialized millimeter-wave radar target boxes from the M millimeter-wave radar target data items output by S122 according to the following formulas (1) to (3), where (xj, yj) is the center position of the millimeter-wave radar target box corresponding to any obstacle target, vj is the velocity of that obstacle target, and pi is the circle constant:
xj = range_j * sin(angle_rad_j * pi/180.0) (1)
yj = range_j * cos(angle_rad_j * pi/180.0) (2)
vj = range_rate_j (3)
if the millimeter-wave radar does not return the width information width_j, assuming the width width_j to be 1 meter and setting the target length length_j = width_j (denoted lj = wj), completing the initialization of the millimeter-wave radar target boxes;
S124, acquiring the coordinates of K points in the region shared by the millimeter-wave radar coordinate system and the image coordinate system, and obtaining millimeter-wave radar-camera calibration parameters;
S125, according to the millimeter-wave radar-camera calibration parameters obtained in S124, transforming the M millimeter-wave radar target data items output by S122 from the millimeter-wave radar coordinate system into the image coordinate system, forming M image target boxes.
6. The traversable-area detection method for an intelligent vehicle based on multi-source information fusion according to claim 5, characterized in that S125 specifically comprises:
S125a, using the image target boxes annotated in an image target box ground-truth database DB2 to learn, by formula (7), the position mapping relationship {λx, λy, λw, λh, bx, by} between the millimeter-wave radar target output boxes transformed from the millimeter-wave radar coordinate system into the image coordinate system and the annotated image target boxes;
in formula (7), {λx, λy, λw, λh, bx, by} are the learned parameters; the coordinates of the real obstacle target in the image corresponding to an obstacle detected by the millimeter-wave radar are expressed as (x_gt, y_gt, w_gt, h_gt), where x_gt and y_gt are the horizontal and vertical coordinates of the center of the box and w_gt and h_gt its width and height; the coordinates of the obstacle target transformed from the millimeter-wave radar coordinate system into the image coordinate system are expressed as (x_cam, y_cam, w_cam, h_cam), where x_cam and y_cam are the horizontal and vertical coordinates of the center of the image target box in the image coordinate system, and w_cam and h_cam its width and height in the image coordinate system;
S125b, borrowing the RPN network from the Faster R-CNN target detection model, using the length-width distribution of the image target boxes annotated in the image target box ground-truth database DB2 with the k-means clustering algorithm to design target candidate box sizes adapted to DB2, performing expansion learning of the millimeter-wave radar target output boxes, and outputting as many accurate millimeter-wave radar target expansion boxes containing the real obstacle targets as possible.
7. The traversable-area detection method for an intelligent vehicle based on multi-source information fusion according to claim 6, characterized in that S100 further comprises:
S131, acquiring image data returned by a camera;
S132, parsing the image data received in S131 to obtain a three-channel BGR PNG image;
S133, obtaining lidar-camera calibration parameters;
S134, according to the lidar-camera calibration parameters obtained in S133, transforming the binarized grid map including road boundary point information from the lidar coordinate system into the shared region of the image coordinate system, generating a region of interest;
S135, performing parameter learning on the YOLOv3 model according to the image target box ground-truth database DB2, and generating a YOLOv3_Camera target detection model for multi-target detection on images;
S136, performing multi-target detection with the YOLOv3_Camera target detection model obtained in S135 within the region of interest of the image plane generated in S134, and outputting image data, the information of each obstacle target in the image data being denoted {x, y, w, h, c, o}, where (x, y) is the coordinate of the top-left corner of the image target box in the image coordinate system, w is the width of the image target box, h is its height, c is the major and minor category of the obstacle target, and o is the orientation information of the obstacle target.
8. The traversable-area detection method for an intelligent vehicle based on multi-source information fusion according to claim 7, characterized in that "performing single-frame target fusion on all the detected obstacle information of the vehicle surroundings" in S200 comprises:
obtaining camera-vehicle calibration parameters, and converting the target boxes in the image coordinate system into target boxes in the vehicle coordinate system;
according to the millimeter-wave radar-camera calibration parameters and the lidar-camera calibration parameters, spatially synchronizing the obstacle target information of the vehicle surroundings detected by the onboard sensors in the single frame under the same timestamp, and transforming it successively into the image coordinate system and then the vehicle coordinate system; and
taking the camera detection results as the reference, matching the corresponding millimeter-wave radar and lidar information based on the global nearest neighbor algorithm, and obtaining unified obstacle target information comprising the position, distance, category and velocity of each obstacle target.
9. The intelligent vehicle passable area detection method based on multi-source information fusion as claimed in claim 8, wherein "performing multi-target tracking over consecutive frames using motion prediction and multi-frame target association" in S200 comprises:
designing, for the Car, Pedestrian, and Rider classes among the obstacle targets of S221, three separate long short-term memory (LSTM) networks for motion prediction, involving the position information (x, y) and scale information (w, h) of the targets;
training, according to the class o ∈ {Car, Pedestrian, Rider}, the LSTM networks designed in S222, with the preceding N frames as input data and frame N+1 as the prediction/output data, to form the LSTM motion prediction models;
matching, for the three determined classes of obstacle targets and according to the different tracking IDs in the image-target-frame ground-truth database DB2, the data (x, y, w, h)_{i-N+1~i+1} of the same obstacle target over N+1 consecutive frames, where (x, y) is the position information of the predicted target frame and (w, h) is the scale information of the predicted target frame;
using the trained LSTM model to test the motion data (x, y, w, h)_{i-N+1~i} of the same obstacle target over N consecutive frames, and predicting the motion information (x, y, w, h)_{i+1} of the obstacle target in the next frame; and
taking the position and scale information of the obstacle targets, together with the speed, class, distance, and orientation attributes of the fused obstacle targets, as association attributes, performing association matching on the multiple targets of consecutive frames using the Hungarian algorithm, assigning the same tracking ID number to the same obstacle target, and outputting the associated dynamic obstacle target library {x, y, w, h, c, id, v, o};
wherein N is the number of input frames of the LSTM motion prediction model, and i is the frame index.
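The inter-frame association in the last step can be sketched as follows (function names hypothetical). Euclidean distance between frame centres stands in for the multi-attribute cost the claim describes, and because only a handful of targets appear per frame, an exhaustive search over one-to-one assignments returns the same minimum-total-cost result the Hungarian algorithm computes in polynomial time:

```python
from itertools import permutations

def associate(pred_frames, det_frames, gate=50.0):
    """One-to-one association between motion-predicted target frames and
    the current frame's detections, each given as (x, y, w, h) with (x, y)
    the upper-left corner.  Assumes len(pred_frames) <= len(det_frames).
    Returns {predicted-frame index: detection index}; pairings whose cost
    exceeds `gate` are discarded so those targets stay unassociated.
    """
    def centre(f):
        x, y, w, h = f
        return (x + w / 2.0, y + h / 2.0)

    def cost(p, d):
        (px, py), (qx, qy) = centre(p), centre(d)
        return ((px - qx) ** 2 + (py - qy) ** 2) ** 0.5

    best_total, best_perm = float("inf"), None
    # Brute-force minimum-cost assignment (Hungarian-equivalent for small n).
    for perm in permutations(range(len(det_frames)), len(pred_frames)):
        total = sum(cost(pred_frames[i], det_frames[j])
                    for i, j in enumerate(perm))
        if total < best_total:
            best_total, best_perm = total, perm
    if best_perm is None:
        return {}
    return {i: j for i, j in enumerate(best_perm)
            if cost(pred_frames[i], det_frames[j]) <= gate}
```

Matched pairs inherit the existing tracking ID; unmatched detections would start new tracks.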
10. The intelligent vehicle passable area detection method based on multi-source information fusion as claimed in claim 11, wherein S300 specifically comprises:
S310, receiving the updated binarized rasterized map output by the lidar detection unit 21 in the multi-source multi-target detection module 2 and the dynamic target library formed by the multi-frame target association unit 33;
S320, updating the dynamic obstacle target library using the information of the updated binarized rasterized map; and
S330, updating the real-time obstacle target positions and motion information according to the dynamic obstacle target library updated in S320, and outputting the passable area of the vehicle.
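The claim does not spell out how the binarized rasterized map updates the dynamic obstacle library in S320. One plausible minimal reading, sketched with hypothetical names, is to cross-check each dynamic target's vehicle-frame position against the occupancy grid and drop targets that fall in cells marked free:

```python
def update_dynamic_targets(grid, cell_size, targets):
    """Cross-check the dynamic obstacle library against a binarised
    occupancy grid.

    grid: 2-D list indexed [row][col], 1 = occupied, 0 = free.
    cell_size: edge length of one grid cell, in the same units as the
    targets' 'x'/'y' coordinates.  A target whose position falls outside
    the grid or in a free cell is dropped; the rest are kept.
    """
    kept = []
    for t in targets:
        col = int(t["x"] // cell_size)
        row = int(t["y"] // cell_size)
        inside = 0 <= row < len(grid) and 0 <= col < len(grid[0])
        if inside and grid[row][col] == 1:
            kept.append(t)
    return kept
```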
CN201910007212.5A 2019-01-04 2019-01-04 Intelligent vehicle passable area detection method based on multi-source information fusion Active CN109829386B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910007212.5A CN109829386B (en) 2019-01-04 2019-01-04 Intelligent vehicle passable area detection method based on multi-source information fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910007212.5A CN109829386B (en) 2019-01-04 2019-01-04 Intelligent vehicle passable area detection method based on multi-source information fusion

Publications (2)

Publication Number Publication Date
CN109829386A true CN109829386A (en) 2019-05-31
CN109829386B CN109829386B (en) 2020-12-11

Family

ID=66860082

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910007212.5A Active CN109829386B (en) 2019-01-04 2019-01-04 Intelligent vehicle passable area detection method based on multi-source information fusion

Country Status (1)

Country Link
CN (1) CN109829386B (en)

Cited By (98)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110286389A (en) * 2019-07-15 2019-09-27 北京智行者科技有限公司 A kind of grid management method for obstacle recognition
CN110309741A (en) * 2019-06-19 2019-10-08 百度在线网络技术(北京)有限公司 Obstacle detection method and device
CN110390814A (en) * 2019-06-04 2019-10-29 深圳市速腾聚创科技有限公司 Monitoring system and method
CN110501700A (en) * 2019-08-27 2019-11-26 四川长虹电器股份有限公司 A kind of personnel amount method of counting based on millimetre-wave radar
CN110502018A (en) * 2019-09-06 2019-11-26 百度在线网络技术(北京)有限公司 Determine method, apparatus, electronic equipment and the storage medium of vehicle safety zone
CN110533025A (en) * 2019-07-15 2019-12-03 西安电子科技大学 The millimeter wave human body image detection method of network is extracted based on candidate region
CN110568861A (en) * 2019-09-19 2019-12-13 中国电子科技集团公司电子科学研究院 Man-machine movement obstacle monitoring method, readable storage medium and unmanned machine
CN110648538A (en) * 2019-10-29 2020-01-03 苏州大学 Traffic information sensing system and method based on laser radar network
CN110677491A (en) * 2019-10-10 2020-01-10 郑州迈拓信息技术有限公司 Method for estimating position of vehicle
CN110688943A (en) * 2019-09-25 2020-01-14 武汉光庭信息技术股份有限公司 Method and device for automatically acquiring image sample based on actual driving data
CN110738121A (en) * 2019-09-17 2020-01-31 北京科技大学 front vehicle detection method and detection system
CN110781720A (en) * 2019-09-05 2020-02-11 国网江苏省电力有限公司 Object identification method based on image processing and multi-sensor fusion
CN110795819A (en) * 2019-09-16 2020-02-14 腾讯科技(深圳)有限公司 Method and device for generating automatic driving simulation scene and storage medium
CN110827320A (en) * 2019-09-17 2020-02-21 北京邮电大学 Target tracking method and device based on time sequence prediction
CN110853393A (en) * 2019-11-26 2020-02-28 清华大学 Intelligent network vehicle test field data acquisition and fusion method and system
CN110927765A (en) * 2019-11-19 2020-03-27 博康智能信息技术有限公司 Laser radar and satellite navigation fused target online positioning method
CN110969130A (en) * 2019-12-03 2020-04-07 厦门瑞为信息技术有限公司 Driver dangerous action identification method and system based on YOLOV3
CN111027461A (en) * 2019-12-06 2020-04-17 长安大学 Vehicle track prediction method based on multi-dimensional single-step LSTM network
CN111076726A (en) * 2019-12-31 2020-04-28 深圳供电局有限公司 Vision-assisted obstacle avoidance method and device for inspection robot, equipment and storage medium
CN111123262A (en) * 2020-03-30 2020-05-08 江苏广宇科技产业发展有限公司 Automatic driving 3D modeling method, device and system
CN111192295A (en) * 2020-04-14 2020-05-22 中智行科技有限公司 Target detection and tracking method, related device and computer readable storage medium
CN111191600A (en) * 2019-12-30 2020-05-22 深圳元戎启行科技有限公司 Obstacle detection method, obstacle detection device, computer device, and storage medium
CN111208839A (en) * 2020-04-24 2020-05-29 清华大学 Fusion method and system of real-time perception information and automatic driving map
CN111257892A (en) * 2020-01-09 2020-06-09 武汉理工大学 Obstacle detection method for automatic driving of vehicle
US20200180646A1 (en) * 2018-12-05 2020-06-11 Hyundai Motor Company Sensor fusion target prediction device and method for vehicles and vehicle including the device
CN111289969A (en) * 2020-03-27 2020-06-16 北京润科通用技术有限公司 Vehicle-mounted radar moving target fusion method and device
CN111311945A (en) * 2020-02-20 2020-06-19 南京航空航天大学 Driving decision system and method fusing vision and sensor information
CN111338336A (en) * 2020-02-11 2020-06-26 腾讯科技(深圳)有限公司 Automatic driving method and device
CN111353481A (en) * 2019-12-31 2020-06-30 成都理工大学 Road obstacle identification method based on laser point cloud and video image
CN111413983A (en) * 2020-04-08 2020-07-14 江苏盛海智能科技有限公司 Environment sensing method and control end of unmanned vehicle
CN111429791A (en) * 2020-04-09 2020-07-17 浙江大华技术股份有限公司 Identity determination method, identity determination device, storage medium and electronic device
CN111507233A (en) * 2020-04-13 2020-08-07 吉林大学 Multi-mode information fusion intelligent vehicle pavement type identification method
CN111516605A (en) * 2020-04-28 2020-08-11 上汽大众汽车有限公司 Multi-sensor monitoring equipment and monitoring method
CN111680611A (en) * 2020-06-03 2020-09-18 江苏无线电厂有限公司 Road trafficability detection method, system and equipment
CN111783905A (en) * 2020-09-07 2020-10-16 成都安智杰科技有限公司 Target fusion method and device, storage medium and electronic equipment
CN111880191A (en) * 2020-06-16 2020-11-03 北京大学 Map generation method based on multi-agent laser radar and visual information fusion
CN111898582A (en) * 2020-08-13 2020-11-06 清华大学苏州汽车研究院(吴江) Obstacle information fusion method and system for binocular camera and millimeter wave radar
CN111967374A (en) * 2020-08-14 2020-11-20 安徽海博智能科技有限责任公司 Mine obstacle identification method, system and equipment based on image processing
CN112033429A (en) * 2020-09-14 2020-12-04 吉林大学 Target-level multi-sensor fusion method for intelligent automobile
CN112069856A (en) * 2019-06-10 2020-12-11 商汤集团有限公司 Map generation method, driving control method, device, electronic equipment and system
CN112083400A (en) * 2020-08-21 2020-12-15 达闼机器人有限公司 Calibration method, device and storage medium for moving object and sensor thereof
CN112084810A (en) * 2019-06-12 2020-12-15 杭州海康威视数字技术股份有限公司 Obstacle detection method and device, electronic equipment and storage medium
CN112115819A (en) * 2020-09-03 2020-12-22 同济大学 Driving danger scene identification method based on target detection and TET (transient enhanced test) expansion index
WO2020253764A1 (en) * 2019-06-18 2020-12-24 华为技术有限公司 Method and apparatus for determining running region information
CN112130132A (en) * 2020-09-11 2020-12-25 广州大学 Underground pipeline detection method and system based on ground penetrating radar and deep learning
CN112179360A (en) * 2019-06-14 2021-01-05 北京京东尚科信息技术有限公司 Map generation method, apparatus, system and medium
CN112215144A (en) * 2020-10-12 2021-01-12 北京四维智联科技有限公司 Method and system for processing lane line
CN112233097A (en) * 2020-10-19 2021-01-15 中国科学技术大学 Road scene other vehicle detection system and method based on space-time domain multi-dimensional fusion
CN112348894A (en) * 2020-11-03 2021-02-09 中冶赛迪重庆信息技术有限公司 Method, system, equipment and medium for identifying position and state of scrap steel truck
CN112348848A (en) * 2020-10-26 2021-02-09 国汽(北京)智能网联汽车研究院有限公司 Information generation method and system for traffic participants
CN112389440A (en) * 2020-11-07 2021-02-23 吉林大学 Vehicle driving risk prediction method in off-road environment based on vehicle-road action mechanism
CN112560974A (en) * 2020-12-22 2021-03-26 清华大学 Information fusion and vehicle information acquisition method and device
CN112558072A (en) * 2020-12-22 2021-03-26 北京百度网讯科技有限公司 Vehicle positioning method, device, system, electronic equipment and storage medium
CN112686979A (en) * 2021-03-22 2021-04-20 中智行科技有限公司 Simulated pedestrian animation generation method and device and electronic equipment
WO2021072696A1 (en) * 2019-10-17 2021-04-22 深圳市大疆创新科技有限公司 Target detection and tracking method and system, and movable platform, camera and medium
CN112767475A (en) * 2020-12-30 2021-05-07 重庆邮电大学 Intelligent roadside sensing system based on C-V2X, radar and vision
CN112764042A (en) * 2020-12-28 2021-05-07 上海汽车集团股份有限公司 Obstacle detection and tracking method and device
CN112763995A (en) * 2020-12-24 2021-05-07 北京百度网讯科技有限公司 Radar calibration method and device, electronic equipment and road side equipment
CN113093221A (en) * 2021-03-31 2021-07-09 东软睿驰汽车技术(沈阳)有限公司 Generation method and device of grid-occupied map
CN113110424A (en) * 2021-03-26 2021-07-13 大连海事大学 Unmanned ship collision avoidance method based on chart information
CN113177427A (en) * 2020-01-23 2021-07-27 宝马股份公司 Road prediction method, autonomous driving method, vehicle and equipment
CN113256962A (en) * 2020-02-13 2021-08-13 宁波吉利汽车研究开发有限公司 Vehicle safety early warning method and system
CN113296118A (en) * 2021-05-24 2021-08-24 福建盛海智能科技有限公司 Unmanned obstacle-avoiding method and terminal based on laser radar and GPS
CN113344954A (en) * 2021-05-06 2021-09-03 加特兰微电子科技(上海)有限公司 Boundary detection method and device, computer equipment, storage medium and sensor
CN113379805A (en) * 2021-08-12 2021-09-10 深圳市城市交通规划设计研究中心股份有限公司 Multi-information resource fusion processing method for traffic nodes
CN113496163A (en) * 2020-04-01 2021-10-12 北京京东乾石科技有限公司 Obstacle identification method and device
CN113642616A (en) * 2021-07-27 2021-11-12 北京三快在线科技有限公司 Method and device for generating training sample based on environmental data
CN113657331A (en) * 2021-08-23 2021-11-16 深圳科卫机器人科技有限公司 Warning line infrared induction identification method and device, computer equipment and storage medium
CN113671460A (en) * 2021-08-20 2021-11-19 上海商汤临港智能科技有限公司 Map generation method and device, computer equipment and storage medium
WO2021232463A1 (en) * 2020-05-19 2021-11-25 北京数字绿土科技有限公司 Multi-source mobile measurement point cloud data air-ground integrated fusion method and storage medium
CN113702967A (en) * 2021-09-24 2021-11-26 中国北方车辆研究所 Vehicle target guiding and tracking method of ground unmanned platform and vehicle-mounted system
CN113703460A (en) * 2021-08-31 2021-11-26 上海木蚁机器人科技有限公司 Method, device and system for identifying vacancy of navigation vehicle
CN113744518A (en) * 2020-05-30 2021-12-03 华为技术有限公司 Method and device for detecting vehicle travelable area
CN113805572A (en) * 2020-05-29 2021-12-17 华为技术有限公司 Method and device for planning movement
CN113917875A (en) * 2021-10-19 2022-01-11 河南工业大学 Open universal intelligent controller, method and storage medium for autonomous unmanned system
CN113920735A (en) * 2021-10-21 2022-01-11 中国第一汽车股份有限公司 Information fusion method and device, electronic equipment and storage medium
CN113932820A (en) * 2020-06-29 2022-01-14 杭州海康威视数字技术股份有限公司 Object detection method and device
CN113962301A (en) * 2021-10-20 2022-01-21 北京理工大学 Multi-source input signal fused pavement quality detection method and system
CN114067556A (en) * 2020-08-05 2022-02-18 北京万集科技股份有限公司 Environment sensing method, device, server and readable storage medium
CN114092388A (en) * 2021-08-30 2022-02-25 河南笛卡尔机器人科技有限公司 Obstacle detection method based on monocular camera and odometer
CN114267191A (en) * 2021-12-10 2022-04-01 北京理工大学 Control system, method, medium, equipment and application for relieving traffic jam of driver
WO2022078463A1 (en) * 2020-10-16 2022-04-21 爱驰汽车(上海)有限公司 Vehicle-based obstacle detection method and device
CN114500736A (en) * 2020-10-23 2022-05-13 广州汽车集团股份有限公司 Intelligent terminal motion trajectory decision method and system and storage medium
CN114494248A (en) * 2022-04-01 2022-05-13 之江实验室 Three-dimensional target detection system and method based on point cloud and images under different visual angles
CN114563007A (en) * 2022-04-28 2022-05-31 新石器慧通(北京)科技有限公司 Obstacle motion state prediction method, obstacle motion state prediction device, electronic device, and storage medium
US20220194430A1 (en) * 2019-11-01 2022-06-23 Mitsubishi Electric Corporation Information processing device, information processing system, and information processing method
CN115100633A (en) * 2022-08-24 2022-09-23 广东中科凯泽信息科技有限公司 Obstacle identification method based on machine learning
CN115100631A (en) * 2022-07-18 2022-09-23 浙江省交通运输科学研究院 Road map acquisition system and method for multi-source information composite feature extraction
CN115222767A (en) * 2022-04-12 2022-10-21 广州汽车集团股份有限公司 Space parking stall-based tracking method and system
CN115516538A (en) * 2020-06-25 2022-12-23 株式会社日立制作所 Information management system, information management device, and information management method
CN115691221A (en) * 2022-12-16 2023-02-03 山东矩阵软件工程股份有限公司 Vehicle early warning method, vehicle early warning system and related device
CN115797900A (en) * 2021-09-09 2023-03-14 廊坊和易生活网络科技股份有限公司 Monocular vision-based vehicle road posture sensing method
CN115900771A (en) * 2023-03-08 2023-04-04 小米汽车科技有限公司 Information determination method and device, vehicle and storage medium
CN115965682A (en) * 2022-12-16 2023-04-14 镁佳(北京)科技有限公司 Method and device for determining passable area of vehicle and computer equipment
CN116453205A (en) * 2022-11-22 2023-07-18 深圳市旗扬特种装备技术工程有限公司 Method, device and system for identifying stay behavior of commercial vehicle
CN116592871A (en) * 2023-04-28 2023-08-15 连云港杰瑞科创园管理有限公司 Unmanned ship multi-source target information fusion method
WO2023173699A1 (en) * 2022-03-18 2023-09-21 合众新能源汽车股份有限公司 Machine learning-based assisted driving method and apparatus, and computer-readable medium
CN117456108A (en) * 2023-12-22 2024-01-26 四川省安全科学技术研究院 Three-dimensional data acquisition method for line laser sensor and high-definition camera

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160224850A1 (en) * 2013-12-06 2016-08-04 Google Inc. Static Obstacle Detection
CN106291736A (en) * 2016-08-16 2017-01-04 张家港长安大学汽车工程研究院 Pilotless automobile track dynamic disorder object detecting method
CN106908783A (en) * 2017-02-23 2017-06-30 苏州大学 Obstacle detection method based on multi-sensor information fusion
CN108509918A (en) * 2018-04-03 2018-09-07 中国人民解放军国防科技大学 Target detection and tracking method fusing laser point cloud and image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Xiao, "Research on Three-Dimensional Perception Methods for Dynamic Targets of Intelligent Vehicles in Complex Environments", China Doctoral Dissertations Full-text Database, Engineering Science & Technology II *

Cited By (147)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200180646A1 (en) * 2018-12-05 2020-06-11 Hyundai Motor Company Sensor fusion target prediction device and method for vehicles and vehicle including the device
US11748593B2 (en) * 2018-12-05 2023-09-05 Hyundai Motor Company Sensor fusion target prediction device and method for vehicles and vehicle including the device
CN110390814A (en) * 2019-06-04 2019-10-29 深圳市速腾聚创科技有限公司 Monitoring system and method
WO2020248614A1 (en) * 2019-06-10 2020-12-17 商汤集团有限公司 Map generation method, drive control method and apparatus, electronic equipment and system
CN112069856B (en) * 2019-06-10 2024-06-14 商汤集团有限公司 Map generation method, driving control device, electronic equipment and system
CN112069856A (en) * 2019-06-10 2020-12-11 商汤集团有限公司 Map generation method, driving control method, device, electronic equipment and system
CN112084810A (en) * 2019-06-12 2020-12-15 杭州海康威视数字技术股份有限公司 Obstacle detection method and device, electronic equipment and storage medium
CN112084810B (en) * 2019-06-12 2024-03-08 杭州海康威视数字技术股份有限公司 Obstacle detection method and device, electronic equipment and storage medium
CN112179360A (en) * 2019-06-14 2021-01-05 北京京东尚科信息技术有限公司 Map generation method, apparatus, system and medium
US20220108552A1 (en) 2019-06-18 2022-04-07 Huawei Technologies Co., Ltd. Method and Apparatus for Determining Drivable Region Information
US11698459B2 (en) 2019-06-18 2023-07-11 Huawei Technologies Co., Ltd. Method and apparatus for determining drivable region information
WO2020253764A1 (en) * 2019-06-18 2020-12-24 华为技术有限公司 Method and apparatus for determining running region information
CN110309741A (en) * 2019-06-19 2019-10-08 百度在线网络技术(北京)有限公司 Obstacle detection method and device
CN110533025A (en) * 2019-07-15 2019-12-03 西安电子科技大学 The millimeter wave human body image detection method of network is extracted based on candidate region
CN110286389A (en) * 2019-07-15 2019-09-27 北京智行者科技有限公司 A kind of grid management method for obstacle recognition
CN110501700A (en) * 2019-08-27 2019-11-26 四川长虹电器股份有限公司 A kind of personnel amount method of counting based on millimetre-wave radar
CN110781720A (en) * 2019-09-05 2020-02-11 国网江苏省电力有限公司 Object identification method based on image processing and multi-sensor fusion
CN110781720B (en) * 2019-09-05 2022-08-19 国网江苏省电力有限公司 Object identification method based on image processing and multi-sensor fusion
CN110502018A (en) * 2019-09-06 2019-11-26 百度在线网络技术(北京)有限公司 Determine method, apparatus, electronic equipment and the storage medium of vehicle safety zone
CN110502018B (en) * 2019-09-06 2022-04-12 百度在线网络技术(北京)有限公司 Method and device for determining vehicle safety area, electronic equipment and storage medium
CN110795819A (en) * 2019-09-16 2020-02-14 腾讯科技(深圳)有限公司 Method and device for generating automatic driving simulation scene and storage medium
CN110795819B (en) * 2019-09-16 2022-05-20 腾讯科技(深圳)有限公司 Method and device for generating automatic driving simulation scene and storage medium
CN110738121A (en) * 2019-09-17 2020-01-31 北京科技大学 front vehicle detection method and detection system
CN110827320A (en) * 2019-09-17 2020-02-21 北京邮电大学 Target tracking method and device based on time sequence prediction
CN110827320B (en) * 2019-09-17 2022-05-20 北京邮电大学 Target tracking method and device based on time sequence prediction
CN110568861B (en) * 2019-09-19 2022-09-16 中国电子科技集团公司电子科学研究院 Man-machine movement obstacle monitoring method, readable storage medium and unmanned machine
CN110568861A (en) * 2019-09-19 2019-12-13 中国电子科技集团公司电子科学研究院 Man-machine movement obstacle monitoring method, readable storage medium and unmanned machine
CN110688943A (en) * 2019-09-25 2020-01-14 武汉光庭信息技术股份有限公司 Method and device for automatically acquiring image sample based on actual driving data
CN110677491A (en) * 2019-10-10 2020-01-10 郑州迈拓信息技术有限公司 Method for estimating position of vehicle
WO2021072696A1 (en) * 2019-10-17 2021-04-22 深圳市大疆创新科技有限公司 Target detection and tracking method and system, and movable platform, camera and medium
CN110648538A (en) * 2019-10-29 2020-01-03 苏州大学 Traffic information sensing system and method based on laser radar network
US20220194430A1 (en) * 2019-11-01 2022-06-23 Mitsubishi Electric Corporation Information processing device, information processing system, and information processing method
CN110927765B (en) * 2019-11-19 2022-02-08 博康智能信息技术有限公司 Laser radar and satellite navigation fused target online positioning method
CN110927765A (en) * 2019-11-19 2020-03-27 博康智能信息技术有限公司 Laser radar and satellite navigation fused target online positioning method
CN110853393A (en) * 2019-11-26 2020-02-28 清华大学 Intelligent network vehicle test field data acquisition and fusion method and system
CN110853393B (en) * 2019-11-26 2020-12-11 清华大学 Intelligent network vehicle test field data acquisition and fusion method and system
CN110969130B (en) * 2019-12-03 2023-04-18 厦门瑞为信息技术有限公司 Driver dangerous action identification method and system based on YOLOV3
CN110969130A (en) * 2019-12-03 2020-04-07 厦门瑞为信息技术有限公司 Driver dangerous action identification method and system based on YOLOV3
CN111027461A (en) * 2019-12-06 2020-04-17 长安大学 Vehicle track prediction method based on multi-dimensional single-step LSTM network
CN111027461B (en) * 2019-12-06 2022-04-29 长安大学 Vehicle track prediction method based on multi-dimensional single-step LSTM network
CN111191600A (en) * 2019-12-30 2020-05-22 深圳元戎启行科技有限公司 Obstacle detection method, obstacle detection device, computer device, and storage medium
CN111191600B (en) * 2019-12-30 2023-06-23 深圳元戎启行科技有限公司 Obstacle detection method, obstacle detection device, computer device, and storage medium
CN111353481A (en) * 2019-12-31 2020-06-30 成都理工大学 Road obstacle identification method based on laser point cloud and video image
CN111076726A (en) * 2019-12-31 2020-04-28 深圳供电局有限公司 Vision-assisted obstacle avoidance method and device for inspection robot, equipment and storage medium
CN111257892A (en) * 2020-01-09 2020-06-09 武汉理工大学 Obstacle detection method for automatic driving of vehicle
CN113177427A (en) * 2020-01-23 2021-07-27 宝马股份公司 Road prediction method, autonomous driving method, vehicle and equipment
CN111338336B (en) * 2020-02-11 2021-07-13 腾讯科技(深圳)有限公司 Automatic driving method and device
CN111338336A (en) * 2020-02-11 2020-06-26 腾讯科技(深圳)有限公司 Automatic driving method and device
CN113256962B (en) * 2020-02-13 2022-12-23 宁波吉利汽车研究开发有限公司 Vehicle safety early warning method and system
CN113256962A (en) * 2020-02-13 2021-08-13 宁波吉利汽车研究开发有限公司 Vehicle safety early warning method and system
CN111311945A (en) * 2020-02-20 2020-06-19 南京航空航天大学 Driving decision system and method fusing vision and sensor information
CN111289969B (en) * 2020-03-27 2022-03-04 北京润科通用技术有限公司 Vehicle-mounted radar moving target fusion method and device
CN111289969A (en) * 2020-03-27 2020-06-16 北京润科通用技术有限公司 Vehicle-mounted radar moving target fusion method and device
CN111123262B (en) * 2020-03-30 2020-06-26 江苏广宇科技产业发展有限公司 Automatic driving 3D modeling method, device and system
CN111123262A (en) * 2020-03-30 2020-05-08 江苏广宇科技产业发展有限公司 Automatic driving 3D modeling method, device and system
CN113496163B (en) * 2020-04-01 2024-01-16 北京京东乾石科技有限公司 Obstacle recognition method and device
CN113496163A (en) * 2020-04-01 2021-10-12 北京京东乾石科技有限公司 Obstacle identification method and device
CN111413983A (en) * 2020-04-08 2020-07-14 江苏盛海智能科技有限公司 Environment sensing method and control end of unmanned vehicle
CN111429791A (en) * 2020-04-09 2020-07-17 浙江大华技术股份有限公司 Identity determination method, identity determination device, storage medium and electronic device
CN111507233B (en) * 2020-04-13 2022-12-13 吉林大学 Multi-mode information fusion intelligent vehicle pavement type identification method
CN111507233A (en) * 2020-04-13 2020-08-07 吉林大学 Multi-mode information fusion intelligent vehicle pavement type identification method
CN111192295A (en) * 2020-04-14 2020-05-22 中智行科技有限公司 Target detection and tracking method, related device and computer readable storage medium
CN111208839A (en) * 2020-04-24 2020-05-29 清华大学 Fusion method and system of real-time perception information and automatic driving map
CN111516605A (en) * 2020-04-28 2020-08-11 上汽大众汽车有限公司 Multi-sensor monitoring equipment and monitoring method
CN111516605B (en) * 2020-04-28 2021-07-27 上汽大众汽车有限公司 Multi-sensor monitoring equipment and monitoring method
WO2021232463A1 (en) * 2020-05-19 2021-11-25 北京数字绿土科技有限公司 Multi-source mobile measurement point cloud data air-ground integrated fusion method and storage medium
CN113805572A (en) * 2020-05-29 2021-12-17 华为技术有限公司 Method and device for planning movement
CN113805572B (en) * 2020-05-29 2023-12-15 华为技术有限公司 Method and device for motion planning
CN113744518B (en) * 2020-05-30 2023-04-18 华为技术有限公司 Method and device for detecting vehicle travelable area
CN113744518A (en) * 2020-05-30 2021-12-03 华为技术有限公司 Method and device for detecting vehicle travelable area
CN111680611A (en) * 2020-06-03 2020-09-18 江苏无线电厂有限公司 Road trafficability detection method, system and equipment
CN111680611B (en) * 2020-06-03 2023-06-16 江苏无线电厂有限公司 Road trafficability detection method, system and equipment
CN111880191A (en) * 2020-06-16 2020-11-03 北京大学 Map generation method based on multi-agent laser radar and visual information fusion
CN111880191B (en) * 2020-06-16 2023-03-28 北京大学 Map generation method based on multi-agent laser radar and visual information fusion
CN115516538A (en) * 2020-06-25 2022-12-23 株式会社日立制作所 Information management system, information management device, and information management method
CN113932820A (en) * 2020-06-29 2022-01-14 杭州海康威视数字技术股份有限公司 Object detection method and device
CN114067556B (en) * 2020-08-05 2023-03-14 北京万集科技股份有限公司 Environment sensing method, device, server and readable storage medium
CN114067556A (en) * 2020-08-05 2022-02-18 北京万集科技股份有限公司 Environment sensing method, device, server and readable storage medium
CN111898582A (en) * 2020-08-13 2020-11-06 清华大学苏州汽车研究院(吴江) Obstacle information fusion method and system for binocular camera and millimeter wave radar
CN111898582B (en) * 2020-08-13 2023-09-12 清华大学苏州汽车研究院(吴江) Obstacle information fusion method and system for binocular camera and millimeter wave radar
CN111967374A (en) * 2020-08-14 2020-11-20 安徽海博智能科技有限责任公司 Mine obstacle identification method, system and equipment based on image processing
CN112083400A (en) * 2020-08-21 2020-12-15 达闼机器人有限公司 Calibration method, device and storage medium for moving object and sensor thereof
CN112115819A (en) * 2020-09-03 2020-12-22 同济大学 Driving danger scene identification method based on target detection and TET (transient enhanced test) expansion index
CN112115819B (en) * 2020-09-03 2022-09-20 同济大学 Driving danger scene identification method based on target detection and TET (transient enhanced test) expansion index
CN111783905A (en) * 2020-09-07 2020-10-16 成都安智杰科技有限公司 Target fusion method and device, storage medium and electronic equipment
CN112130132B (en) * 2020-09-11 2023-08-29 广州大学 Underground pipeline detection method and system based on ground penetrating radar and deep learning
CN112130132A (en) * 2020-09-11 2020-12-25 广州大学 Underground pipeline detection method and system based on ground penetrating radar and deep learning
CN112033429A (en) * 2020-09-14 2020-12-04 吉林大学 Target-level multi-sensor fusion method for intelligent automobile
CN112033429B (en) * 2020-09-14 2022-07-19 吉林大学 Target-level multi-sensor fusion method for intelligent automobile
CN112215144A (en) * 2020-10-12 2021-01-12 北京四维智联科技有限公司 Method and system for processing lane line
CN112215144B (en) * 2020-10-12 2024-05-14 北京四维智联科技有限公司 Method and system for processing lane lines
WO2022078463A1 (en) * 2020-10-16 2022-04-21 爱驰汽车(上海)有限公司 Vehicle-based obstacle detection method and device
CN112233097A (en) * 2020-10-19 2021-01-15 中国科学技术大学 Road scene other vehicle detection system and method based on space-time domain multi-dimensional fusion
CN112233097B (en) * 2020-10-19 2022-10-28 中国科学技术大学 Road scene other vehicle detection system and method based on space-time domain multi-dimensional fusion
CN114500736B (en) * 2020-10-23 2023-12-05 广州汽车集团股份有限公司 Intelligent terminal motion trail decision method and system and storage medium thereof
CN114500736A (en) * 2020-10-23 2022-05-13 广州汽车集团股份有限公司 Intelligent terminal motion trajectory decision method and system and storage medium
CN112348848A (en) * 2020-10-26 2021-02-09 国汽(北京)智能网联汽车研究院有限公司 Information generation method and system for traffic participants
CN112348894A (en) * 2020-11-03 2021-02-09 中冶赛迪重庆信息技术有限公司 Method, system, equipment and medium for identifying position and state of scrap steel truck
CN112389440A (en) * 2020-11-07 2021-02-23 吉林大学 Vehicle driving risk prediction method in off-road environment based on vehicle-road action mechanism
CN112560974A (en) * 2020-12-22 2021-03-26 清华大学 Information fusion and vehicle information acquisition method and device
CN112558072A (en) * 2020-12-22 2021-03-26 北京百度网讯科技有限公司 Vehicle positioning method, device, system, electronic equipment and storage medium
CN112558072B (en) * 2020-12-22 2024-05-28 阿波罗智联(北京)科技有限公司 Vehicle positioning method, device, system, electronic equipment and storage medium
CN112763995A (en) * 2020-12-24 2021-05-07 北京百度网讯科技有限公司 Radar calibration method and device, electronic equipment and road side equipment
CN112763995B (en) * 2020-12-24 2023-09-01 阿波罗智联(北京)科技有限公司 Radar calibration method and device, electronic equipment and road side equipment
CN112764042A (en) * 2020-12-28 2021-05-07 上海汽车集团股份有限公司 Obstacle detection and tracking method and device
CN112764042B (en) * 2020-12-28 2023-11-21 上海汽车集团股份有限公司 Obstacle detection and tracking method and device
CN112767475A (en) * 2020-12-30 2021-05-07 重庆邮电大学 Intelligent roadside sensing system based on C-V2X, radar and vision
CN112686979A (en) * 2021-03-22 2021-04-20 中智行科技有限公司 Simulated pedestrian animation generation method and device and electronic equipment
CN113110424A (en) * 2021-03-26 2021-07-13 大连海事大学 Unmanned ship collision avoidance method based on chart information
CN113093221A (en) * 2021-03-31 2021-07-09 东软睿驰汽车技术(沈阳)有限公司 Generation method and device of grid-occupied map
CN113344954A (en) * 2021-05-06 2021-09-03 加特兰微电子科技(上海)有限公司 Boundary detection method and device, computer equipment, storage medium and sensor
CN113296118A (en) * 2021-05-24 2021-08-24 福建盛海智能科技有限公司 Unmanned obstacle-avoiding method and terminal based on laser radar and GPS
CN113296118B (en) * 2021-05-24 2023-11-24 江苏盛海智能科技有限公司 Unmanned obstacle detouring method and terminal based on laser radar and GPS
CN113642616B (en) * 2021-07-27 2023-10-31 北京三快在线科技有限公司 Training sample generation method and device based on environment data
CN113642616A (en) * 2021-07-27 2021-11-12 北京三快在线科技有限公司 Method and device for generating training sample based on environmental data
CN113379805A (en) * 2021-08-12 2021-09-10 深圳市城市交通规划设计研究中心股份有限公司 Multi-information resource fusion processing method for traffic nodes
CN113671460A (en) * 2021-08-20 2021-11-19 上海商汤临港智能科技有限公司 Map generation method and device, computer equipment and storage medium
CN113671460B (en) * 2021-08-20 2024-03-22 上海商汤临港智能科技有限公司 Map generation method, map generation device, computer equipment and storage medium
CN113657331A (en) * 2021-08-23 2021-11-16 深圳科卫机器人科技有限公司 Warning line infrared sensing identification method and device, computer equipment and storage medium
CN114092388A (en) * 2021-08-30 2022-02-25 河南笛卡尔机器人科技有限公司 Obstacle detection method based on monocular camera and odometer
CN113703460B (en) * 2021-08-31 2024-02-09 上海木蚁机器人科技有限公司 Method, device and system for identifying the vacant position of a navigation vehicle
CN113703460A (en) * 2021-08-31 2021-11-26 上海木蚁机器人科技有限公司 Method, device and system for identifying the vacant position of a navigation vehicle
CN115797900A (en) * 2021-09-09 2023-03-14 廊坊和易生活网络科技股份有限公司 Vehicle road pose sensing method based on monocular vision
CN115797900B (en) * 2021-09-09 2023-06-27 廊坊和易生活网络科技股份有限公司 Vehicle road pose sensing method based on monocular vision
CN113702967A (en) * 2021-09-24 2021-11-26 中国北方车辆研究所 Vehicle target guidance and tracking method for a ground unmanned platform, and vehicle-mounted system
CN113702967B (en) * 2021-09-24 2023-07-28 中国北方车辆研究所 Vehicle target guidance and tracking method for a ground unmanned platform, and vehicle-mounted system
CN113917875A (en) * 2021-10-19 2022-01-11 河南工业大学 Open universal intelligent controller, method and storage medium for autonomous unmanned system
CN113962301A (en) * 2021-10-20 2022-01-21 北京理工大学 Multi-source input signal fused pavement quality detection method and system
CN113920735A (en) * 2021-10-21 2022-01-11 中国第一汽车股份有限公司 Information fusion method and device, electronic equipment and storage medium
CN114267191A (en) * 2021-12-10 2022-04-01 北京理工大学 Control system, method, medium, device and application for relieving driver traffic congestion
WO2023173699A1 (en) * 2022-03-18 2023-09-21 合众新能源汽车股份有限公司 Machine learning-based assisted driving method and apparatus, and computer-readable medium
CN114494248A (en) * 2022-04-01 2022-05-13 之江实验室 Three-dimensional target detection system and method based on point cloud and images under different visual angles
CN115222767B (en) * 2022-04-12 2024-01-23 广州汽车集团股份有限公司 Tracking method and system based on spatial parking spaces
CN115222767A (en) * 2022-04-12 2022-10-21 广州汽车集团股份有限公司 Tracking method and system based on spatial parking spaces
CN114563007A (en) * 2022-04-28 2022-05-31 新石器慧通(北京)科技有限公司 Obstacle motion state prediction method, obstacle motion state prediction device, electronic device, and storage medium
CN115100631A (en) * 2022-07-18 2022-09-23 浙江省交通运输科学研究院 Road map acquisition system and method for multi-source information composite feature extraction
CN115100633A (en) * 2022-08-24 2022-09-23 广东中科凯泽信息科技有限公司 Obstacle identification method based on machine learning
CN116453205A (en) * 2022-11-22 2023-07-18 深圳市旗扬特种装备技术工程有限公司 Method, device and system for identifying stay behavior of commercial vehicle
CN115965682B (en) * 2022-12-16 2023-09-01 镁佳(北京)科技有限公司 Vehicle passable area determining method and device and computer equipment
CN115965682A (en) * 2022-12-16 2023-04-14 镁佳(北京)科技有限公司 Method and device for determining passable area of vehicle and computer equipment
CN115691221A (en) * 2022-12-16 2023-02-03 山东矩阵软件工程股份有限公司 Vehicle early warning method, vehicle early warning system and related device
CN115900771B (en) * 2023-03-08 2023-05-30 小米汽车科技有限公司 Information determination method, device, vehicle and storage medium
CN115900771A (en) * 2023-03-08 2023-04-04 小米汽车科技有限公司 Information determination method and device, vehicle and storage medium
CN116592871A (en) * 2023-04-28 2023-08-15 连云港杰瑞科创园管理有限公司 Unmanned ship multi-source target information fusion method
CN116592871B (en) * 2023-04-28 2024-04-23 连云港杰瑞科创园管理有限公司 Unmanned ship multi-source target information fusion method
CN117456108A (en) * 2023-12-22 2024-01-26 四川省安全科学技术研究院 Three-dimensional data acquisition method for line laser sensor and high-definition camera
CN117456108B (en) * 2023-12-22 2024-02-23 四川省安全科学技术研究院 Three-dimensional data acquisition method for line laser sensor and high-definition camera

Also Published As

Publication number Publication date
CN109829386B (en) 2020-12-11

Similar Documents

Publication Publication Date Title
CN109829386A (en) Traversable area detection method for intelligent vehicles based on multi-source information fusion
CN109556615B (en) Driving map generation method based on multi-sensor fusion cognition for automated driving
US11934962B2 (en) Object association for autonomous vehicles
Badue et al. Self-driving cars: A survey
US11885910B2 (en) Hybrid-view LIDAR-based object detection
US11836623B2 (en) Object detection and property determination for autonomous vehicles
US11912286B2 (en) Driving risk identification model calibration method and system
CN110531753A (en) Control system, control method and controller for an autonomous vehicle
CN109062209A (en) Intelligent assisted driving control system and control method therefor
US20210303922A1 (en) Systems and Methods for Training Object Detection Models Using Adversarial Examples
Zhang et al. A cognitively inspired system architecture for the Mengshi cognitive vehicle
CN108877267A (en) Intersection detection method based on a vehicle-mounted monocular camera
Min et al. SAE Level 3 autonomous driving technology of the ETRI
US20230260266A1 (en) Camera-radar data fusion for efficient object detection
US20220171066A1 (en) Systems and methods for jointly predicting trajectories of multiple moving objects
EP4258226A1 (en) End-to-end object tracking using neural networks with attention
CN115662166A (en) Automatic driving data processing method and automatic driving traffic system
Gao et al. Discretionary cut-in driving behavior risk assessment based on naturalistic driving data
CN115951326A (en) Object detection method, system and storage medium
WO2023158642A1 (en) Camera-radar data fusion for efficient object detection
WO2023158706A1 (en) End-to-end processing in automated driving systems
Lai et al. Sensor fusion of camera and MMW radar based on machine learning for vehicles
Hetzel et al. The IMPTC Dataset: An Infrastructural Multi-Person Trajectory and Context Dataset
US20240096105A1 (en) Object identification in bird's-eye view reference frame with explicit depth estimation co-training
EP4361961A1 (en) Method of determining information related to road user

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant