CN109948661A - 3D vehicle detection method based on multi-sensor fusion - Google Patents

3D vehicle detection method based on multi-sensor fusion

Info

Publication number
CN109948661A
CN109948661A
Authority
CN
China
Prior art keywords
vehicle
cloud
frame
point cloud
laser radar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910144580.4A
Other languages
Chinese (zh)
Other versions
CN109948661B (en)
Inventor
蔡英凤
张田田
王海
李祎承
刘擎超
陈小波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu University
Original Assignee
Jiangsu University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu University filed Critical Jiangsu University
Priority to CN201910144580.4A priority Critical patent/CN109948661B/en
Publication of CN109948661A publication Critical patent/CN109948661A/en
Application granted granted Critical
Publication of CN109948661B publication Critical patent/CN109948661B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Abstract

The invention discloses a 3D vehicle detection method based on multi-sensor fusion, comprising: Step 1, obtaining the semantic information of the scene (an RGB image) with a camera mounted on the vehicle, and scanning the vehicle's surroundings with a roof-mounted lidar to obtain accurate depth information of the environment (a lidar point cloud); Step 2, preprocessing the lidar point cloud: according to the height of a vehicle, points with Z in [0, 2.5] m are kept and the cloud is divided into 5 equal height slices along the Z axis; Step 3, generating 3D vehicle regions of interest on the lidar point cloud; Step 4, extracting features from the processed point cloud and the RGB image respectively to generate the corresponding feature maps; Step 5, mapping the 3D vehicle regions of interest onto the feature maps of the point cloud and the RGB image respectively; Step 6, fusing the mapped feature-map regions to finally realize the 3D localization and detection of vehicle targets.

Description

3D vehicle detection method based on multi-sensor fusion
Technical field
The invention belongs to the field of automated driving, and in particular relates to a 3D vehicle detection method based on multi-sensor fusion.
Background technique
An intelligent vehicle is a complex system comprising perception, decision-making and control technologies. Environment perception provides the basic information for path planning and decision control, and vehicle detection is a critical task in the environment perception system of an autonomous vehicle. The mainstream obstacle-detection sensors are cameras and lidar. Vision-based vehicle detection has already achieved good results: cameras are inexpensive and capture the texture and color of targets, so they are widely used in intelligent driving. However, cameras are sensitive to illumination and shadowed regions and cannot provide accurate and sufficient location information, which often leads to poor real-time performance or poor robustness. A lidar can measure target range and three-dimensional position, has a long detection range and is unaffected by illumination, but it cannot determine the texture and color of a target, so a single sensor cannot satisfy the demands of autonomous driving. Fusing lidar and camera data for vehicle detection and tracking therefore reduces the dependence of vehicle detection on any single sensor and achieves a higher 3D vehicle detection rate.
Summary of the invention
The purpose of the invention is to better detect surrounding vehicles and thereby provide basic information for intelligent-vehicle path planning and decision making. To this end, a three-dimensional (3D) vehicle detection method based on sensor fusion is proposed that achieves a higher 3D vehicle detection rate.
The technical solution adopted by the proposed 3D vehicle detection method based on multi-sensor fusion comprises the following steps:
Step 1, obtain the semantic information of the scene (an RGB image) with a camera mounted on the vehicle, and scan the vehicle's surroundings with a roof-mounted lidar to obtain accurate depth information of the environment (a lidar point cloud);
Step 2, preprocess the lidar point cloud: establish a coordinate system whose origin is the point where a vertical line through the lidar meets the ground, whose positive X axis points along the vehicle heading, whose positive Y axis points to the driver's left, and whose positive Z axis is perpendicular to the ground pointing upward; according to the height of a vehicle, keep points with Z in [0, 2.5] m and divide the cloud into 5 equal height slices along the Z axis;
Step 3, generate 3D vehicle regions of interest on the lidar point cloud;
Step 4, extract features from the processed point cloud and the RGB image respectively to generate the corresponding feature maps;
Step 5, map the above 3D vehicle regions of interest onto the feature maps of the point cloud and the RGB image respectively;
Step 6, fuse the feature-map regions mapped in Step 5 to finally realize the 3D localization and detection of vehicle targets.
Further, the preprocessing of Step 2 includes the processing method for the point-cloud bird's-eye view (BEV):
The BEV of the point cloud is obtained by projecting the point-cloud data onto a 2D grid on the ground plane (Z = 0). To obtain more detailed height information, with the lidar position as center, the BEV covers [-40, 40] m left-right and [0, 70] m forward. According to the actual height of a vehicle, points with Z in [0, 2.5] m are kept and the cloud is divided into 5 equal height slices along the Z axis. Each slice is projected onto the ground grid (Z = 0), and the height feature of each slice is the maximum height of the point-cloud data projected into the corresponding grid cell. The point-cloud density M is the number of points in each cell, and the value of each cell is normalized as a function of N, the number of point-cloud points in the grid cell.
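For concreteness, the following is a minimal NumPy sketch of the BEV construction just described. The 0.1 m grid resolution and the min(1, log(N + 1)/log(64)) density normalization are assumptions borrowed from comparable BEV detectors; the patent states only that the density is normalized.

```python
import numpy as np

def make_bev_maps(points, res=0.1):
    """points: (N, 3) array of (x, y, z) in the lidar ground frame.
    Returns an (H, W, 6) map: 5 per-slice max-height channels + 1 density channel."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    keep = (x >= 0) & (x < 70) & (y >= -40) & (y < 40) & (z >= 0) & (z < 2.5)
    x, y, z = x[keep], y[keep], z[keep]

    rows = (x / res).astype(int)             # forward axis -> rows
    cols = ((y + 40) / res).astype(int)      # left-right axis -> columns
    H, W = int(70 / res), int(80 / res)
    bev = np.zeros((H, W, 6), dtype=np.float32)

    # Height channels: for each 0.5 m slice, keep the max point height per cell.
    slice_idx = (z / 0.5).astype(int)        # 5 slices over [0, 2.5] m
    for s in range(5):
        m = slice_idx == s
        np.maximum.at(bev[:, :, s], (rows[m], cols[m]), z[m])

    # Density channel: normalized point count per cell (assumed convention).
    counts = np.zeros((H, W), dtype=np.float32)
    np.add.at(counts, (rows, cols), 1.0)
    bev[:, :, 5] = np.minimum(1.0, np.log(counts + 1) / np.log(64))
    return bev
```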
Further, the specific steps of Step 3 are as follows:
3D vehicle regions of interest are generated on the point cloud. For target classification and localization, the bird's-eye view (BEV) is taken as input and a series of 3D candidate boxes is generated; to reduce computation, empty candidate boxes are removed. Each remaining box is assigned a binary label, a positive label denoting a target vehicle and a negative label denoting background. By computing the IoU overlap between each anchor box and the ground-truth boxes, a positive label is assigned to two classes of anchors:
1) the anchor having the highest IoU overlap with a given ground-truth box, when that IoU is below 0.5;
2) any anchor whose IoU overlap with some ground-truth box exceeds 0.5.
Note that one ground-truth box may assign positive labels to several anchors. A negative (background) label is assigned to anchors whose IoU with every ground-truth box is below 0.3; anchors that are neither positive nor negative contribute nothing to the training objective and are therefore ignored in subsequent processing. After the positively labeled anchors are obtained, a preliminary 3D regression is performed on them. Each 3D prediction box is denoted (x, y, z, h, w, d), where (x, y, z) is the box center and (h, w, d) is the box size. In the lidar coordinate system, the differences in center and size between the foreground target-region bounding boxes and the ground-truth boxes, (Δx, Δy, Δz, Δh, Δw, Δd), are computed so that the ROIs subsequently mapped onto the feature maps can be refined and coarsely localized. A 3D anchor is denoted (x_a, y_a, z_a, h_a, w_a, d_a) and a 3D ground-truth box (x*, y*, z*, h*, w*, d*). Let t_i = (t_x, t_y, t_z, t_h, t_w, t_d) denote the offsets of the prediction box relative to the 3D anchor, and t_i* = (t_x*, t_y*, t_z*, t_h*, t_w*, t_d*) the offsets of the ground-truth box relative to the anchor. Then:
t_x = (x - x_a)/h_a    t_y = (y - y_a)/w_a    t_z = (z - z_a)/d_a
t_h = log(h/h_a)    t_w = log(w/w_a)    t_d = log(d/d_a)
and analogously t_x* = (x* - x_a)/h_a, ..., t_d* = log(d*/d_a) for the ground-truth offsets.
For the 3D box regression, the smooth L1 loss is used: L_reg = Σ_i smooth_L1(t_i - t_i*), where smooth_L1(u) = 0.5u^2 for |u| < 1 and |u| - 0.5 otherwise.
The target classification loss is computed with the cross-entropy function: L_cls = -(1/n) Σ_i [y_i log(p_i) + (1 - y_i) log(1 - p_i)], where n is the number of bounding boxes in the target region, y_i is the label of box i and p_i its predicted vehicle probability.
The 3D box regression is thus executed by computing the differences in centroid and size between the 3D anchors and the 3D ground-truth boxes, and the final output is the 3D regions of interest in the point cloud.
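As an illustration, here is a sketch of the anchor labeling rules (positive above 0.5 IoU, negative below 0.3) and of the offset encoding defined above. It assumes (h, w, d) are the box extents along the x, y and z axes, and it uses a simple axis-aligned IoU in the BEV plane rather than whatever overlap measure the patent's implementation actually uses.

```python
import numpy as np

def bev_iou(a, b):
    """Axis-aligned BEV IoU of boxes given as (x, y, z, h, w, d)."""
    iw = max(0.0, min(a[0] + a[3] / 2, b[0] + b[3] / 2) -
                  max(a[0] - a[3] / 2, b[0] - b[3] / 2))
    il = max(0.0, min(a[1] + a[4] / 2, b[1] + b[4] / 2) -
                  max(a[1] - a[4] / 2, b[1] - b[4] / 2))
    inter = iw * il
    union = a[3] * a[4] + b[3] * b[4] - inter
    return inter / union if union > 0 else 0.0

def label_anchors(anchors, gt_boxes):
    """+1 = vehicle, -1 = background, 0 = ignored during training."""
    labels = np.zeros(len(anchors), dtype=int)
    for i, anc in enumerate(anchors):
        best = max(bev_iou(anc, gt) for gt in gt_boxes)
        if best > 0.5:
            labels[i] = 1
        elif best < 0.3:
            labels[i] = -1
    # Rule 1): the highest-IoU anchor of each ground-truth box is positive.
    for gt in gt_boxes:
        labels[int(np.argmax([bev_iou(a, gt) for a in anchors]))] = 1
    return labels

def encode_offsets(anchor, gt):
    """The (tx, ty, tz, th, tw, td) parameterization defined above."""
    xa, ya, za, ha, wa, da = anchor
    x, y, z, h, w, d = gt
    return np.array([(x - xa) / ha, (y - ya) / wa, (z - za) / da,
                     np.log(h / ha), np.log(w / wa), np.log(d / da)])

def smooth_l1(x):
    """Standard smooth L1, applied elementwise to t - t*."""
    x = np.abs(x)
    return np.where(x < 1.0, 0.5 * x ** 2, x - 0.5)
```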
Further, the detailed process of Step 4 includes:
Step 4.1, let the size of the input RGB image or BEV map be H × W × D; the down-sampling stage uses the first three convolutional blocks of a VGG-16 network, so the resolution of the output feature map is 8 times smaller than that of its input, i.e. the output feature-map size at this stage is H/8 × W/8.
Step 4.2, up-sample the high-level, low-resolution feature maps of the semantic information (both the lidar point cloud and the RGB image) by 2x so that each up-sampled map matches the size of the corresponding down-sampling-stage map, and fuse the feature maps with 3×3 convolutions, obtaining a full-resolution feature map at the last layer of the feature-extraction framework.
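The following PyTorch sketch shows one way to realize Steps 4.1 and 4.2: three VGG-style down-sampling blocks (8x reduction), 1×1 convolutions to align channels, then repeated 2x up-sampling fused by 3×3 convolutions back to full resolution. The channel widths are assumptions; the patent fixes only the VGG-16 basis, the 8x factor and the 3×3 fusion.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def vgg_block(cin, cout, n_convs):
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(cin if i == 0 else cout, cout, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2))                 # halves the resolution
    return nn.Sequential(*layers)

class FusionExtractor(nn.Module):
    """Three VGG-16-style conv blocks down, 2x up-sampling + 3x3 fusion back up."""
    def __init__(self, in_ch):
        super().__init__()
        self.down1 = vgg_block(in_ch, 32, 2)       # H/2
        self.down2 = vgg_block(32, 64, 2)          # H/4
        self.down3 = vgg_block(64, 128, 3)         # H/8
        self.red3 = nn.Conv2d(128, 64, 1)          # 1x1 channel alignment
        self.lats = nn.ModuleList([nn.Conv2d(c, 64, 1) for c in (64, 32, in_ch)])
        self.fuse = nn.ModuleList([nn.Conv2d(64, 64, 3, padding=1) for _ in range(3)])

    def forward(self, x):
        c1 = self.down1(x)
        c2 = self.down2(c1)
        p = self.red3(self.down3(c2))              # 8x smaller than the input
        for lat, fuse, skip in zip(self.lats, self.fuse, (c2, c1, x)):
            p = F.interpolate(p, scale_factor=2, mode="bilinear",
                              align_corners=False)
            p = fuse(p + lat(skip))                # 3x3 convolution fusion
        return p                                   # full-resolution feature map
```

Running `FusionExtractor(6)(torch.zeros(1, 6, 64, 64)).shape` returns `(1, 64, 64, 64)`: the output recovers the input resolution, matching Step 4.2.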
Further, the specific method of Steps 5 and 6 is as follows:
The 3D regions of interest obtained in Step 3 on the BEV of the lidar point cloud are mapped, according to the coordinate relationship between the lidar point cloud and the RGB image, onto the feature maps of the point cloud and the RGB image respectively, finally yielding the coordinates of the corresponding boxes on each feature map. Because the boxes obtained on the feature maps differ in size, they cannot be fused directly; the cropped feature maps are therefore resized to a fixed 3×3 size, and the feature maps mapped from the BEV and the RGB image are then fused by pixel-wise averaging, as sketched below.
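A sketch of this crop-resize-average step, assuming the 3D ROIs have already been projected to 2D boxes in each view's feature-map coordinates (the projection itself depends on the lidar-camera calibration, which the patent does not spell out); `crop_roi` and `fuse_rois` are illustrative names, not the patent's.

```python
import torch
import torch.nn.functional as F

def crop_roi(feat, box, out_size=3):
    """feat: (C, H, W); box: (x0, y0, x1, y1) in feature-map pixels.
    Crops the box and resizes it to a fixed out_size x out_size grid."""
    x0, y0 = max(int(box[0]), 0), max(int(box[1]), 0)
    x1, y1 = max(int(box[2]), x0 + 1), max(int(box[3]), y0 + 1)
    roi = feat[:, y0:y1, x0:x1]
    return F.interpolate(roi.unsqueeze(0), size=(out_size, out_size),
                         mode="bilinear", align_corners=False).squeeze(0)

def fuse_rois(feat_bev, feat_rgb, boxes_bev, boxes_rgb):
    """Pixel-wise average fusion of the BEV and RGB crops of each 3D ROI."""
    fused = [(crop_roi(feat_bev, b) + crop_roi(feat_rgb, r)) / 2.0
             for b, r in zip(boxes_bev, boxes_rgb)]
    return torch.stack(fused)                      # (num_rois, C, 3, 3)
```

Pixel-wise averaging requires the two feature maps to share the same channel count, which a shared extractor such as the one above guarantees.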
The invention has the following advantages:
1. The invention perceives the surrounding environment with both lidar and camera, making full use of the accurate depth information acquired by the lidar and the detailed semantic information retained by the camera, which greatly improves the accuracy of 3D detection of nearby vehicles.
2. Traditional single-sensor vehicle detection methods acquire limited information and are constrained by the sensor's own performance; the invention compensates for the shortcomings of a single sensor in vehicle detection and improves the accuracy of nearby-vehicle detection.
3. The invention first extracts the vehicle targets of interest and only then applies pixel-wise average fusion to this part of the feature maps, which greatly reduces computation and improves the real-time performance of vehicle detection.
Brief description of the drawings
Fig. 1 is the flow chart of the proposed vehicle detection method based on multi-sensor fusion;
Fig. 2 shows the point-cloud bird's-eye view (BEV) obtained by dividing the cloud into 5 equal height slices along the Z axis;
(a) the point-cloud map for Z in [0, 0.5]; (b) for Z in [0.5, 1.0]; (c) for Z in [1.0, 1.5]; (d) for Z in [1.5, 2.0]; (e) for Z in [2.0, 2.5];
Fig. 3 is the feature-extraction framework for the point cloud and the RGB image.
Specific embodiment
The present invention will be further explained below with reference to the accompanying drawings.
Vehicle detection is a critical part of the environment perception system of an autonomous vehicle. The invention proposes a 3D vehicle detection method based on multi-sensor fusion; the detection flow chart is shown in Fig. 1, and the method proceeds as follows:
(1) Point-cloud data is acquired by the lidar and an RGB image by the camera, and the collected point cloud is preprocessed. The bird's-eye view (BEV) of the point cloud serves as the point-cloud input; it is obtained by projecting the point-cloud data onto a 2D grid on the ground plane (Z = 0). To obtain more detailed height information, with the lidar position as center, the BEV covers [-40, 40] m left-right and [0, 70] m forward. According to the actual height of a vehicle, points with Z in [0, 2.5] m are kept and the cloud is divided into 5 equal height slices along the Z axis. Each slice is projected onto the ground grid (Z = 0), and the height feature of each slice is the maximum height of the points projected into each grid cell. The point-cloud density M is the number of points per cell, and the value of each cell is normalized as a function of N, the number of point-cloud points in the grid cell.
(2) 3D vehicle regions of interest are generated on the point cloud. For target classification and localization, the BEV is taken as input and a series of 3D candidate boxes is generated; to reduce computation, empty candidate boxes are removed, and each remaining box is assigned a binary label marking it as target vehicle or background. Labels are assigned by the IoU overlap between each anchor box and the ground-truth boxes; a positive label is assigned to two classes of anchors:
1. the anchor having the highest IoU overlap with a given ground-truth box, when that IoU is below 0.5;
2. any anchor whose IoU overlap with some ground-truth box exceeds 0.5.
Note that one ground-truth box may assign positive labels to several anchors. A negative (background) label is assigned to anchors whose IoU with every ground-truth box is below 0.3; anchors that are neither positive nor negative contribute nothing to the training objective and are ignored in subsequent processing. After the positively labeled anchors are obtained, a preliminary 3D regression is performed on them. Each 3D prediction box is denoted (x, y, z, h, w, d), where (x, y, z) is the box center and (h, w, d) is the box size. In the lidar coordinate system, the differences in center and size between the foreground ROIs and the ground-truth boxes, (Δx, Δy, Δz, Δh, Δw, Δd), are computed so that the ROIs subsequently mapped onto the feature maps can be refined and coarsely localized. A 3D anchor is denoted (x_a, y_a, z_a, h_a, w_a, d_a) and a 3D ground-truth box (x*, y*, z*, h*, w*, d*). Let t_i = (t_x, t_y, t_z, t_h, t_w, t_d) denote the offsets of the prediction box relative to the 3D anchor, and t_i* the offsets of the ground-truth box relative to the anchor. Then:
t_x = (x - x_a)/h_a    t_y = (y - y_a)/w_a    t_z = (z - z_a)/d_a
t_h = log(h/h_a)    t_w = log(w/w_a)    t_d = log(d/d_a)
and analogously for the starred ground-truth offsets.
For the 3D boxes, regression uses the smooth L1 loss, and the target classification loss is computed with the cross-entropy function, both as defined above. The 3D box regression is executed by computing the differences in centroid and size between the 3D anchors and the 3D ground-truth boxes, and the final output is the 3D regions of interest in the point cloud.
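For illustration, here is how the hedged sketches from Step 3 above (label_anchors, encode_offsets, smooth_l1; all hypothetical names) compose on a single hand-made anchor/ground-truth pair:

```python
import numpy as np

# One hypothetical anchor and ground-truth box, (x, y, z, h, w, d) in meters.
anchor = np.array([10.0, 2.0, 0.9, 3.9, 1.6, 1.5])
gt     = np.array([10.4, 2.1, 0.8, 4.2, 1.7, 1.4])

labels = label_anchors([anchor], [gt])        # -> [1]: IoU > 0.5, a vehicle
t = encode_offsets(anchor, gt)                # regression target t*
loss = smooth_l1(t - np.zeros_like(t)).sum()  # loss for an all-zero prediction
print(labels, t.round(3), round(float(loss), 4))
```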
(3) The BEV of the point cloud, constructed as described in (1) (ground-plane projection, 5 height slices and the normalized density channel), serves as the point-cloud input to feature extraction.
(4) As shown in Fig. 3, to make full use of the information in the original lowest-level feature maps, the high-level features are up-sampled and fused with the low-level information by 3×3 convolution operations, yielding rich feature information at high resolution. The feature extractor is based on the VGG-16 architecture. Let the size of the input RGB image or BEV map be H × W × D; the down-sampling stage uses the first three convolutional blocks of VGG-16, so the resolution of the output feature map is 8 times smaller than that of the input, i.e. the feature-map size at this stage is H/8 × W/8. The down-sampled feature maps are passed through 1×1 convolutions so that their channel counts match those of the corresponding up-sampling-stage maps, allowing the 3×3 convolution fusion to be performed and a full-resolution feature map to be obtained at the last layer of the feature-extraction framework.
(5) The 3D regions of interest from (2) are mapped onto the feature maps of the point cloud and the RGB image respectively, and the coordinates of the corresponding boxes on the feature maps are obtained from the coordinate transformation between the BEV and the RGB image. Because the mapped boxes differ in size on the feature maps, they cannot be fused directly; the cropped feature maps are therefore resized to a fixed 3×3 size, and the feature maps mapped from the BEV and the RGB image are then fused, finally determining the position and size of nearby vehicles.
The detailed description above lists only feasible embodiments of the invention and is not intended to limit its scope of protection; all equivalent implementations or modifications that do not depart from the technical spirit of the invention shall fall within the scope of protection of the invention.

Claims (8)

1. A 3D vehicle detection method based on multi-sensor fusion, characterized by comprising the following steps:
Step 1, obtaining an RGB image from the vehicle and obtaining lidar point-cloud information of the vehicle's surroundings;
Step 2, preprocessing the lidar point-cloud information: according to the height of a vehicle, points with Z in [0, 2.5] m are kept and the lidar point cloud is divided into 5 equal height slices along the Z axis;
Step 3, generating 3D vehicle regions of interest on the lidar point cloud;
Step 4, extracting features from the processed point cloud and the RGB image respectively to generate the corresponding feature maps;
Step 5, mapping the above 3D vehicle regions of interest onto the feature maps of the point cloud and the RGB image respectively;
Step 6, fusing the feature-map regions mapped in Step 5 to finally realize the 3D localization and detection of vehicle targets.
2. The 3D vehicle detection method based on multi-sensor fusion according to claim 1, characterized in that in Step 1 the RGB image is obtained by a camera mounted on the vehicle, and the lidar point cloud is obtained by scanning the surrounding environment with a roof-mounted lidar.
3. The 3D vehicle detection method based on multi-sensor fusion according to claim 1, characterized in that the preprocessing in Step 2 includes a processing method for the point-cloud bird's-eye view, the bird's-eye view being obtained by projecting the point-cloud data onto a 2D grid on the ground plane (Z = 0).
4. The 3D vehicle detection method based on multi-sensor fusion according to claim 3, characterized in that the processing method for the point-cloud bird's-eye view is:
with the lidar position as center, the bird's-eye view covers [-40, 40] m left-right and [0, 70] m forward; according to the actual height of a vehicle, points with Z in [0, 2.5] m are kept and the cloud is divided into 5 equal height slices along the Z axis; each slice is projected onto the ground grid (Z = 0), and the height feature of each slice is the maximum height of the point-cloud data projected into the corresponding grid cell; the point-cloud density M is the number of points in each grid cell, and the value of each cell is normalized as a function of N,
where N is the number of point-cloud points in the grid cell.
5. The 3D vehicle detection method based on multi-sensor fusion according to claim 1, characterized in that the specific steps of generating 3D vehicle regions of interest on the lidar point cloud in Step 3 are:
taking the point-cloud bird's-eye view as input, a series of 3D candidate boxes is generated and empty candidate boxes are removed; each remaining candidate box is assigned a binary label, a positive label denoting a target vehicle and a negative label denoting background; by computing the IoU overlap between each anchor box and the ground-truth boxes, a positive label is assigned to the following two classes of anchors:
1) the anchor having the highest IoU overlap with a given ground-truth box, when that IoU is below 0.5;
2) any anchor whose IoU overlap with some ground-truth box exceeds 0.5;
a negative label is assigned to anchors whose IoU with every ground-truth box is below 0.3; anchors that are neither positive nor negative contribute nothing to the training objective and are ignored in subsequent processing;
after the positively labeled anchors are obtained, a preliminary 3D regression is performed on them; each 3D prediction box is denoted (x, y, z, h, w, d), where (x, y, z) is the box center and (h, w, d) is the box size; in the lidar coordinate system, the differences in center and size between the foreground ROIs and the ground-truth boxes, (Δx, Δy, Δz, Δh, Δw, Δd), are computed so that the ROIs subsequently mapped onto the feature maps can be refined and coarsely localized; a 3D anchor is denoted (x_a, y_a, z_a, h_a, w_a, d_a) and a 3D ground-truth box (x*, y*, z*, h*, w*, d*); let t_i = (t_x, t_y, t_z, t_h, t_w, t_d) denote the offsets of the prediction box relative to the 3D anchor and t_i* = (t_x*, t_y*, t_z*, t_h*, t_w*, t_d*) the offsets of the ground-truth box relative to the anchor; then:
t_x = (x - x_a)/h_a    t_y = (y - y_a)/w_a    t_z = (z - z_a)/d_a
t_h = log(h/h_a)    t_w = log(w/w_a)    t_d = log(d/d_a)
and analogously for the starred ground-truth offsets;
for the 3D boxes, regression uses the smooth L1 loss: L_reg = Σ_i smooth_L1(t_i - t_i*);
the target classification loss is computed with the cross-entropy function: L_cls = -(1/n) Σ_i [y_i log(p_i) + (1 - y_i) log(1 - p_i)],
where n is the number of bounding boxes in the target region, y_i is the label of box i and p_i its predicted probability;
the 3D box regression is executed by computing the differences in centroid and size between the 3D anchors and the 3D ground-truth boxes, and the final output is the 3D regions of interest in the point cloud.
6. The 3D vehicle detection method based on multi-sensor fusion according to claim 1, characterized in that the specific steps of Step 4 include:
Step 4.1, let the size of the input RGB image or point-cloud bird's-eye view be H × W × D; the down-sampling stage uses the first three convolutional blocks of a VGG-16 network, so the resolution of the output feature map is 8 times smaller than that of its input, i.e. the output feature-map size at this stage is H/8 × W/8;
Step 4.2, the high-level, low-resolution feature maps of the semantic information are up-sampled by 2x, ensuring that each up-sampled map matches the size of the corresponding down-sampling-stage map, and the feature maps are fused with 3×3 convolutions, obtaining a full-resolution feature map at the last layer of the feature-extraction framework; the semantic information comprises the lidar point cloud and the RGB image.
7. The 3D vehicle detection method based on multi-sensor fusion according to claim 1, characterized by the specific mapping method of Step 5: according to the coordinate correspondence between the lidar point cloud and the RGB image, the 3D regions of interest obtained on the point cloud are mapped onto the feature maps of the point cloud and the RGB image respectively.
8. The 3D vehicle detection method based on multi-sensor fusion according to claim 7, characterized by the specific fusion method of Step 6: the feature maps mapped from the lidar point cloud and the RGB image in Step 5 are fused by pixel-wise averaging.
CN201910144580.4A 2019-02-27 2019-02-27 3D vehicle detection method based on multi-sensor fusion Active CN109948661B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910144580.4A CN109948661B (en) 2019-02-27 2019-02-27 3D vehicle detection method based on multi-sensor fusion


Publications (2)

Publication Number Publication Date
CN109948661A true CN109948661A (en) 2019-06-28
CN109948661B CN109948661B (en) 2023-04-07

Family

ID=67006948

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910144580.4A Active CN109948661B (en) 2019-02-27 2019-02-27 3D vehicle detection method based on multi-sensor fusion

Country Status (1)

Country Link
CN (1) CN109948661B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107533630A (en) * 2015-01-20 2018-01-02 索菲斯研究股份有限公司 For the real time machine vision of remote sense and wagon control and put cloud analysis
CN109100741A (en) * 2018-06-11 2018-12-28 长安大学 A kind of object detection method based on 3D laser radar and image data

Cited By (53)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11165462B2 (en) * 2018-11-07 2021-11-02 Samsung Electronics Co., Ltd. Motion assisted leakage removal for radar applications
CN112215053A (en) * 2019-07-12 2021-01-12 通用汽车环球科技运作有限责任公司 Multi-sensor multi-object tracking
CN112215053B (en) * 2019-07-12 2023-09-19 通用汽车环球科技运作有限责任公司 Multi-sensor multi-object tracking
CN110363158B (en) * 2019-07-17 2021-05-25 浙江大学 Millimeter wave radar and visual cooperative target detection and identification method based on neural network
CN110363158A (en) * 2019-07-17 2019-10-22 浙江大学 A kind of millimetre-wave radar neural network based cooperates with object detection and recognition method with vision
CN110458112A (en) * 2019-08-14 2019-11-15 上海眼控科技股份有限公司 Vehicle checking method, device, computer equipment and readable storage medium storing program for executing
CN110458112B (en) * 2019-08-14 2020-11-20 上海眼控科技股份有限公司 Vehicle detection method and device, computer equipment and readable storage medium
CN110543858A (en) * 2019-09-05 2019-12-06 西北工业大学 Multi-mode self-adaptive fusion three-dimensional target detection method
CN110738121A (en) * 2019-09-17 2020-01-31 北京科技大学 front vehicle detection method and detection system
CN110827202A (en) * 2019-11-07 2020-02-21 上海眼控科技股份有限公司 Target detection method, target detection device, computer equipment and storage medium
CN110991534A (en) * 2019-12-03 2020-04-10 上海眼控科技股份有限公司 Point cloud data processing method, device, equipment and computer readable storage medium
CN110929692B (en) * 2019-12-11 2022-05-24 中国科学院长春光学精密机械与物理研究所 Three-dimensional target detection method and device based on multi-sensor information fusion
CN110929692A (en) * 2019-12-11 2020-03-27 中国科学院长春光学精密机械与物理研究所 Three-dimensional target detection method and device based on multi-sensor information fusion
CN111144304A (en) * 2019-12-26 2020-05-12 上海眼控科技股份有限公司 Vehicle target detection model generation method, vehicle target detection method and device
CN113678136A (en) * 2019-12-30 2021-11-19 深圳元戎启行科技有限公司 Obstacle detection method and device based on unmanned technology and computer equipment
CN111209825A (en) * 2019-12-31 2020-05-29 武汉中海庭数据技术有限公司 Method and device for dynamic target 3D detection
CN111209825B (en) * 2019-12-31 2022-07-01 武汉中海庭数据技术有限公司 Method and device for dynamic target 3D detection
CN113379827A (en) * 2020-02-25 2021-09-10 斑马技术公司 Vehicle segmentation for data capture system
CN111291714A (en) * 2020-02-27 2020-06-16 同济大学 Vehicle detection method based on monocular vision and laser radar fusion
CN113378605B (en) * 2020-03-10 2024-04-09 北京京东乾石科技有限公司 Multi-source information fusion method and device, electronic equipment and storage medium
CN113378605A (en) * 2020-03-10 2021-09-10 北京京东乾石科技有限公司 Multi-source information fusion method and device, electronic equipment and storage medium
CN111239706A (en) * 2020-03-30 2020-06-05 许昌泛网信通科技有限公司 Laser radar data processing method
CN111239706B (en) * 2020-03-30 2021-10-01 许昌泛网信通科技有限公司 Laser radar data processing method
CN111476822A (en) * 2020-04-08 2020-07-31 浙江大学 Laser radar target detection and motion tracking method based on scene flow
CN111583663A (en) * 2020-04-26 2020-08-25 宁波吉利汽车研究开发有限公司 Monocular perception correction method and device based on sparse point cloud and storage medium
CN111583663B (en) * 2020-04-26 2022-07-12 宁波吉利汽车研究开发有限公司 Monocular perception correction method and device based on sparse point cloud and storage medium
CN111352112A (en) * 2020-05-08 2020-06-30 泉州装备制造研究所 Target detection method based on vision, laser radar and millimeter wave radar
CN113674346A (en) * 2020-05-14 2021-11-19 北京京东乾石科技有限公司 Image detection method, image detection device, electronic equipment and computer-readable storage medium
CN113674346B (en) * 2020-05-14 2024-04-16 北京京东乾石科技有限公司 Image detection method, image detection device, electronic equipment and computer readable storage medium
CN113705279A (en) * 2020-05-21 2021-11-26 阿波罗智联(北京)科技有限公司 Method and device for identifying position of target object
CN113763465A (en) * 2020-06-02 2021-12-07 中移(成都)信息通信科技有限公司 Garbage determination system, model training method, determination method and determination device
CN111833358A (en) * 2020-06-26 2020-10-27 中国人民解放军32802部队 Semantic segmentation method and system based on 3D-YOLO
CN112001226A (en) * 2020-07-07 2020-11-27 中科曙光(南京)计算技术有限公司 Unmanned 3D target detection method and device and storage medium
CN113761999A (en) * 2020-09-07 2021-12-07 北京京东乾石科技有限公司 Target detection method and device, electronic equipment and storage medium
CN113761999B (en) * 2020-09-07 2024-03-05 北京京东乾石科技有限公司 Target detection method and device, electronic equipment and storage medium
CN112183393A (en) * 2020-09-30 2021-01-05 深兰人工智能(深圳)有限公司 Laser radar point cloud target detection method, system and device
CN112287859A (en) * 2020-11-03 2021-01-29 北京京东乾石科技有限公司 Object recognition method, device and system, computer readable storage medium
CN112184539A (en) * 2020-11-27 2021-01-05 深兰人工智能(深圳)有限公司 Point cloud data processing method and device
WO2022126427A1 (en) * 2020-12-16 2022-06-23 深圳市大疆创新科技有限公司 Point cloud processing method, point cloud processing apparatus, mobile platform, and computer storage medium
CN112711034A (en) * 2020-12-22 2021-04-27 中国第一汽车股份有限公司 Object detection method, device and equipment
US11482007B2 (en) 2021-02-10 2022-10-25 Ford Global Technologies, Llc Event-based vehicle pose estimation using monochromatic imaging
CN112990229A (en) * 2021-03-11 2021-06-18 上海交通大学 Multi-modal 3D target detection method, system, terminal and medium
CN113011317A (en) * 2021-03-16 2021-06-22 青岛科技大学 Three-dimensional target detection method and detection device
CN113128348A (en) * 2021-03-25 2021-07-16 西安电子科技大学 Laser radar target detection method and system fusing semantic information
CN113128348B (en) * 2021-03-25 2023-11-24 西安电子科技大学 Laser radar target detection method and system integrating semantic information
CN113222111A (en) * 2021-04-01 2021-08-06 上海智能网联汽车技术中心有限公司 Automatic driving 4D perception method, system and medium suitable for all-weather environment
CN113192091A (en) * 2021-05-11 2021-07-30 紫清智行科技(北京)有限公司 Long-distance target sensing method based on laser radar and camera fusion
CN113408454B (en) * 2021-06-29 2024-02-06 上海高德威智能交通系统有限公司 Traffic target detection method, device, electronic equipment and detection system
CN113408454A (en) * 2021-06-29 2021-09-17 上海高德威智能交通系统有限公司 Traffic target detection method and device, electronic equipment and detection system
CN113655497A (en) * 2021-08-30 2021-11-16 杭州视光半导体科技有限公司 Region-of-interest scanning method based on FMCW solid state scanning laser radar
CN113655497B (en) * 2021-08-30 2023-10-27 杭州视光半导体科技有限公司 Method for scanning region of interest based on FMCW solid-state scanning laser radar
WO2023108931A1 (en) * 2021-12-14 2023-06-22 江苏航天大为科技股份有限公司 Vehicle model determining method based on video-radar fusion perception
CN115909034A (en) * 2022-11-29 2023-04-04 白城师范学院 Point cloud target identification method and device based on scene density perception and storage medium

Also Published As

Publication number Publication date
CN109948661B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN109948661A 2019-06-28 3D vehicle detection method based on multi-sensor fusion
US11532151B2 (en) Vision-LiDAR fusion method and system based on deep canonical correlation analysis
US10140855B1 (en) Enhanced traffic detection by fusing multiple sensor data
CN105260699B 2018-07-10 Lane line data processing method and device
JP4942509B2 (en) Vehicle position detection method and apparatus
CN108828621A (en) Obstacle detection and road surface partitioning algorithm based on three-dimensional laser radar
US8670612B2 (en) Environment recognition device and environment recognition method
US9070023B2 (en) System and method of alerting a driver that visual perception of pedestrian may be difficult
CN111209825B (en) Method and device for dynamic target 3D detection
CN109931939A Vehicle localization method, device, equipment and computer-readable storage medium
CN109074490A (en) Path detection method, related device and computer readable storage medium
CN102073846A (en) Method for acquiring traffic information based on aerial images
CN105190288A (en) Method and device for determining a visual range in daytime fog
CN109635737A Vehicle navigation and localization assistance method based on visual recognition of pavement marking lines
CN112825192A (en) Object identification system and method based on machine learning
CN110033621A Hazardous vehicle detection method, apparatus and system
CN113359692B (en) Obstacle avoidance method and movable robot
CN117274749B (en) Fused 3D target detection method based on 4D millimeter wave radar and image
CN114898319A (en) Vehicle type recognition method and system based on multi-sensor decision-level information fusion
CN110263679A Fine-grained vehicle detection method based on a deep neural network
US20230266469A1 (en) System and method for detecting road intersection on point cloud height map
Liu et al. Runway detection during approach and landing based on image fusion
CN116486381A (en) Obstacle recognition method and device and vehicle
CN117690133A (en) Point cloud data labeling method and device, electronic equipment, vehicle and medium
CN114549276A Vehicle-mounted chip for intelligent automobiles

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant