CN109100741A - Object detection method based on 3D laser radar and image data - Google Patents

Object detection method based on 3D laser radar and image data

Info

Publication number
CN109100741A
CN109100741A (application CN201810594692.5A)
Authority
CN
China
Prior art keywords
interest
point
area
radar
target
Prior art date
Legal status
Granted
Application number
CN201810594692.5A
Other languages
Chinese (zh)
Other versions
CN109100741B (en)
Inventor
赵祥模
孙朋朋
徐志刚
王润民
李骁驰
闵海根
尚旭明
吴霞
王召月
Current Assignee
Beijing Wanji Technology Co Ltd
Original Assignee
Changan University
Priority date
Filing date
Publication date
Application filed by Changan University filed Critical Changan University
Priority to CN201810594692.5A priority Critical patent/CN109100741B/en
Publication of CN109100741A publication Critical patent/CN109100741A/en
Application granted granted Critical
Publication of CN109100741B publication Critical patent/CN109100741B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01S — RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 — Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 — Lidar systems specially adapted for specific applications
    • G01S17/89 — Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/86 — Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders

Abstract

The invention discloses an object detection method based on a 3D laser radar and image data. The method acquires 3D point cloud data and camera images of the surrounding environment with a 3D laser radar and a camera, and pre-processes the 3D point cloud data; ground points are filtered out of the 3D point cloud, the remaining non-ground points are spatially clustered, and the 3D regions of interest of targets are extracted; the extrinsic parameters between the coordinate systems of the 3D laser radar and the camera are calibrated, the 3D regions of interest of the targets are mapped into the corresponding camera image according to the calibration parameters, and the corresponding 2D regions of interest are extracted from the camera image; a deep convolutional network then performs feature extraction on the 2D regions of interest so as to locate and identify the targets within them. The invention makes full use of the complementarity between 3D laser radar and camera data, improves the accuracy and timeliness of target localization and classification in the scene, and can be used for real-time target detection in driverless vehicles.

Description

Object detection method based on 3D laser radar and image data
Technical field
The present invention relates to multi-sensor information fusion, and in particular to an object detection method that extracts target candidate regions from a 3D laser radar point cloud and classifies them with an image convolutional neural network. As an important component of environment perception for driverless vehicles, it improves the detection accuracy of vehicle targets and is of great significance for ensuring the safe operation of driverless vehicles.
Background technique
Autonomous vehicles can fundamentally improve driving safety and comfort while reducing the environmental impact of automobiles. To develop such vehicles, a perception system is an indispensable component for analyzing and understanding the driving environment, including the position, orientation and class of surrounding obstacles.
The 3D laser radar is one of the most popular sensors for autonomous vehicle perception systems: it offers a wide field of view, long range, accurate depth information, and the ability to operate in darkness. In object detection tasks, because a laser scan directly provides the spatial coordinates of the point cloud, the 3D laser radar has a clear advantage in recovering the pose and shape of detected objects. However, as the distance from the scan center increases, the point cloud distribution becomes sparser and sparser, which makes it difficult for a 3D laser radar alone to determine the specific class of an object.
A camera can provide high-resolution images for precise classification, and deep learning for image recognition has been studied extensively in recent years. These methods usually first generate object candidate regions with a candidate generation method such as sliding windows, selective search, or multi-scale combinatorial grouping, and then apply a convolutional neural network model to the candidate regions for feature extraction, target localization and recognition. Their drawback is that a large number of candidate regions must be generated, so real-time performance is poor. In addition, a camera is affected by illumination changes and lacks the 3D position, orientation and geometry of objects, which makes the extracted object candidate regions inaccurate.
Summary of the invention
The object of the present invention is to provide an object detection method based on a 3D laser radar and image data that makes full use of the advantages of the 3D laser radar, which can directly acquire high-precision target depth and geometric feature parameters, and of the image, which excels at target classification. The two sensors thus complement each other, overcoming the low accuracy and poor robustness of target detection with a single sensor, and ensuring to the greatest extent the safe operation of driverless vehicles under complex conditions.
The technical scheme of the present invention is as follows:
An object detection method based on a 3D laser radar and image data, comprising the following steps:
Step 1: acquire 3D point cloud data and camera images of the surrounding environment with a 3D laser radar and a camera installed on the vehicle, and pre-process the 3D point cloud data;
Step 2: filter out the ground points in the 3D point cloud data, perform spatial clustering on the remaining non-ground points, and extract the 3D regions of interest of the targets;
Step 3: calibrate the extrinsic parameters between the coordinate systems of the 3D laser radar and the camera, map the 3D regions of interest of the targets into the corresponding camera image according to the calibration parameters, and extract the corresponding 2D regions of interest from the camera image;
Step 4: perform feature extraction on the 2D regions of interest with a deep convolutional network, so as to locate and identify the targets within them.
Further, the pre-processing of the 3D point cloud data in step 1 comprises:
Step 1.1: transform the point cloud data acquired by the radar into a rectangular coordinate system.
The point set P_r of the point cloud data is transformed into the rectangular coordinate system by computing the coordinates of each scanning point, giving each scanning point p_i the multi-parameter representation
p_i = (γ_i, θ_i, φ_i, I_i, x_i, y_i, z_i)
where γ_i is the radial distance from the scanning point to the radar, θ_i and φ_i are the horizontal and vertical angles of the scanning point relative to the spherical coordinate system, I_i is the radar reflection intensity, and (x_i, y_i, z_i) are the coordinates of p_i in the rectangular coordinate system.
The rectangular coordinate system takes the geometric center of the radar as the origin, the vertical axis of the radar as the Z axis, and the vehicle forward direction as the Y axis, with the X axis determined from the Z and Y axes by the right-hand rule. The conversion is:
x_i = γ_i · cos φ_i · sin θ_i,  y_i = γ_i · cos φ_i · cos θ_i,  z_i = γ_i · sin φ_i  (1)
Step 1.2: filter the points by region according to the rectangular coordinates: set the region-of-interest boundary and retain only the scanning points within it, i.e.
P_f = { p_i | −X < x_i < X, −Y < y_i < Y, Z_1 < z_i < Z_2 }  (2)
When the coordinates (x_i, y_i, z_i) of a scanning point p_i satisfy the region-of-interest boundary, p_i is added to the point set P_f, yielding the point set P_f of region-of-interest scanning points.
Further, the pre-processing of the 3D point cloud data also comprises:
Step 1.3: noise point removal.
For each scanning point p_i in the point set P_f, search for its neighboring points within a radius R of p_i. If p_i has fewer than M neighbors, mark p_i as a noise point and remove it from P_f. Traversing P_f, all noise points are found and removed, giving the pre-processed point set P.
Further, filtering out the ground points in the 3D point cloud data in step 2 comprises:
Step 2.1: map the point set P into a multi-dimensional matrix whose number of rows equals the number of radar scan lines and whose number of columns equals the number of points in one scan line. The row r and column c to which a scanning point p_i in P is mapped are computed as
r = (φ_i − φ_min) / Δφ  (3)
c = (θ_i + 180°) / Δθ  (4)
where Δθ and Δφ are the horizontal and vertical angular resolutions of the radar, θ_i and φ_i are the horizontal and vertical angles of the scan line containing the point, and φ_min is the vertical angle of the lowest scan line.
Step 2.2: let b_{r,c} denote the element in row r and column c of the matrix, and compute the depth value of the point p_i in b_{r,c} as
p_i^depth = sqrt(x_i^2 + y_i^2)  (5)
where x_i and y_i are the X- and Y-axis coordinates of the scanning point p_i corresponding to b_{r,c} in the rectangular coordinate system.
Step 2.3: compute the probability P(b_{r,c}) that the scanning point corresponding to matrix element b_{r,c} is a ground point; if the probability exceeds a threshold, label the scanning point of b_{r,c} as a ground point.
Step 2.4: traverse every element of the matrix, label all ground points according to the method of step 2.3, and remove them from the point set P; the remaining non-ground points are denoted as the point set P_o.
Further, the calculation of the ground-point probability P(b_{r,c}) in step 2.3 comprises:
Step 2.3.1: compute the measured depth difference M_d(b_{r−1,c}, b_{r,c}) between the points of adjacent elements b_{r−1,c} and b_{r,c} in the same column of the matrix:
M_d(b_{r−1,c}, b_{r,c}) = | p_{r,c}^depth − p_{r−1,c}^depth |  (6)
Step 2.3.2: according to the distribution of the radar point cloud on a plane, estimate the expected depth difference E_d(b_{r−1,c}, b_{r,c}) between the points of the adjacent elements b_{r−1,c} and b_{r,c} in the same column (equation (7)), where h is the mounting height of the radar, Δφ is the vertical angular resolution of the radar, φ_{r−1} and φ_r are the vertical angles of the (r−1)-th and r-th scan lines, and γ_{r−1} is the radial distance of the scanning point of b_{r−1,c} from the radar center.
Step 2.3.3: the probability P(b_{r,c}) that the scanning point p_i of element b_{r,c} is a ground point is then computed from M_d and E_d (equation (8)); when P(b_{r,c}) exceeds the threshold 0.8, the scanning point of b_{r,c} is labeled as a ground point.
Further, performing spatial clustering on the remaining non-ground points and extracting the 3D regions of interest of the targets in step 2 comprises:
Step 2.5.1: create the first cluster C_1 and assign the first scanning point p_1 of the non-ground point set P_o to it.
Step 2.5.2: for every other point p_i ∈ P_o (i ≠ 1), compute the minimum Euclidean distance between p_i and the scanning points of the nearest cluster C_j. If this minimum is below a threshold d, assign p_i to cluster C_j (j ≤ n, where n is the current number of clusters); otherwise create a new cluster C_{n+1} and assign p_i to it. Continue until every scanning point in P_o has been assigned to a cluster.
Step 2.5.3: let Γ denote the set of clusters. For each cluster C_j in Γ, use the spatial distribution of its scanning points to compute the minimum 3D axis-aligned rectangular bounding box of the cluster. If the size of the bounding box exceeds a threshold size, mark the cluster as a pseudo target region; otherwise mark it as a candidate target region.
Step 2.5.4: retain the bounding boxes of all candidate target regions as the extracted 3D regions of interest of the targets.
Further, calibrating the extrinsic parameters between the coordinate systems of the 3D laser radar and the camera, mapping the 3D regions of interest of the targets into the corresponding camera image according to the calibration parameters, and extracting the corresponding 2D regions of interest from the camera image comprises:
Using a checkerboard calibration board as the target, feature points are marked on the board while the point cloud data of the radar and the image data of the camera are acquired simultaneously. The calibration parameters, i.e. the rotation matrix and translation vector between the radar coordinate system and the camera coordinate system, are then computed from the correspondence of the feature points in the radar and camera coordinates.
Finally, the 3D regions of interest of the targets are mapped into the corresponding camera image according to the calibration parameters, and the corresponding 2D axis-aligned rectangular bounding boxes are extracted from the camera image as the 2D regions of interest.
Further, performing feature extraction on the 2D regions of interest with a deep convolutional network in step 4, so as to locate and identify the targets within them, comprises:
The deep convolutional network uses VGG16. The feature maps output by the 'conv3', 'conv4' and 'conv5' convolutional layers of the model are first normalized and then combined, so that the final target features carry different scales. A 1 × 1 convolution is applied to the combined features, and the resulting feature vector is fed to the last two fully connected layers of the network to locate and identify the targets in the 2D regions of interest.
Further, the method also comprises:
Step 5: optimize the results of step 4 with a non-maximum suppression algorithm.
An object detection system based on a 3D laser radar and image data comprises, connected in sequence, a data acquisition and pre-processing module, a 3D region-of-interest module, a 2D region-of-interest module, and a localization and identification module, in which:
the data acquisition and pre-processing module acquires 3D point cloud data and camera images of the surrounding environment with a 3D laser radar and a camera installed on the vehicle, and pre-processes the 3D point cloud data;
the 3D region-of-interest module filters out the ground points in the 3D point cloud data, performs spatial clustering on the remaining non-ground points, and extracts the 3D regions of interest of the targets;
the 2D region-of-interest module calibrates the extrinsic parameters between the coordinate systems of the 3D laser radar and the camera, maps the 3D regions of interest of the targets into the corresponding camera image according to the calibration parameters, and extracts the corresponding 2D regions of interest from the camera image;
the localization and identification module performs feature extraction on the 2D regions of interest with a deep convolutional network, so as to locate and identify the targets within them.
Compared with the prior art, the present invention has the following technical characteristics:
1. The present invention extracts 3D candidate regions of targets from the point cloud data of the 3D laser radar, maps the 3D candidate regions in the point cloud into image space according to the extrinsic calibration between radar and camera, and uses a convolutional neural network for feature extraction, target localization and identification on the candidate regions. The method overcomes the low accuracy, strong dependence on environmental factors and poor robustness of single-sensor target detection, and satisfies the accuracy, real-time and environmental-adaptability requirements of target detection for environment understanding in driverless vehicles.
2. By exploiting the complementarity between 3D laser radar and camera data, the present invention provides a new approach for obtaining highly accurate target positions and classes in driverless environments; the combination realizes the complementary advantages of the two sensors and enhances the robustness of the detection algorithm.
Detailed description of the invention
Fig. 1 is the flow chart of the method of the invention;
Fig. 2(a) is the point cloud data acquired by the 3D laser radar, and Fig. 2(b) is the corresponding camera image;
Fig. 3 is the geometric principle of the ground point filtering of the present invention;
Fig. 4(a) is a schematic diagram of the extracted ground point cloud, and Fig. 4(b) is a schematic diagram of the non-ground point cloud after the ground points are filtered out;
Fig. 5 is a schematic diagram of the 3D regions of interest;
Fig. 6 is a schematic diagram of the 2D regions of interest generated by mapping the 3D regions of interest into the camera image;
Fig. 7 is the convolutional network model used by the present invention;
Fig. 8 is a schematic diagram of actual detection results of the method of the present invention.
Specific embodiment
To make the objectives, technical solutions and advantages of the present invention clearer, the invention is described below in conjunction with specific embodiments. As shown in Fig. 1, the method comprises the following steps:
Step 1: acquire 3D point cloud data and camera images of the surrounding environment with a 3D laser radar and a camera installed on the vehicle, and pre-process the 3D point cloud data.
In an embodiment of the present invention, the selected 3D laser radar (hereinafter referred to as the radar) is a Velodyne HDL-32E mounted on top of the vehicle; in this embodiment the vehicle is a driverless vehicle and the mounting height is 2.1 m. While the driverless vehicle is traveling, the radar scans the surrounding environment to acquire 3D point cloud data; Fig. 2(a) shows one frame of raw point cloud data obtained by scanning the environment. The radar consists of 32 individual lasers arranged as 32 scan lines on a column, performs 360° scans at a frequency of 10 Hz, and has a horizontal angular resolution of 0.16°; its vertical angular resolution is 1.33°, covering pitch angles from −30.67° to 10.67°.
In an embodiment of the present invention, the camera used to acquire image data is a Basler acA640-100gc color CCD camera with a resolution of 658 × 492. The radar and the camera are brought into a unified coordinate system by extrinsic calibration; they use the same frame rate and acquire synchronously, i.e. one frame of camera image corresponds to one frame of 3D point cloud data.
Each frame of point cloud data obtained by the radar scan is composed of a point set P_r of scanning points, and each scanning point p_i in P_r is described in spherical coordinates:
P_r = { p_i | γ_i, θ_i, φ_i, I_i }
where γ_i is the radial distance from the scanning point to the radar, θ_i and φ_i are the horizontal and vertical angles of the scanning point relative to the spherical coordinate system, I_i is the radar reflection intensity, and i is the index of the scanning point in the set. A single scan (one radar revolution) generates about 70,000 points and can describe all surrounding targets at detection distances of more than 70 meters; Fig. 2(a) and (b) show raw radar point cloud data and the corresponding camera image. For subsequent processing, the coordinates of each scanning point in the rectangular coordinate system must be computed.
Because the data generated by a radar scan covers a wide area and is voluminous, processing all of it cannot satisfy the real-time requirements of a driverless vehicle. It is therefore necessary to retain the data of the region of interest and filter out the data of inactive regions. In addition, for physical reasons the data returned by the radar contains some isolated noise points which, if not removed, may affect the precision of the candidate extraction; the noise points can be filtered out by a method based on point cloud density.
The pre-processing of the 3D point cloud data in step 1 includes coordinate system conversion and noise point removal, and specifically comprises the following steps:
Step 1.1: transform the point cloud data acquired by the radar into the rectangular coordinate system.
The point set P_r of the point cloud data is transformed into the rectangular coordinate system by computing the coordinates of each scanning point, giving each scanning point p_i the multi-parameter representation
p_i = (γ_i, θ_i, φ_i, I_i, x_i, y_i, z_i)
The rectangular coordinate system takes the geometric center of the radar as the origin, the vertical axis of the radar as the Z axis, and the vehicle forward direction as the Y axis, with the X axis determined from the Z and Y axes by the right-hand rule. The conversion is:
x_i = γ_i · cos φ_i · sin θ_i,  y_i = γ_i · cos φ_i · cos θ_i,  z_i = γ_i · sin φ_i  (1)
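The conversion of step 1.1 can be sketched as follows. Since the trigonometric form of equation (1) is not reproduced in this copy of the text, the formulas below are a standard spherical-to-Cartesian conversion under the stated axis convention (Z up along the radar axis, Y in the vehicle forward direction); the patent's exact form may differ.

```python
import numpy as np

def spherical_to_cartesian(gamma, theta_deg, phi_deg):
    """Convert radar returns (radial distance gamma, horizontal angle theta,
    vertical angle phi) to Cartesian coordinates in the radar frame:
    Z up along the radar axis, Y forward, X completing a right-handed system."""
    theta = np.deg2rad(theta_deg)
    phi = np.deg2rad(phi_deg)
    x = gamma * np.cos(phi) * np.sin(theta)
    y = gamma * np.cos(phi) * np.cos(theta)
    z = gamma * np.sin(phi)
    return np.stack([x, y, z], axis=-1)
```

A point at θ = 0°, φ = 0° lies straight ahead on the Y axis; a point at φ = 90° lies on the Z axis.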
Step 1.2: filter the points by region according to the rectangular coordinates: set the region-of-interest boundary and retain only the scanning points within it, i.e.
P_f = { p_i | −X < x_i < X, −Y < y_i < Y, Z_1 < z_i < Z_2 }  (2)
That is, when the coordinates (x_i, y_i, z_i) of a scanning point p_i lie within the region-of-interest boundary, p_i is added to the point set P_f, yielding the point set P_f of region-of-interest scanning points.
In this embodiment, the region-of-interest boundary of the driverless vehicle uses X = 15 m, Y = 50 m, Z_1 = −2.1 m and Z_2 = 0.5 m.
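The region filter of step 1.2 reduces to a box test per point. The printed boundary condition for y is garbled in this copy, so the sketch assumes a symmetric bound |y| < Y; the limits follow the embodiment values.

```python
import numpy as np

def crop_region_of_interest(points, x_lim=15.0, y_lim=50.0,
                            z_min=-2.1, z_max=0.5):
    """Keep only scanning points inside the region-of-interest box of Eq. (2).
    points is an (N, 3) array of (x, y, z); the default limits follow the
    embodiment (X = 15 m, Y = 50 m, Z1 = -2.1 m, Z2 = 0.5 m).
    The symmetric bound on y is an assumption."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    mask = (np.abs(x) < x_lim) & (np.abs(y) < y_lim) & (z > z_min) & (z < z_max)
    return points[mask]
```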
Step 1.3: noise point removal.
For each scanning point p_i in the point set P_f, search for its neighboring points within a radius R of p_i. If p_i has fewer than M neighbors, mark p_i as a noise point and remove it from P_f.
Traversing P_f, all noise points are found and removed, giving the pre-processed point set P. In this embodiment, the radius R is 0.5 m and M is 3.
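The density-based noise removal of step 1.3 can be sketched with a brute-force neighbor count (a real implementation would use a k-d tree for the radius search); R = 0.5 m and M = 3 follow the embodiment.

```python
import numpy as np

def remove_noise(points, radius=0.5, min_neighbors=3):
    """Drop isolated points: a point is noise if fewer than min_neighbors
    other points lie within `radius` of it (R = 0.5 m, M = 3 in the
    embodiment). O(N^2) pairwise distances; fine for a sketch."""
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    counts = (dists < radius).sum(axis=1) - 1  # exclude the point itself
    return points[counts >= min_neighbors]
```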
Step 2: filter out the ground points in the 3D point cloud data, perform spatial clustering on the remaining non-ground points, and extract the 3D regions of interest of the targets.
In the radar point cloud, the scanning points on different targets are connected together by ground points and are difficult to separate, so the ground points must first be filtered out to obtain the non-ground points.
Step 2.1: map the point set P into a multi-dimensional matrix whose number of rows equals the number of radar scan lines and whose number of columns equals the number of points in one scan line; in this embodiment there are 32 scan lines and 2250 points per scan line. The row r and column c to which a scanning point p_i in P is mapped are computed as
r = (φ_i − φ_min) / Δφ  (3)
c = (θ_i + 180°) / Δθ  (4)
where Δθ and Δφ are the horizontal and vertical angular resolutions of the radar, θ_i and φ_i are the horizontal and vertical angles of the scan line containing the point, and φ_min is the vertical angle of the lowest scan line.
Step 2.2: let b_{r,c} denote the element in row r and column c of the matrix, and compute the depth value of the point p_i in b_{r,c} as
p_i^depth = sqrt(x_i^2 + y_i^2)  (5)
where x_i and y_i are the X- and Y-axis coordinates of the scanning point p_i corresponding to b_{r,c} in the rectangular coordinate system.
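Steps 2.1 and 2.2 can be sketched as follows. Note that formulas (3) and (4) as printed in this copy map the horizontal angle to the row, which is inconsistent with rows indexing the 32 scan lines; the sketch follows the row-from-vertical-angle reading, so the HDL-32E parameters yield a 32 × 2250 matrix.

```python
import numpy as np

def to_range_image(theta_deg, phi_deg, d_theta=0.16, d_phi=1.33,
                   phi_min=-30.67):
    """Map scanning points to (row, col) cells of the multi-dimensional
    matrix: rows index scan lines by vertical angle phi, columns index
    points along a line by horizontal angle theta. Defaults are the
    HDL-32E resolutions from the embodiment."""
    row = np.floor((np.asarray(phi_deg) - phi_min) / d_phi).astype(int)
    col = np.floor((np.asarray(theta_deg) + 180.0) / d_theta).astype(int)
    return row, col

def depth(x, y):
    """Horizontal depth value of a cell's point (Eq. 5)."""
    return np.hypot(x, y)
```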
Step 2.3: compute the probability P(b_{r,c}) that the scanning point corresponding to matrix element b_{r,c} is a ground point; if the probability exceeds a threshold, label the scanning point of b_{r,c} as a ground point. Fig. 3 shows the geometric principle of the ground point filtering of the present invention.
The calculation of the ground-point probability P(b_{r,c}) proceeds as follows:
Step 2.3.1: compute the measured depth difference M_d(b_{r−1,c}, b_{r,c}) between the points of adjacent elements b_{r−1,c} and b_{r,c} in the same column of the matrix:
M_d(b_{r−1,c}, b_{r,c}) = | p_{r,c}^depth − p_{r−1,c}^depth |  (6)
Step 2.3.2: according to the distribution of the radar point cloud on a plane, estimate the expected depth difference E_d(b_{r−1,c}, b_{r,c}) between the points of the adjacent elements b_{r−1,c} and b_{r,c} in the same column (equation (7)), where h is the mounting height of the radar, Δφ is the vertical angular resolution of the radar, φ_{r−1} and φ_r are the vertical angles of the (r−1)-th and r-th scan lines, and γ_{r−1} is the radial distance of the scanning point of b_{r−1,c} from the radar center.
Step 2.3.3: the probability P(b_{r,c}) that the scanning point p_i of element b_{r,c} is a ground point is then computed from M_d and E_d (equation (8)); when P(b_{r,c}) exceeds the threshold 0.8, the scanning point of b_{r,c} is labeled as a ground point.
Step 2.4: traverse every element of the matrix, label all ground points according to the method of step 2.3 (Fig. 4(a)), and remove them from the point set P; the remaining non-ground points are denoted as the point set P_o, as shown in Fig. 4(b).
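Equations (7) and (8) are not reproduced in this copy of the text, so the sketch below only illustrates the idea: on a flat ground plane below a radar mounted at height h, the depth difference between adjacent rings is geometrically predictable, and a cell whose measured difference agrees with the prediction is likely ground. Both the flat-ground prediction and the min/max agreement score are illustrative stand-ins, not the patented formulas.

```python
import math

def expected_ground_diff(h, phi_prev_deg, phi_curr_deg):
    """Expected depth difference between adjacent scan rings if both
    points lie on a flat ground plane below a radar at height h.
    Stand-in for Eq. (7); the patented formula also involves the
    measured radial distance gamma_{r-1} and may differ in detail."""
    d_prev = h / math.tan(math.radians(-phi_prev_deg))  # lower ring depth
    d_curr = h / math.tan(math.radians(-phi_curr_deg))  # upper ring depth
    return d_curr - d_prev

def ground_probability(measured_diff, expected_diff):
    """Agreement score between the measured difference (Eq. 6) and the
    expected one; stand-in for Eq. (8). Cells scoring above 0.8 would be
    marked as ground."""
    hi = max(measured_diff, expected_diff)
    if hi < 1e-9:
        return 1.0  # both differences vanish: perfect agreement
    return min(measured_diff, expected_diff) / hi
```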
Step 2.5: perform spatial clustering on the non-ground points to obtain the geometric feature information of the targets and thereby the 3D regions of interest of the targets. The specific steps are:
Step 2.5.1: create the first cluster C_1 and assign the first scanning point p_1 of the non-ground point set P_o to it.
Step 2.5.2: for every other point p_i ∈ P_o (i ≠ 1), compute the minimum Euclidean distance between p_i and the scanning points of the nearest cluster C_j. If this minimum is below a threshold d, assign p_i to cluster C_j (j ≤ n, where n is the current number of clusters); otherwise create a new cluster C_{n+1} and assign p_i to it. Continue until every scanning point in P_o has been assigned to a cluster.
Step 2.5.3: let Γ denote the set of clusters. For each cluster C_j in Γ, use the spatial distribution of its scanning points to compute the minimum 3D axis-aligned rectangular bounding box of the cluster. If the size of the bounding box exceeds a threshold size, mark the cluster as a pseudo target region; otherwise mark it as a candidate target region. In this embodiment the threshold size is 10 m in length, 5 m in width and 3 m in height.
Step 2.5.4: retain the bounding boxes of all candidate target regions as the extracted 3D regions of interest of the targets; Fig. 5 shows the finally retained 3D regions of interest of the targets.
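Steps 2.5.1–2.5.4 can be sketched as follows. The distance threshold d = 0.5 m is an assumed value (the text does not give one), and the mapping of length/width/height onto the x/y/z extents of the axis-aligned box is an assumption for illustration.

```python
import numpy as np

def cluster_points(points, d=0.5):
    """Incremental Euclidean clustering (steps 2.5.1-2.5.2): assign each
    point to the nearest existing cluster if its minimum distance to any
    of that cluster's points is below d; otherwise start a new cluster."""
    clusters = []
    for p in points:
        best, best_dist = None, np.inf
        for idx, c in enumerate(clusters):
            dist = np.linalg.norm(np.asarray(c) - p, axis=1).min()
            if dist < best_dist:
                best, best_dist = idx, dist
        if best is not None and best_dist < d:
            clusters[best].append(p)
        else:
            clusters.append([p])
    return clusters

def is_candidate(cluster, max_l=10.0, max_w=5.0, max_h=3.0):
    """Step 2.5.3: a cluster is a candidate target only if its axis-aligned
    bounding box is no larger than the threshold size (10 m x 5 m x 3 m);
    larger boxes are marked as pseudo targets."""
    ext = np.ptp(np.asarray(cluster), axis=0)  # box extent along each axis
    return ext[0] <= max_l and ext[1] <= max_w and ext[2] <= max_h
```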
Step 3: calibrate the extrinsic parameters between the coordinate systems of the 3D laser radar and the camera, map the 3D regions of interest of the targets into the corresponding camera image according to the calibration parameters, and extract the corresponding 2D regions of interest from the camera image.
Specifically, using a checkerboard calibration board as the target, feature points are marked on the board while the point cloud data of the radar and the image data of the camera are acquired simultaneously; the calibration parameters, i.e. the rotation matrix and translation vector between the radar coordinate system and the camera coordinate system, are then computed from the correspondence of the feature points in the radar and camera coordinates.
Finally, the 3D regions of interest of the targets obtained in step 2 are mapped into the corresponding camera image according to the calibration parameters, and the corresponding 2D axis-aligned rectangular bounding boxes are extracted from the camera image as the 2D regions of interest; Fig. 6 shows the 2D regions of interest generated by mapping the point cloud into the camera image.
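The mapping of step 3 is a standard rigid transform followed by a pinhole projection. The patent only names the rotation matrix and translation vector; the 3 × 3 intrinsic matrix K below is an additional assumption needed for any concrete projection.

```python
import numpy as np

def project_to_image(points_radar, R, t, K):
    """Project radar-frame 3D points into the image with the calibrated
    extrinsics (R, t) and assumed camera intrinsics K, then take the 2D
    axis-aligned box enclosing the projections as the 2D region of
    interest. points_radar is (N, 3); returns (u_min, v_min, u_max, v_max)."""
    cam = R @ points_radar.T + t.reshape(3, 1)  # radar frame -> camera frame
    uvw = K @ cam
    uv = uvw[:2] / uvw[2]                       # perspective division
    u_min, v_min = uv.min(axis=1)
    u_max, v_max = uv.max(axis=1)
    return (u_min, v_min, u_max, v_max)
```

In practice one would also clip the box to the image bounds and discard points with non-positive depth in the camera frame.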
Step 4: perform feature extraction on the 2D regions of interest with a deep convolutional network, so as to locate and identify the targets within them.
In this scheme, the deep convolutional neural network performs feature extraction on the 2D regions of interest of the camera image and carries out bounding box regression and recognition of the targets in the regions: the bounding box enclosing the target position is precisely located by regression, and the target is identified.
To improve the detection accuracy for small targets, the present invention uses the VGG16 convolutional network model shown in Fig. 7: the feature maps output by the 'conv3', 'conv4' and 'conv5' convolutional layers are first normalized with the L2 normalization method and then combined, so that the final target features carry different scales; a 1 × 1 convolution is applied to the combined features, and the resulting feature vector is fed to the last two fully connected layers of the network to locate and identify the targets in the 2D regions of interest. Fig. 8 shows detection results of the method of the present invention.
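The multi-scale fusion can be sketched in plain numpy: channel-wise L2 normalization of each feature map, channel concatenation, and a 1 × 1 convolution expressed as a per-pixel linear projection. The maps are assumed pre-resized to a common spatial size (conv3/4/5 of VGG16 differ in resolution, so a real implementation must pool or upsample first), and the random projection weights stand in for learned parameters.

```python
import numpy as np

def l2_normalize(feat, eps=1e-12):
    """L2-normalize a (C, H, W) feature map across channels, as applied
    to the conv3/conv4/conv5 outputs before combining them."""
    norm = np.sqrt((feat ** 2).sum(axis=0, keepdims=True)) + eps
    return feat / norm

def fuse_features(feats, out_channels=512):
    """Normalize each map, concatenate along channels, and apply a 1x1
    convolution (a per-pixel linear projection). feats is a list of
    (C_i, H, W) arrays sharing the same H and W."""
    fused = np.concatenate([l2_normalize(f) for f in feats], axis=0)
    c, h, w = fused.shape
    rng = np.random.default_rng(0)
    weights = rng.standard_normal((out_channels, c)) / np.sqrt(c)
    return (weights @ fused.reshape(c, h * w)).reshape(out_channels, h, w)
```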
In this scheme the convolutional network is trained on 80% of the pedestrian, bicycle and vehicle samples of the KITTI public object detection training set, yielding model parameters for detecting vehicles, pedestrians and bicycles. For testing, besides the remaining 20% of the pedestrian, bicycle and vehicle samples of the KITTI object detection training set, data acquired with our HDL-32E 3D laser radar and Basler camera are also used to test the detection of pedestrians, bicycles and vehicles, obtaining the position and class of the 2D rectangular bounding box of each target and thereby realizing target localization and identification.
To further optimize the result, this scheme also includes: Step 5, optimizing the result of step 4 with the non-maximum suppression algorithm to obtain the final target bounding-box positions, classes and target distance information.
To reduce false targets and improve detection accuracy, 2D rectangular bounding boxes whose probability is below 0.5 are removed by the non-maximum suppression algorithm, yielding the final target bounding-box positions, classes and target distance information.
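The score filtering and greedy non-maximum suppression described above can be sketched as follows. The 0.5 thresholds come from the text; the IoU threshold and the box examples are illustrative assumptions.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5, score_thresh=0.5):
    """Drop boxes scoring below score_thresh, then greedily suppress any
    box whose IoU with a higher-scoring kept box exceeds iou_thresh.
    Boxes are (x1, y1, x2, y2)."""
    mask = scores >= score_thresh
    boxes, scores = boxes[mask], scores[mask]
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        # IoU of the current best box against the remaining candidates
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_thresh]
    return boxes[keep], scores[keep]

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30], [0, 0, 5, 5]], float)
scores = np.array([0.9, 0.8, 0.7, 0.3])
kept_boxes, kept_scores = nms(boxes, scores)
print(len(kept_boxes))  # 2
```

Here the 0.3-score box is removed by the probability threshold, and the box heavily overlapping the 0.9-score box is suppressed as a duplicate detection.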
Experiments show that the method can still effectively detect, in real time, each target in front of the unmanned vehicle across different traffic scenes.

Claims (10)

1. An object detection method based on a 3D laser radar and image data, comprising the following steps:
Step 1, acquiring 3D point cloud data and a camera image of the surrounding environment with a 3D laser radar and a camera mounted on the vehicle, and preprocessing the 3D point cloud data;
Step 2, filtering out the ground points in the 3D point cloud data, spatially clustering the remaining non-ground points, and extracting the 3D regions of interest of targets;
Step 3, calibrating the extrinsic parameters between the coordinate systems of the 3D laser radar and the camera, mapping the 3D regions of interest of the targets into the corresponding camera image according to the calibration parameters, and extracting the corresponding 2D regions of interest in the camera image;
Step 4, extracting features from the 2D regions of interest with a deep convolutional network, thereby locating and identifying the targets in the 2D regions of interest.
2. The object detection method based on a 3D laser radar and image data as claimed in claim 1, wherein preprocessing the 3D point cloud data in step 1 comprises:
Step 1.1, transforming the point cloud data obtained by the radar into a rectangular coordinate system:
transforming the point set Pr of the point cloud data into the rectangular coordinate system and computing the coordinates of each scanning point in that system, giving each scanning point pi the multi-parameter representation
pi = (γi, θi, φi, Ii, xi, yi, zi)
where γi denotes the radial distance from the scanning point to the radar, θi and φi denote the horizontal and vertical angles of the scanning point in the spherical coordinate system, Ii denotes the radar reflection intensity, and xi, yi, zi are the coordinates of scanning point pi in the rectangular coordinate system;
the rectangular coordinate system takes the geometric center of the radar as its origin, the vertical axis of the radar as the Z axis and the vehicle's forward direction as the Y axis, with the X axis determined from the Z and Y axes by the right-hand rule; the conversion is as follows:
Step 1.2, filtering by region according to the rectangular coordinates: set the region-of-interest boundary and retain the scanning points inside it, that is,
Pf = {pi | -X < xi < X, -Y < yi < Y, Z1 < zi < Z2}   (2)
when the coordinates (xi, yi, zi) of scanning point pi lie within the region-of-interest boundary -X < xi < X, -Y < yi < Y, Z1 < zi < Z2, scanning point pi is added to the point set Pf, thus obtaining the point set Pf of region-of-interest scanning points.
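Step 1.1 and Step 1.2 of claim 2 can be sketched as follows. Equation (1) is rendered as an image in the source, so the spherical-to-Cartesian relations below (azimuth θ measured from the forward Y axis, elevation φ from the horizontal plane) are an assumed standard convention, not the patent's exact formula; the symmetric Y bound also assumes the garbled inequality in the text reads -Y < yi < Y.

```python
import numpy as np

def to_cartesian(r, theta_deg, phi_deg):
    """Assumed stand-in for equation (1): convert a radar return
    (radial distance r, azimuth theta from the forward Y axis,
    elevation phi from the horizontal plane) to x, y, z."""
    t, p = np.radians(theta_deg), np.radians(phi_deg)
    x = r * np.cos(p) * np.sin(t)
    y = r * np.cos(p) * np.cos(t)
    z = r * np.sin(p)
    return x, y, z

def roi_filter(points, X, Y, Z1, Z2):
    """Equation (2): keep scanning points inside the box
    -X < x < X, -Y < y < Y, Z1 < z < Z2."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    mask = (-X < x) & (x < X) & (-Y < y) & (y < Y) & (Z1 < z) & (z < Z2)
    return points[mask]

x, y, z = to_cartesian(10.0, 0.0, 0.0)      # a level return straight ahead
pts = np.array([[0.0, 10.0, 0.5],           # inside the ROI
                [50.0, 10.0, 0.5],          # too far sideways
                [0.0, 10.0, 9.0]])          # too high
print(roi_filter(pts, X=20, Y=40, Z1=-2, Z2=3))
```

With these example bounds, only the first point survives the region filter.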
3. The object detection method based on a 3D laser radar and image data as claimed in claim 1, wherein preprocessing the 3D point cloud data further comprises:
Step 1.3, filtering out noise points:
for each scanning point pi in the point set Pf, search for the neighbouring points of pi within radius R; if the number of neighbours of pi is less than M, mark pi as a noise point and remove it from the point set Pf; traverse the point set Pf, find all noise points and remove them from Pf, obtaining the preprocessed point set P.
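The radius-based noise filter of Step 1.3 can be sketched as below. The brute-force pairwise distance computation and the R and M values are illustrative; a k-d tree would replace the O(n²) search in practice.

```python
import numpy as np

def remove_noise(points, R=0.5, M=3):
    """Step 1.3 sketch: a point with fewer than M neighbours within
    radius R is treated as noise and dropped."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    n_neighbours = (dist < R).sum(axis=1) - 1   # exclude the point itself
    return points[n_neighbours >= M]

# A tight 5-point cluster plus one isolated outlier.
cluster = np.zeros((5, 3)) + np.arange(5)[:, None] * 0.1
outlier = np.array([[10.0, 10.0, 10.0]])
pts = np.vstack([cluster, outlier])
print(remove_noise(pts, R=1.0, M=3).shape)  # (5, 3)
```

The five clustered points each have at least three neighbours within 1 m and are kept; the isolated point has none and is removed.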
4. The object detection method based on a 3D laser radar and image data as claimed in claim 1, wherein filtering out the ground points in the 3D point cloud data in step 2 comprises:
Step 2.1, mapping the point set P into a multi-dimensional matrix, where the number of rows of the matrix equals the number of radar scan lines and the number of columns equals the number of points contained in one scan line; the row r and column c at which scanning point pi in the point set P is placed in the matrix are computed as follows:
r = (θi + 180)/Δθ   (3)
c = φi/Δφ   (4)
where Δθ and Δφ denote the horizontal and vertical angular resolutions of the radar, and θi and φi denote the horizontal angle and vertical angle of the scan line containing the scanning point;
Step 2.2, with br,c denoting the element in row r, column c of the matrix, computing the depth value pdepth i of the point pi in br,c as follows:
where xi and yi are the coordinates of the scanning point pi corresponding to br,c along the X axis and Y axis of the rectangular coordinate system;
Step 2.3, computing the probability P(br,c) that the scanning point corresponding to matrix element br,c is a ground point, and marking that scanning point as a ground point if the probability exceeds a threshold;
Step 2.4, traversing every element of the matrix, marking all ground points in the matrix by the method of step 2.3 and removing the ground points from the point set P; the remaining non-ground points are denoted as point set Po.
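Equations (3)-(4) and the depth value of Step 2.2 can be sketched as follows. Equation (5) is an image in the source; since the text says only xi and yi are involved, the usual planar range sqrt(x² + y²) is assumed here. The angular resolutions in the example are illustrative, not the HDL-32E's actual values.

```python
import numpy as np

def grid_index(theta_deg, phi_deg, d_theta, d_phi):
    """Equations (3)-(4): map a scanning point's horizontal angle theta
    and vertical angle phi to the (row, col) cell of the matrix."""
    r = int((theta_deg + 180.0) / d_theta)
    c = int(phi_deg / d_phi)
    return r, c

def depth(x, y):
    """Assumed stand-in for equation (5) (an image in the source):
    planar depth of a point from its x, y coordinates."""
    return np.hypot(x, y)

print(grid_index(0.0, 10.0, d_theta=0.5, d_phi=2.0))  # (360, 5)
print(depth(3.0, 4.0))                                # 5.0
```

The +180 shift in equation (3) maps an azimuth range of (-180, 180] onto non-negative row indices.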
5. The object detection method based on a 3D laser radar and image data as claimed in claim 4, wherein the ground-point probability P(br,c) of step 2.3 is computed as follows:
Step 2.3.1, computing the measured depth difference Md(br-1,c, br,c) between the points in the adjacent elements br-1,c and br,c of the same matrix column, as follows:
Md(br-1,c, br,c) = |pdepth r,c − pdepth r-1,c|   (6)
Step 2.3.2, estimating the expected depth difference Ed(br-1,c, br,c) between the points in the adjacent elements br-1,c and br,c of the same column from the planar distribution of the radar point cloud, computed as follows:
where h denotes the mounting height of the radar, Δφ denotes the radar's vertical angular resolution, φr-1 and φr denote the vertical angles of the (r-1)-th and r-th scan lines, and γr-1 denotes the radial distance of the scanning point corresponding to element br-1,c from the radar center;
Step 2.3.3, the probability P(br,c) that the scanning point pi corresponding to element br,c is a ground point is then:
when the probability P(br,c) is greater than the threshold 0.8, the scanning point pi corresponding to element br,c is marked as a ground point.
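The comparison behind claim 5 can be sketched as follows, with strong caveats: only equation (6) is given in the text, while equations (7) and (8) are images in the source. The `expected_diff` and `ground_probability` below are therefore hypothetical stand-ins, built from the flat-ground geometry the claim describes (a beam at downward elevation φ from height h lands at planar range h/tan|φ|) and a measured/expected similarity ratio, not the patent's actual formulas. The mounting height 1.9 m is an example value.

```python
import numpy as np

def measured_diff(depth_prev, depth_curr):
    """Equation (6): measured depth difference between vertically
    adjacent cells b_{r-1,c} and b_{r,c}."""
    return abs(depth_curr - depth_prev)

def expected_diff(h, phi_prev_deg, phi_curr_deg):
    """Hypothetical stand-in for equation (7): on a flat ground plane,
    a beam with downward elevation angle phi from sensor height h lands
    at planar range h / tan(|phi|); take the difference of two rings."""
    rp = h / np.tan(np.radians(abs(phi_prev_deg)))
    rc = h / np.tan(np.radians(abs(phi_curr_deg)))
    return abs(rc - rp)

def ground_probability(md, ed):
    """Hypothetical stand-in for equation (8): the ratio is 1 when the
    measured and expected differences agree exactly (flat ground) and
    drops toward 0 on obstacles; the claim thresholds it at 0.8."""
    return min(md, ed) / max(md, ed)

h = 1.9                                         # example mounting height (m)
ed = expected_diff(h, -10.0, -9.0)              # two adjacent downward rings
p_ground = ground_probability(measured_diff(10.77, 12.00), ed)
print(p_ground > 0.8)
```

For ground returns the measured ring spacing closely matches the flat-plane prediction, so the ratio stays near 1; an obstacle compresses the measured spacing and pushes the ratio below the 0.8 threshold.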
6. The object detection method based on a 3D laser radar and image data as claimed in claim 1, wherein spatially clustering the remaining non-ground points and extracting the 3D regions of interest of targets in step 2 comprises:
Step 2.5.1, creating the first cluster C1 and assigning the first scanning point p1 of the non-ground point set Po to cluster C1;
Step 2.5.2, for each other point pi ∈ Po (i ≠ 1), computing the minimum Euclidean distance between pi and the scanning points of its nearest cluster Cj; if the minimum is less than the threshold d, assigning pi to cluster Cj (j ≤ n, where n denotes the current number of clusters); otherwise creating a new, (n+1)-th cluster Cn+1 and assigning pi to Cn+1, until every scanning point in Po has been assigned to a cluster;
Step 2.5.3, with Γ denoting the set of clusters, computing for each cluster Cj in Γ the minimum 3D axis-aligned rectangular bounding box of the cluster from the spatial distribution of the scanning points it contains; if the size of the bounding box exceeds a threshold size, marking the cluster as a pseudo-target region, otherwise marking it as a candidate target region;
Step 2.5.4, retaining all marked bounding boxes as the extracted target 3D regions of interest.
7. The object detection method based on a 3D laser radar and image data as claimed in claim 1, wherein calibrating the extrinsic parameters between the coordinate systems of the 3D laser radar and the camera in step 3, mapping the 3D regions of interest of the targets into the corresponding camera image according to the calibration parameters, and extracting the corresponding 2D regions of interest in the camera image comprises:
using a checkerboard calibration board as the target, marking feature points on the board while acquiring the radar point cloud data and the camera image data, then computing the calibration parameters, namely the rotation matrix and translation vector between the radar coordinate system and the camera coordinate system, from the correspondence of the feature points on the board in the radar and camera coordinate systems;
finally mapping the 3D region of interest of each target into the corresponding camera image according to the calibration parameters, and extracting the corresponding 2D axis-aligned rectangular bounding box in the camera image as the 2D region of interest.
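The projection at the end of claim 7 can be sketched as follows. The claim calibrates only the extrinsics (R, t); the pinhole intrinsic matrix K, the identity extrinsics in the example, and the alignment of the lidar and camera axes are assumptions for arithmetic illustration (in a real rig the camera's Z axis points forward and the frames differ).

```python
import numpy as np

def project_to_image(points_lidar, R, t, K):
    """Claim 7 sketch: transform 3D points from the radar frame into
    the camera frame with the calibrated extrinsics (R, t), project
    through an assumed pinhole intrinsic matrix K, and return the 2D
    axis-aligned bounding box (u1, v1, u2, v2) of the projections."""
    cam = (R @ points_lidar.T).T + t       # radar frame -> camera frame
    uvw = (K @ cam.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]          # perspective division
    (u1, v1), (u2, v2) = uv.min(axis=0), uv.max(axis=0)
    return (u1, v1, u2, v2)

K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])  # example intrinsics
R, t = np.eye(3), np.array([0.0, 0.0, 0.0])                  # identity extrinsics
# Four corners of a 2 m x 1.5 m 3D box, 10 m in front of the camera.
corners = np.array([[-1, 0, 10], [1, 0, 10], [-1, 1.5, 10], [1, 1.5, 10]], float)
print(project_to_image(corners, R, t, K))
```

Projecting every corner of the 3D region of interest and taking the min/max of the pixel coordinates yields exactly the 2D axis-aligned rectangular bounding box the claim extracts.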
8. The object detection method based on a 3D laser radar and image data as claimed in claim 1, wherein extracting features from the 2D regions of interest with a deep convolutional network in step 4, thereby locating and identifying the targets in the 2D regions of interest, comprises:
the deep convolutional network uses VGG16; the feature maps output by the 'conv3', 'conv4' and 'conv5' convolutional layers of the model are first normalized and then combined, so that the final target feature carries multiple scales; a 1×1 convolution is applied to the combined features, and the resulting feature vector is fed to the last two fully connected layers of the convolutional network to locate and identify the targets in the 2D regions of interest.
9. The object detection method based on a 3D laser radar and image data as claimed in claim 1, wherein the method further comprises:
Step 5, optimizing the result of step 4 with a non-maximum suppression algorithm.
10. An object detection system based on a 3D laser radar and image data, comprising, connected in sequence, a data acquisition and preprocessing module, a 3D region-of-interest acquisition module, a 2D region-of-interest acquisition module, and a locating and identification module, wherein:
the data acquisition and preprocessing module acquires 3D point cloud data and a camera image of the surrounding environment with a 3D laser radar and a camera mounted on the vehicle, and preprocesses the 3D point cloud data;
the 3D region-of-interest acquisition module filters out the ground points in the 3D point cloud data, spatially clusters the remaining non-ground points, and extracts the 3D regions of interest of targets;
the 2D region-of-interest acquisition module calibrates the extrinsic parameters between the coordinate systems of the 3D laser radar and the camera, maps the 3D regions of interest of the targets into the corresponding camera image according to the calibration parameters, and extracts the corresponding 2D regions of interest in the camera image;
the locating and identification module extracts features from the 2D regions of interest with a deep convolutional network, thereby locating and identifying the targets in the 2D regions of interest.
CN201810594692.5A 2018-06-11 2018-06-11 Target detection method based on 3D laser radar and image data Active CN109100741B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810594692.5A CN109100741B (en) 2018-06-11 2018-06-11 Target detection method based on 3D laser radar and image data

Publications (2)

Publication Number Publication Date
CN109100741A (en) 2018-12-28
CN109100741B CN109100741B (en) 2020-11-20

Family

ID=64796805

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810594692.5A Active CN109100741B (en) 2018-06-11 2018-06-11 Target detection method based on 3D laser radar and image data

Country Status (1)

Country Link
CN (1) CN109100741B (en)

Cited By (57)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109934228A (en) * 2019-03-18 2019-06-25 上海盎维信息技术有限公司 3D point cloud processing method and processing device based on artificial intelligence
CN109949372A (en) * 2019-03-18 2019-06-28 北京智行者科技有限公司 A kind of laser radar and vision combined calibrating method
CN109948661A (en) * 2019-02-27 2019-06-28 江苏大学 A kind of 3D vehicle checking method based on Multi-sensor Fusion
CN109978954A (en) * 2019-01-30 2019-07-05 杭州飞步科技有限公司 The method and apparatus of radar and camera combined calibrating based on cabinet
CN109978955A (en) * 2019-03-11 2019-07-05 武汉环宇智行科技有限公司 A kind of efficient mask method for combining laser point cloud and image
CN110008891A (en) * 2019-03-29 2019-07-12 厦门金龙旅行车有限公司 A kind of pedestrian detection localization method, device, cart-mounted computing device and storage medium
CN110018470A (en) * 2019-03-01 2019-07-16 北京纵目安驰智能科技有限公司 Based on example mask method, model, terminal and the storage medium merged before multisensor
CN110033457A (en) * 2019-03-11 2019-07-19 北京理工大学 A kind of target point cloud dividing method
CN110080326A (en) * 2019-04-29 2019-08-02 北京拓疆者智能科技有限公司 A kind of discharge method, controller, excavator, electronic equipment and storage medium
CN110146865A (en) * 2019-05-31 2019-08-20 阿里巴巴集团控股有限公司 Target identification method and device for radar image
CN110161464A (en) * 2019-06-14 2019-08-23 成都纳雷科技有限公司 A kind of Radar Multi Target clustering method and device
CN110276793A (en) * 2019-06-05 2019-09-24 北京三快在线科技有限公司 A kind of method and device for demarcating three-dimension object
CN110363158A (en) * 2019-07-17 2019-10-22 浙江大学 A kind of millimetre-wave radar neural network based cooperates with object detection and recognition method with vision
CN110361717A (en) * 2019-07-31 2019-10-22 苏州玖物互通智能科技有限公司 Laser radar-camera combined calibration target and combined calibration method
CN110458780A (en) * 2019-08-14 2019-11-15 上海眼控科技股份有限公司 3D point cloud data de-noising method, apparatus, computer equipment and readable storage medium storing program for executing
CN110472553A (en) * 2019-08-12 2019-11-19 北京易航远智科技有限公司 Target tracking method, computing device and the medium of image and laser point cloud fusion
CN110503093A (en) * 2019-07-24 2019-11-26 中国航空无线电电子研究所 Area-of-interest exacting method based on disparity map DBSCAN cluster
CN110531340A (en) * 2019-08-22 2019-12-03 吴文吉 A kind of identifying processing method based on deep learning of laser radar point cloud data
CN110632617A (en) * 2019-09-29 2019-12-31 北京邮电大学 Laser radar point cloud data processing method and device
CN110909656A (en) * 2019-11-18 2020-03-24 中电海康集团有限公司 Pedestrian detection method and system with integration of radar and camera
CN111047901A (en) * 2019-11-05 2020-04-21 珠海格力电器股份有限公司 Parking management method, parking management device, storage medium and computer equipment
CN111079652A (en) * 2019-12-18 2020-04-28 北京航空航天大学 3D target detection method based on point cloud data simple coding
CN111077506A (en) * 2019-12-12 2020-04-28 苏州智加科技有限公司 Method, device and system for calibrating millimeter wave radar
CN111179331A (en) * 2019-12-31 2020-05-19 智车优行科技(上海)有限公司 Depth estimation method, depth estimation device, electronic equipment and computer-readable storage medium
CN111427032A (en) * 2020-04-24 2020-07-17 森思泰克河北科技有限公司 Room wall contour recognition method based on millimeter wave radar and terminal equipment
WO2020168464A1 (en) * 2019-02-19 2020-08-27 SZ DJI Technology Co., Ltd. Local sensing based autonomous navigation, and associated systems and methods
CN111598770A (en) * 2020-05-15 2020-08-28 弗徕威智能机器人科技(上海)有限公司 Object detection method and device based on three-dimensional data and two-dimensional image
CN111626288A (en) * 2019-02-28 2020-09-04 深圳市速腾聚创科技有限公司 Data processing method, data processing device, computer equipment and storage medium
CN111638499A (en) * 2020-05-08 2020-09-08 上海交通大学 Camera-laser radar relative external reference calibration method based on laser radar reflection intensity point characteristics
CN111736114A (en) * 2020-08-21 2020-10-02 武汉煜炜光学科技有限公司 Method for improving data transmission speed of laser radar and laser radar
CN111950543A (en) * 2019-05-14 2020-11-17 北京京东尚科信息技术有限公司 Target detection method and device
CN111964673A (en) * 2020-08-25 2020-11-20 一汽解放汽车有限公司 Unmanned vehicle positioning system
CN112034432A (en) * 2019-06-03 2020-12-04 华为技术有限公司 Radar target clustering method and related device
CN112219206A (en) * 2019-07-25 2021-01-12 北京航迹科技有限公司 System and method for determining pose
CN112270694A (en) * 2020-07-07 2021-01-26 中国人民解放军61540部队 Method for detecting urban environment dynamic target based on laser radar scanning pattern
CN112543877A (en) * 2019-04-03 2021-03-23 华为技术有限公司 Positioning method and positioning device
WO2021052121A1 (en) * 2019-09-20 2021-03-25 于毅欣 Object identification method and apparatus based on laser radar and camera
CN112614189A (en) * 2020-12-09 2021-04-06 中国北方车辆研究所 Combined calibration method based on camera and 3D laser radar
WO2021097807A1 (en) * 2019-11-22 2021-05-27 深圳市大疆创新科技有限公司 Method and device for calibrating external parameters of detection device, and mobile platform
CN112989877A (en) * 2019-12-13 2021-06-18 阿里巴巴集团控股有限公司 Method and device for labeling object in point cloud data
CN113064179A (en) * 2021-03-22 2021-07-02 上海商汤临港智能科技有限公司 Point cloud data screening method and vehicle control method and device
CN113095324A (en) * 2021-04-28 2021-07-09 合肥工业大学 Classification and distance measurement method and system for cone barrel
CN113255560A (en) * 2021-06-09 2021-08-13 深圳朗道智通科技有限公司 Target detection system based on image and laser data under automatic driving scene
WO2021166912A1 (en) * 2020-02-18 2021-08-26 株式会社デンソー Object detection device
CN113359148A (en) * 2020-02-20 2021-09-07 百度在线网络技术(北京)有限公司 Laser radar point cloud data processing method, device, equipment and storage medium
CN113436273A (en) * 2021-06-28 2021-09-24 南京冲浪智行科技有限公司 3D scene calibration method, calibration device and calibration application thereof
WO2021189375A1 (en) * 2020-03-26 2021-09-30 Baidu.Com Times Technology (Beijing) Co., Ltd. A point cloud feature-based obstacle filter system
CN113671458A (en) * 2020-05-13 2021-11-19 华为技术有限公司 Target object identification method and device
CN113711273A (en) * 2019-04-25 2021-11-26 三菱电机株式会社 Motion amount estimation device, motion amount estimation method, and motion amount estimation program
CN113761999A (en) * 2020-09-07 2021-12-07 北京京东乾石科技有限公司 Target detection method and device, electronic equipment and storage medium
CN113838140A (en) * 2021-08-16 2021-12-24 中国矿业大学(北京) Monocular video pedestrian three-dimensional positioning method based on three-dimensional map assistance
CN113947639A (en) * 2021-10-27 2022-01-18 北京斯年智驾科技有限公司 Self-adaptive online estimation calibration system and method based on multi-radar-point cloud line characteristics
CN114820953A (en) * 2022-06-29 2022-07-29 深圳市镭神智能系统有限公司 Data processing method, device, equipment and storage medium
WO2023028774A1 (en) * 2021-08-30 2023-03-09 华为技术有限公司 Lidar calibration method and apparatus, and storage medium
CN116839499A (en) * 2022-11-03 2023-10-03 上海点莘技术有限公司 Large-visual-field micro-size 2D and 3D measurement calibration method
CN117008122A (en) * 2023-08-04 2023-11-07 江苏苏港智能装备产业创新中心有限公司 Method and system for positioning surrounding objects of engineering mechanical equipment based on multi-radar fusion
CN116839499B (en) * 2022-11-03 2024-04-30 上海点莘技术有限公司 Large-visual-field micro-size 2D and 3D measurement calibration method

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101581575A (en) * 2009-06-19 2009-11-18 南昌航空大学 Three-dimensional rebuilding method based on laser and camera data fusion
CN101975951A (en) * 2010-06-09 2011-02-16 北京理工大学 Field environment barrier detection method fusing distance and image information
CN102944224A (en) * 2012-11-09 2013-02-27 大连理工大学 Automatic environmental perception system for remotely piloted vehicle and work method for automatic environmental perception system
CN103226833A (en) * 2013-05-08 2013-07-31 清华大学 Point cloud data partitioning method based on three-dimensional laser radar
CN103455144A (en) * 2013-08-22 2013-12-18 深圳先进技术研究院 Vehicle-mounted man-machine interaction system and method
CN104143194A (en) * 2014-08-20 2014-11-12 清华大学 Point cloud partition method and device
CN106407947A (en) * 2016-09-29 2017-02-15 百度在线网络技术(北京)有限公司 Target object recognition method and device applied to unmanned vehicle
CN106530380A (en) * 2016-09-20 2017-03-22 长安大学 Ground point cloud segmentation method based on three-dimensional laser radar
CN107192994A (en) * 2016-03-15 2017-09-22 山东理工大学 Multi-line laser radar mass cloud data is quickly effectively extracted and vehicle, lane line characteristic recognition method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SUN Pengpeng et al.: "Robust detection algorithm for urban road boundaries based on 3D laser radar", Journal of Zhejiang University (Engineering Science) *

Cited By (80)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109978954A (en) * 2019-01-30 2019-07-05 杭州飞步科技有限公司 The method and apparatus of radar and camera combined calibrating based on cabinet
WO2020168464A1 (en) * 2019-02-19 2020-08-27 SZ DJI Technology Co., Ltd. Local sensing based autonomous navigation, and associated systems and methods
CN109948661A (en) * 2019-02-27 2019-06-28 江苏大学 A kind of 3D vehicle checking method based on Multi-sensor Fusion
CN109948661B (en) * 2019-02-27 2023-04-07 江苏大学 3D vehicle detection method based on multi-sensor fusion
CN111626288B (en) * 2019-02-28 2023-12-01 深圳市速腾聚创科技有限公司 Data processing method, device, computer equipment and storage medium
CN111626288A (en) * 2019-02-28 2020-09-04 深圳市速腾聚创科技有限公司 Data processing method, data processing device, computer equipment and storage medium
CN110018470A (en) * 2019-03-01 2019-07-16 北京纵目安驰智能科技有限公司 Based on example mask method, model, terminal and the storage medium merged before multisensor
CN110033457A (en) * 2019-03-11 2019-07-19 北京理工大学 A kind of target point cloud dividing method
CN109978955A (en) * 2019-03-11 2019-07-05 武汉环宇智行科技有限公司 A kind of efficient mask method for combining laser point cloud and image
CN109978955B (en) * 2019-03-11 2021-03-19 武汉环宇智行科技有限公司 Efficient marking method combining laser point cloud and image
CN110033457B (en) * 2019-03-11 2021-11-30 北京理工大学 Target point cloud segmentation method
CN109934228A (en) * 2019-03-18 2019-06-25 上海盎维信息技术有限公司 3D point cloud processing method and processing device based on artificial intelligence
CN109949372A (en) * 2019-03-18 2019-06-28 北京智行者科技有限公司 A kind of laser radar and vision combined calibrating method
CN109934228B (en) * 2019-03-18 2023-02-10 上海盎维信息技术有限公司 3D point cloud processing method and device based on artificial intelligence
CN110008891A (en) * 2019-03-29 2019-07-12 厦门金龙旅行车有限公司 A kind of pedestrian detection localization method, device, cart-mounted computing device and storage medium
CN112543877B (en) * 2019-04-03 2022-01-11 华为技术有限公司 Positioning method and positioning device
CN112543877A (en) * 2019-04-03 2021-03-23 华为技术有限公司 Positioning method and positioning device
CN113711273A (en) * 2019-04-25 2021-11-26 三菱电机株式会社 Motion amount estimation device, motion amount estimation method, and motion amount estimation program
CN110080326A (en) * 2019-04-29 2019-08-02 北京拓疆者智能科技有限公司 A kind of discharge method, controller, excavator, electronic equipment and storage medium
CN110080326B (en) * 2019-04-29 2021-11-16 北京拓疆者智能科技有限公司 Unloading method, controller, excavator, electronic equipment and storage medium
CN111950543A (en) * 2019-05-14 2020-11-17 北京京东尚科信息技术有限公司 Target detection method and device
CN110146865A (en) * 2019-05-31 2019-08-20 阿里巴巴集团控股有限公司 Target identification method and device for radar image
CN112034432A (en) * 2019-06-03 2020-12-04 华为技术有限公司 Radar target clustering method and related device
CN110276793A (en) * 2019-06-05 2019-09-24 北京三快在线科技有限公司 A kind of method and device for demarcating three-dimension object
CN110161464B (en) * 2019-06-14 2023-03-10 成都纳雷科技有限公司 Radar multi-target clustering method and device
CN110161464A (en) * 2019-06-14 2019-08-23 成都纳雷科技有限公司 A kind of Radar Multi Target clustering method and device
CN110363158A (en) * 2019-07-17 2019-10-22 浙江大学 A kind of millimetre-wave radar neural network based cooperates with object detection and recognition method with vision
CN110363158B (en) * 2019-07-17 2021-05-25 浙江大学 Millimeter wave radar and visual cooperative target detection and identification method based on neural network
CN110503093A (en) * 2019-07-24 2019-11-26 中国航空无线电电子研究所 Area-of-interest exacting method based on disparity map DBSCAN cluster
CN110503093B (en) * 2019-07-24 2022-11-04 中国航空无线电电子研究所 Region-of-interest extraction method based on disparity map DBSCAN clustering
CN112219206A (en) * 2019-07-25 2021-01-12 北京航迹科技有限公司 System and method for determining pose
WO2021012245A1 (en) * 2019-07-25 2021-01-28 Beijing Voyager Technology Co., Ltd. Systems and methods for pose determination
CN110361717A (en) * 2019-07-31 2019-10-22 苏州玖物互通智能科技有限公司 Laser radar-camera combined calibration target and combined calibration method
CN110472553B (en) * 2019-08-12 2022-03-11 北京易航远智科技有限公司 Target tracking method, computing device and medium for fusion of image and laser point cloud
CN110472553A (en) * 2019-08-12 2019-11-19 北京易航远智科技有限公司 Target tracking method, computing device and the medium of image and laser point cloud fusion
CN110458780A (en) * 2019-08-14 2019-11-15 上海眼控科技股份有限公司 3D point cloud data de-noising method, apparatus, computer equipment and readable storage medium storing program for executing
CN110531340B (en) * 2019-08-22 2023-01-13 吴文吉 Identification processing method of laser radar point cloud data based on deep learning
CN110531340A (en) * 2019-08-22 2019-12-03 吴文吉 A kind of identifying processing method based on deep learning of laser radar point cloud data
WO2021052121A1 (en) * 2019-09-20 2021-03-25 于毅欣 Object identification method and apparatus based on laser radar and camera
CN110632617B (en) * 2019-09-29 2021-11-02 北京邮电大学 Laser radar point cloud data processing method and device
CN110632617A (en) * 2019-09-29 2019-12-31 北京邮电大学 Laser radar point cloud data processing method and device
CN111047901A (en) * 2019-11-05 2020-04-21 珠海格力电器股份有限公司 Parking management method, parking management device, storage medium and computer equipment
CN110909656B (en) * 2019-11-18 2023-10-13 中电海康集团有限公司 Pedestrian detection method and system integrating radar and camera
CN110909656A (en) * 2019-11-18 2020-03-24 中电海康集团有限公司 Pedestrian detection method and system with integration of radar and camera
WO2021097807A1 (en) * 2019-11-22 2021-05-27 深圳市大疆创新科技有限公司 Method and device for calibrating external parameters of detection device, and mobile platform
CN111077506A (en) * 2019-12-12 2020-04-28 苏州智加科技有限公司 Method, device and system for calibrating millimeter wave radar
CN112989877A (en) * 2019-12-13 2021-06-18 阿里巴巴集团控股有限公司 Method and device for labeling object in point cloud data
CN111079652B (en) * 2019-12-18 2022-05-13 北京航空航天大学 3D target detection method based on point cloud data simple coding
CN111079652A (en) * 2019-12-18 2020-04-28 北京航空航天大学 3D target detection method based on point cloud data simple coding
CN111179331B (en) * 2019-12-31 2023-09-08 智车优行科技(上海)有限公司 Depth estimation method, depth estimation device, electronic equipment and computer readable storage medium
CN111179331A (en) * 2019-12-31 2020-05-19 智车优行科技(上海)有限公司 Depth estimation method, depth estimation device, electronic equipment and computer-readable storage medium
WO2021166912A1 (en) * 2020-02-18 2021-08-26 株式会社デンソー Object detection device
CN113359148A (en) * 2020-02-20 2021-09-07 百度在线网络技术(北京)有限公司 Laser radar point cloud data processing method, device, equipment and storage medium
US11609333B2 (en) 2020-03-26 2023-03-21 Baidu Usa Llc Point cloud feature-based obstacle filter system
WO2021189375A1 (en) * 2020-03-26 2021-09-30 Baidu.Com Times Technology (Beijing) Co., Ltd. A point cloud feature-based obstacle filter system
CN111427032A (en) * 2020-04-24 2020-07-17 森思泰克河北科技有限公司 Room wall contour recognition method based on millimeter wave radar and terminal equipment
CN111638499B (en) * 2020-05-08 2024-04-09 上海交通大学 Camera-laser radar relative external parameter calibration method based on laser radar reflection intensity point characteristics
CN111638499A (en) * 2020-05-08 2020-09-08 上海交通大学 Camera-laser radar relative external reference calibration method based on laser radar reflection intensity point characteristics
CN113671458A (en) * 2020-05-13 2021-11-19 华为技术有限公司 Target object identification method and device
CN111598770B (en) * 2020-05-15 2023-09-19 汇智机器人科技(深圳)有限公司 Object detection method and device based on three-dimensional data and two-dimensional image
CN111598770A (en) * 2020-05-15 2020-08-28 弗徕威智能机器人科技(上海)有限公司 Object detection method and device based on three-dimensional data and two-dimensional image
CN112270694A (en) * 2020-07-07 2021-01-26 中国人民解放军61540部队 Method for detecting urban environment dynamic target based on laser radar scanning pattern
CN111736114A (en) * 2020-08-21 2020-10-02 武汉煜炜光学科技有限公司 Method for improving data transmission speed of laser radar and laser radar
CN111964673A (en) * 2020-08-25 2020-11-20 一汽解放汽车有限公司 Unmanned vehicle positioning system
CN113761999B (en) * 2020-09-07 2024-03-05 北京京东乾石科技有限公司 Target detection method and device, electronic equipment and storage medium
CN113761999A (en) * 2020-09-07 2021-12-07 北京京东乾石科技有限公司 Target detection method and device, electronic equipment and storage medium
CN112614189A (en) * 2020-12-09 2021-04-06 中国北方车辆研究所 Combined calibration method based on camera and 3D laser radar
CN113064179A (en) * 2021-03-22 2021-07-02 上海商汤临港智能科技有限公司 Point cloud data screening method and vehicle control method and device
CN113095324A (en) * 2021-04-28 2021-07-09 合肥工业大学 Classification and distance measurement method and system for cone barrel
CN113255560A (en) * 2021-06-09 2021-08-13 深圳朗道智通科技有限公司 Target detection system based on image and laser data under automatic driving scene
CN113436273A (en) * 2021-06-28 2021-09-24 南京冲浪智行科技有限公司 3D scene calibration method, calibration device and calibration application thereof
CN113838140B (en) * 2021-08-16 2023-07-18 中国矿业大学(北京) Monocular video pedestrian three-dimensional positioning method based on three-dimensional map assistance
CN113838140A (en) * 2021-08-16 2021-12-24 中国矿业大学(北京) Monocular video pedestrian three-dimensional positioning method based on three-dimensional map assistance
WO2023028774A1 (en) * 2021-08-30 2023-03-09 华为技术有限公司 Lidar calibration method and apparatus, and storage medium
CN113947639B (en) * 2021-10-27 2023-08-18 北京斯年智驾科技有限公司 Self-adaptive online estimation calibration system and method based on multi-radar point cloud line characteristics
CN113947639A (en) * 2021-10-27 2022-01-18 北京斯年智驾科技有限公司 Self-adaptive online estimation calibration system and method based on multi-radar point cloud line characteristics
CN114820953A (en) * 2022-06-29 2022-07-29 深圳市镭神智能系统有限公司 Data processing method, device, equipment and storage medium
CN116839499A (en) * 2022-11-03 2023-10-03 上海点莘技术有限公司 Large-visual-field micro-size 2D and 3D measurement calibration method
CN116839499B (en) * 2022-11-03 2024-04-30 上海点莘技术有限公司 Large-visual-field micro-size 2D and 3D measurement calibration method
CN117008122A (en) * 2023-08-04 2023-11-07 江苏苏港智能装备产业创新中心有限公司 Method and system for positioning surrounding objects of engineering mechanical equipment based on multi-radar fusion

Also Published As

Publication number Publication date
CN109100741B (en) 2020-11-20

Similar Documents

Publication Publication Date Title
CN109100741A (en) Object detection method based on 3D laser radar and image data
CN107341453B (en) Lane line extraction method and device
CN107067415B (en) Object localization method based on image matching
CN104574393B (en) Three-dimensional pavement crack image generation system and method
CN109444911A (en) Waterborne target detection, identification and localization method for unmanned boats based on monocular camera and laser radar information fusion
CN110415342A (en) Three-dimensional point cloud reconstruction device and method based on multi-sensor fusion
CN102032875B (en) Image-processing-based cable sheath thickness measuring method
CN106969706A (en) Workpiece detection and three-dimensional measurement system and detection method based on binocular stereo vision
CN109472831A (en) Obstacle recognition and ranging system and method for road roller construction
CN109711288A (en) Remote sensing ship detection method based on feature pyramid and distance-constrained FCN
CN102609701B (en) Remote sensing detection method based on optimal scale for high-resolution SAR (synthetic aperture radar)
CN110297232A (en) Monocular distance measuring method, device and electronic equipment based on computer vision
CN108007388A (en) High-precision online turntable angle measurement method based on machine vision
CN105335973A (en) Visual processing method for strip steel processing production line
CN104764407B (en) Precise measurement method for cable sheath thickness
CN113470090A (en) Multi-solid-state laser radar external reference calibration method based on SIFT-SHOT characteristics
CN114089329A (en) Target detection method based on fusion of long and short focus cameras and millimeter wave radar
CN109816051B (en) Hazardous chemical cargo feature point matching method and system
CN108573280B (en) Method for unmanned ship to autonomously pass through bridge
Palenichka et al. Multi-scale segmentation of forest areas and tree detection in LiDAR images by the attentive vision method
CN110298271A (en) Seawater method for detecting area based on critical point detection network and space constraint mixed model
CN116844147A (en) Pointer instrument identification and abnormal alarm method based on deep learning
CN107767366B (en) Power transmission line fitting method and device
CN107345814A (en) Mobile robot visual positioning system and localization method
Liu et al. Outdoor camera calibration method for a GPS & camera based surveillance system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231228

Address after: 100193 area a, building 12, Zhongguancun Software Park, Haidian District, Beijing

Patentee after: BEIJING WANJI TECHNOLOGY Co.,Ltd.

Address before: 710064 No. 126 central section of South Ring Road, Yanta District, Xi'an, Shaanxi

Patentee before: CHANG'AN University