CN110414577A - A deep-learning-based method for multi-target ground-object recognition in LiDAR point clouds - Google Patents


Info

Publication number
CN110414577A
CN110414577A
Authority
CN
China
Prior art keywords
point cloud
dimensional space
layer
point
value
Prior art date
Legal status
Pending
Application number
CN201910639585.4A
Other languages
Chinese (zh)
Inventor
邓建华
余坤
申睿涵
孙一鸣
周群芳
钱璨
王云
何子远
俞泉泉
常为弘
陈翔
罗凌云
魏傲寒
俞婷
肖正欣
邓力恺
王韬
杨远望
游长江
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN201910639585.4A
Publication of CN110414577A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24147Distances to closest patterns, e.g. nearest neighbour classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Abstract

The invention discloses a deep-learning-based method for recognizing multiple ground-object targets in LiDAR point clouds, relating to the field of point cloud recognition. The method comprises: performing region segmentation, feature representation, and labeling on a point cloud scene in turn to obtain point cloud data comprising several three-dimensional spaces; establishing a network model comprising an input layer, N convolutional layers, a fully connected layer, and a Softmax function, training the model on the training set of the data set to obtain the optimal model, and inputting the test set of the data set into the optimal model to obtain recognition results; and, using depth information, height differences, the spatial relationship of power towers beside power lines, and the relationships between adjacent three-dimensional spaces, locating the suspected misclassified points and reclassifying them to obtain the final recognition result. The invention solves the problems of heavy computation, difficult feature extraction, and low recognition accuracy that existing neural networks suffer from because point clouds are massive, sparse, and disordered.

Description

A deep-learning-based method for multi-target ground-object recognition in LiDAR point clouds
This research was funded by the Sichuan Science and Technology Program (No. 2019YFS0487).
Technical field
The present invention relates to the field of point cloud recognition methods, in particular to a deep-learning-based method for multi-target ground-object recognition in LiDAR point clouds.
Background technique
Ground-object recognition in three-dimensional point clouds is the process of identifying and extracting man-made and natural ground features (roads, houses, trees, power lines, towers, etc.) from cluttered, unordered point clouds, and is the foundation of downstream applications such as the "smart city". With the emergence and development of deep learning methods, features no longer need to be extracted manually; instead, network structures modeled on the neurons of the human brain extract object features automatically, overcoming drawbacks of conventional methods such as complex feature engineering and insufficient representation. Su and Maji et al. rendered a 3D shape from 12 different viewpoints, processed each view with a VGG-M convolutional neural network, then weighted and pooled all the features and fed them into a further CNN to obtain the final classification result; their experiments showed that multi-view images achieve better performance than a single-view image, but the input processing of this method is comparatively complex and cumbersome. Charles et al. combined multi-layer perceptrons with the Max-Pooling symmetric function to propose PointNet, a deep learning network for point cloud recognition, and briefly analyzed the robustness of the network, but did not further investigate the influence of deep learning on the recognition accuracy of individual targets in point cloud data. Although this approach simplifies the input considerably, it places strict demands on the data and usually requires the scene to be segmented into single targets.
Light Detection and Ranging (LiDAR) is an active remote-sensing instrument that can acquire various kinds of information about a target, such as three-dimensional coordinates, color, and echo intensity. Because its sampling density is high and the amount of point data acquired is massive, the data are also called point cloud data. Relative to the scale of the scene, the sampling points of a LiDAR cover it only very sparsely: in one data set, projecting the raw LiDAR point cloud onto the corresponding color image leaves only about 3% of the pixels with a corresponding radar point. Unlike the pixel array of an image or the voxel array of a volumetric grid, a point cloud is a set of points without any particular order. Point clouds therefore exhibit sparsity and disorder. Because point clouds are massive, sparse, and disordered, the existing neural networks described above suffer from heavy computation, difficult feature extraction, and low recognition accuracy when processing them as input; a deep-learning-based method for recognizing multiple ground-object targets in LiDAR point clouds is therefore needed to overcome the problems of existing methods.
Summary of the invention
The object of the present invention is to provide a deep-learning-based method for recognizing multiple ground-object targets in LiDAR point clouds, solving the problems of heavy computation, difficult feature extraction, and low recognition accuracy that existing neural networks face because point clouds are massive, sparse, and disordered.
The technical solution adopted by the invention is as follows:
A deep-learning-based method for recognizing multiple ground-object targets in LiDAR point clouds comprises the following steps:
Step 1: establish a data set: perform region segmentation, feature representation, and labeling on the point cloud scene in turn to obtain point cloud data comprising several three-dimensional spaces; the point cloud data comprise a training set and a test set;
Step 2: construct and train the point cloud ground-object recognition network: establish a network model comprising an input layer, N convolutional layers, a fully connected layer, and a Softmax function; input the training set into the network model and complete training to obtain the optimal model; input the test set into the optimal model to obtain recognition results;
Step 3: correct the recognition results of the point cloud ground-object recognition network: using depth information, height differences, the spatial relationship of power towers beside power lines, and the relationships between adjacent three-dimensional spaces, locate the suspected misclassified points, reclassify them, and obtain the final recognition result.
Preferably, step 1 comprises the following steps:
Step 1.1: perform region segmentation on the point cloud scene to obtain several three-dimensional spaces: subtract the per-axis minimum from every coordinate point in the scene to complete the translation, then partition each coordinate axis in units of 100 meters to form multiple 100*100*100 sub-regions, and further divide each sub-region into three-dimensional spaces of size A*A*A;
Step 1.2: perform feature representation for each three-dimensional space: divide each three-dimensional space into K*K*K small units; check whether each small unit contains point cloud data, assign the corresponding unit a value of 1 if it does and 0 otherwise, and finally obtain an input vector consisting of 0s and 1s with dimension K*K*K;
Step 1.3: label the output of each three-dimensional space statistically: according to the input vector, count the number of points of each class present in the three-dimensional space, and determine the class of the three-dimensional space by majority vote over the per-class point counts, thereby obtaining the label of the three-dimensional space.
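Steps 1.2 and 1.3 can be sketched as follows. This is a minimal illustration under assumed helper names (`occupancy_vector`, `vote_label` are not from the patent): a binary K*K*K occupancy grid is built for one A*A*A space, and the space's label is the majority class of its points.

```python
# Sketch of steps 1.2-1.3: build the K*K*K binary occupancy vector of one
# three-dimensional space and label the space by majority vote over point classes.
from collections import Counter

def occupancy_vector(points, a=10.0, k=30):
    """points: list of (x, y, z) inside one A*A*A space, coordinates in [0, a)."""
    cell = a / k                          # edge length of one small unit
    grid = [0] * (k * k * k)
    for x, y, z in points:
        i, j, l = int(x // cell), int(y // cell), int(z // cell)
        grid[i * k * k + j * k + l] = 1   # occupied unit -> 1, empty stays 0
    return grid

def vote_label(point_labels):
    """Majority vote over the per-point class labels of one space."""
    return Counter(point_labels).most_common(1)[0][0]
```

Note that ties are broken arbitrarily here; the patent resolves ties by randomly selecting one of the tied classes.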
Preferably, step 2 comprises the following steps:
Step 2.1: construct a network model comprising an input layer, convolutional layers one to N, a fully connected layer, and a Softmax function; when N is five, the input layer receives a three-dimensional space comprising K*K*K small units, the convolution kernel of the first convolutional layer is 7*7*7 with 20 channels, the kernels of the second and third convolutional layers are 5*5*5 with 20 channels, and the kernels of the fourth and fifth convolutional layers are 3*3*3 with 20 channels;
Step 2.2: establish a residual network on top of the above network model: the input of the third convolutional layer is the sum of the outputs of the first and second convolutional layers, and the sum of the outputs of the third and fourth layers serves as the input of the fifth layer;
Step 2.3: train the above network model with the back-propagation algorithm and gradient descent, adjusting the weights of the network via feedback from the cost function between the feature vector of each ground-object class and the ground truth; iterate until the cost function falls below a set threshold to obtain the network model with optimal weights, then input the test set into the optimal network model to obtain recognition results.
Preferably, inputting the training set into the network model in step 2 includes pre-processing the training set, as follows:
Judge whether the point count of each three-dimensional space exceeds a threshold M; if it does, input the space into the network model as part of the training set; otherwise filter the space out and mark its points as unclassified. The value of M ranges from 20 to 100.
Preferably, step 3 comprises the following steps:
Step 3.1: based on the recognition results, use depth information and height differences to locate power-line and tower points mixed into the ground, and mark them as unclassified;
Step 3.2: based on the recognition results, use the distance to the nearest point classified as a power line to locate tower points mixed into high vegetation, and mark them as unclassified;
Step 3.3: based on the recognition results, use the relationships between adjacent three-dimensional spaces to optimize points of the other ground-object classes, and mark the suspect points as unclassified;
Step 3.4: using the K-nearest-neighbor algorithm, identify all unclassified points, both those produced above and those filtered out of the three-dimensional spaces, and obtain the final recognition result.
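Step 3.4 can be sketched as a plain K-nearest-neighbor vote. The function name and data layout below are assumptions for illustration, not from the patent:

```python
# Sketch of step 3.4: reclassify each unclassified point by a k-nearest-neighbor
# vote over the already classified points (k = 5 by default here).
from collections import Counter

def knn_reclassify(unclassified, classified, k=5):
    """unclassified: list of (x, y, z); classified: list of ((x, y, z), label)."""
    result = []
    for p in unclassified:
        # squared Euclidean distance is enough for ranking neighbors
        ranked = sorted(classified,
                        key=lambda c: sum((a - b) ** 2 for a, b in zip(p, c[0])))
        labels = [label for _, label in ranked[:k]]
        result.append(Counter(labels).most_common(1)[0][0])
    return result
```

A real implementation would use a spatial index (e.g. a k-d tree) instead of sorting all classified points for every query.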
Preferably, A ranges from 0 to 20 m and K ranges from 0 to 50.
In conclusion by adopting the above-described technical solution, the beneficial effects of the present invention are:
1. The present invention establishes the data set by region segmentation and division into three-dimensional spaces, reducing the computational load while establishing a cascaded classification model: the three-dimensional convolutional neural network first recognizes the denser three-dimensional spaces in the data set, and a correction algorithm (such as K-nearest neighbors) then recognizes the sparse point clouds and the suspected misclassified points, overcoming the low accuracy of using the three-dimensional convolutional neural network alone;
2. Because point cloud data are disordered, sparse, and noisy, they cannot be fed into a neural network directly. After the present invention cuts the whole point cloud into three-dimensional spaces, each point gains access to the spatial information of the surrounding point cloud, yielding point cloud features; at the same time, region segmentation establishes the data set at the level of three-dimensional spaces, reducing the input dimension of the neural network and greatly reducing the computational load;
3. In the point cloud ground-object recognition network of the invention, the residual network avoids vanishing gradients; the first convolutional layer reduces dimensionality, and the kernel sizes of the later convolutional layers shrink gradually to extract fine-grained features, which helps improve recognition accuracy;
4. The present invention uses depth information, height differences, the spatial relationship of power towers beside power lines, and the relationships between adjacent three-dimensional spaces to locate the suspected misclassified points and reclassify them, obtaining the final recognition result; correcting the suspected misclassifications greatly improves recognition accuracy;
5. Screening and filtering by threshold before inputting the training set avoids samples with too few points caused by point cloud sparsity, which would otherwise lead to over-fitting and a low recognition rate, further helping to improve recognition accuracy.
Detailed description of the invention
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only certain embodiments of the present invention and are therefore not to be construed as limiting its scope; those of ordinary skill in the art can derive other related drawings from them without creative effort.
Fig. 1 is the flow chart of the method of the present invention;
Fig. 2 is the flow chart of establishing the data set of the invention;
Fig. 3 is the structure chart of the network model of the invention;
Fig. 4 is the flow chart of correcting the recognition results of the invention;
Fig. 5 is a comparison diagram of the recognition results of the invention.
Specific embodiment
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further elaborated below with reference to the drawings and embodiments. It should be appreciated that the specific embodiments described here serve only to explain the present invention, not to limit it; that is, the described embodiments are only a part of the embodiments of the present invention, not all of them. The components of the embodiments of the present invention, as generally described and illustrated in the drawings here, can be arranged and designed in a variety of different configurations.
Therefore, the following detailed description of the embodiments of the present invention provided in the drawings is not intended to limit the claimed scope of the present invention, but merely represents selected embodiments of the invention. All other embodiments obtained by those skilled in the art from the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
It should be noted that relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element qualified by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes the element.
The features and performance of the present invention are described in further detail below with reference to the embodiments.
Embodiment 1
A deep-learning-based method for recognizing multiple ground-object targets in LiDAR point clouds comprises the following steps:
A: establish the data set: perform region segmentation, feature representation, and labeling on the point cloud scene in turn to obtain point cloud data comprising several three-dimensional spaces; the point cloud data comprise a training set and a test set, as shown in Fig. 2:
A1: region segmentation: first find the minimum of the X, Y, and Z coordinates in the point cloud scene, then subtract the per-axis minima from every coordinate point to complete the translation of all points; next partition every coordinate axis in units of 100 meters, dividing the scene into multiple sub-regions of size 100*100*100; finally, for each sub-region, subdivide it further into three-dimensional spaces of size A*A*A according to the space-size parameter A. In this way the whole point cloud scene is partitioned; the split avoids the out-of-memory problem that directly inputting the point cloud of a whole large region would cause. A takes values from 0 m to 20 m: for relatively dense point clouds A can be set smaller, giving good fine-grained recognition, while for relatively sparse point clouds it can be set larger, because making the blocks too small would leave each three-dimensional space with too little information and hurt recognition. Beyond this, the physical size of each object class also needs to be considered.
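The translation and 100 m partition of step A1 can be sketched as follows; the function name and dictionary layout are illustrative assumptions, not from the patent:

```python
# Sketch of step A1: translate the cloud to the origin and bucket every point
# into a 100 m * 100 m * 100 m sub-region keyed by its integer grid index.
def segment_regions(points, region=100.0):
    """points: list of (x, y, z). Returns {(i, j, k): [translated points]}."""
    mins = [min(p[d] for p in points) for d in range(3)]
    regions = {}
    for p in points:
        t = tuple(p[d] - mins[d] for d in range(3))   # translated coordinates
        key = tuple(int(c // region) for c in t)      # sub-region index
        regions.setdefault(key, []).append(t)
    return regions
```

Each non-empty bucket corresponds to one of the per-region files the embodiment saves, and would then be subdivided into A*A*A spaces.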
A2: represent the input features of the point cloud data: to capture the point cloud features between neighboring points and local structures, divide each three-dimensional space into K*K*K small units and assign each unit 1 or 0 according to whether it contains point cloud data, obtaining an input vector of dimension K*K*K consisting of 0s and 1s. K ranges from 0 to 50.
A3: label the output of the point cloud data by direct counting: within each A*A*A three-dimensional space, count the total number of points of each class present; the class with the most points decides the class of the three-dimensional space. If several classes tie for the largest point count, decide by vote, randomly selecting one of the tied classes as the label of the three-dimensional space.
B: construct and train the point cloud ground-object recognition network: establish a network model comprising an input layer, N convolutional layers, a fully connected layer, and a Softmax function; take the above training set as input, train the model with back-propagation and gradient descent to obtain the optimal model, then input the test set into the optimal model to obtain recognition results. N is five, as shown in Fig. 3:
B1: define the model structure: the input layer is a three-dimensional space of size A*A*A. The first convolutional layer (Conv1) extracts features through 20 filters of size 7*7*7; the input matrix is zero-padded so that every dimension of the input participates in the convolution, yielding a feature-map tensor of 20 channels of size 8*8*8. The input of the second convolutional layer (Conv2) is the first layer's output after normalization and ReLU activation; it is convolved with 20 kernels of size 5*5*5*20 to obtain finer-grained local features, with stride 1 and the same zero-padding, giving 20 channels of 8*8*8 feature maps, which after normalization and ReLU become the input of the third convolutional layer. The third layer's kernel size is 5*5*5*20; like the second layer, it is followed by normalization and ReLU and produces 8*8*8 feature maps. The fourth layer's kernel size is 3*3*3*20, and the fifth layer's kernel size is 3*3*3*20. The sixth and seventh layers are the fully connected layer and the Softmax operation: the input of the fully connected layer is the output of the fifth layer, whose feature size and count are 8*8*8*20; the four-dimensional tensor is first flattened into a one-dimensional vector, a linear operation then outputs a 300-dimensional feature vector, and finally the Softmax function outputs an n-dimensional vector, where n is the number of ground-object classes in the database.
B2: establish the residual network: a residual structure is also used in the model to avoid the vanishing-gradient problem; the input of the third convolutional layer is the sum of the outputs of the first and second convolutional layers, and the sum of the outputs of the third and fourth layers serves as the input of the fifth layer, as shown in Fig. 3.
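The layer shapes and residual wiring of B1/B2 can be checked with a small shape calculation. The patent reports 8*8*8 feature maps after Conv1 on a 30*30*30 occupancy grid but does not state the stride; a stride of 3 with no padding is one parameterization that reproduces this, and is an assumption here, not taken from the patent. Layers 2-5 keep 8*8*8 via "same" padding.

```python
# Sketch of the shape flow through the five convolutional layers; the residual
# additions (out1 + out2 -> conv3, out3 + out4 -> conv5) are shape-compatible
# because every layer after Conv1 preserves the 8*8*8 feature-map size.
def conv_out(n, kernel, stride=1, pad=0):
    """Output edge length of a cubic convolution along one axis."""
    return (n + 2 * pad - kernel) // stride + 1

shapes = {}
shapes["conv1"] = conv_out(30, 7, stride=3)            # 30 -> 8 (stride assumed)
shapes["conv2"] = conv_out(shapes["conv1"], 5, pad=2)  # same padding: 8 -> 8
shapes["conv3"] = conv_out(shapes["conv2"], 5, pad=2)  # input is out1 + out2
shapes["conv4"] = conv_out(shapes["conv3"], 3, pad=1)
shapes["conv5"] = conv_out(shapes["conv4"], 3, pad=1)  # input is out3 + out4
flat = shapes["conv5"] ** 3 * 20                       # 8*8*8*20 before the FC layer
```

The equal feature-map sizes are what make the element-wise residual sums well-defined without projection shortcuts.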
B3: train the model with the back-propagation algorithm and gradient descent, adjusting the weights of the neural network via feedback from the cost function between the feature vector of each ground-object class and the ground truth; iterate until the cost function falls below the set threshold to obtain the model with optimal weights, then input the test set into the optimal model to obtain recognition results.
Inputting the training set into the network model includes pre-processing the training set, as follows:
Judge whether the point count of each three-dimensional space exceeds the threshold M; if it does, input the space into the network model as part of the training set; otherwise filter the space out and mark its points as unclassified. The value of M ranges from 20 to 100.
C: correct the recognition results output by the network: using depth information, height differences, the spatial relationship of power towers beside power lines, and the relationships between adjacent three-dimensional spaces, locate the suspected misclassified points and reclassify them with neighborhood methods to obtain the final recognition result, as shown in Fig. 4;
C1: based on the recognition results, use depth information and height differences to locate power-line and tower points mixed into the ground, and mark them as unclassified;
C2: based on the recognition results, use the distance to the nearest point classified as a power line to locate tower points mixed into high vegetation, and mark them as unclassified;
C3: based on the recognition results, use the relationships between adjacent three-dimensional spaces to optimize points of the other ground-object classes, and mark the suspect points as unclassified;
C4: based on the above results, identify all unclassified points with the K-nearest-neighbor algorithm to obtain the final recognition result.
This embodiment compares the accuracy for three-dimensional spaces of different sizes (Table 1), the influence of different three-dimensional space point-count thresholds M on accuracy (Table 2), the training time and average accuracy of different neural network structures (Table 3), and the per-class average accuracy before and after correcting the point cloud ground-object recognition (Table 4):
Table 1: accuracy for three-dimensional spaces of different sizes

Block size	Average accuracy over all classes after the CNN
5m*5m*5m 85.80%
8m*8m*8m 86.59%
10m*10m*10m 87.88%
12m*12m*12m 87.01%
15m*15m*15m 86.25%
Table 2: influence of the three-dimensional space point-count threshold M on accuracy

Point-count threshold M	Average accuracy over all classes after the CNN
20 85.17%
30 87.33%
50 87.88%
80 88.12%
100 88.35%
Table 3: training time and average accuracy over all classes for different neural network structures

Network	Average accuracy over all classes after the CNN	Training time
The network of the present invention	87.88%	12min
3D ShapeNets[1] 82.56% 18min
Voxnet[2] 83.20% 15min
Table 4: per-class average accuracy before and after correcting the point cloud ground-object recognition
As can be seen from Tables 1-4, the recognition accuracy of the method described here is higher and its recognition speed faster. Fig. 5 compares the recognition results before and after correction: Fig. 5(a) shows the result before correction and Fig. 5(b) the result after correction; different colors represent different recognized classes (recolored to meet the drawing requirements of the Patent Law), covering five classes: power lines, power towers, roads, houses, and green vegetation. In summary, the point cloud classification and recognition algorithm provided by the present invention achieves automatic, fast recognition of point cloud data, eliminating tedious, time-consuming, and laborious manual identification; it establishes the data set by region segmentation, reducing the computational load, and refines the basic recognition afterwards, first recognizing the dense three-dimensional spaces and then recognizing the sparse point clouds with neighborhood methods, greatly improving recognition accuracy.
Embodiment 2
A deep-learning-based method for recognizing multiple ground-object targets in LiDAR point clouds comprises the following steps:
Establish the data set:
Region segmentation: find the minimum and maximum of the x, y, and z coordinates in the point cloud data, and subtract the per-axis minimum from every point, so that the point cloud data lie in the range 0 to (max - min); this simplifies subsequent processing and improves program efficiency. Divide all the point cloud data, from top to bottom and from left to right, into cubes of 100m*100m*100m, the total number of regions being M*N*K (where M = int(max.x)/100+1, N = int(max.y)/100+1, K = int(max.z)/100+1); traverse all the point cloud data in turn, dividing by 100 and rounding to assign each point to its region, then traverse all the regions and save each region that contains points into its own file. Process the files obtained after region segmentation (one file per 100m*100m*100m region) in turn, and divide each region into several three-dimensional spaces of size A*A*A; experimental comparison shows the accuracy is highest when A is 10, i.e., each region is divided into 1000 three-dimensional spaces of 10m*10m*10m.
Input feature representation and output labels: since every point cloud coordinate carries 2 decimal places, to preserve precision the obtained three-dimensional spaces are divided further. Each point (x, y, z) is first reduced by the position of its three-dimensional space within the coordinate space (normalization), so that every coordinate lies between 0 and 10; the coordinates are then divided by 0.3333, so that each 10m*10m*10m three-dimensional space is in effect divided into 30*30*30 small units. The formula (the original is a drawing; this form is reconstructed from the surrounding text) is:
x' = x - 10*aa, y' = y - 10*bb, z' = z - 10*cc
where aa, bb, cc are the position of the corresponding three-dimensional space in the coordinate space.
Each three-dimensional space comprises 27000 small units; each unit is assigned the value 1 or 0 according to whether it contains an actual point cloud point, the mapping between coordinates and the vector index being given by the formula below, and the class of the whole voxel is decided by counting the number of points of each class, the largest class giving the label of the three-dimensional space. In addition, for reasons such as terrain, large variation in point cloud density, and object size, the class samples of the data set generated after voxelization can be unbalanced and differ greatly, which makes the trained model generalize poorly and prone to over-fitting; therefore, three-dimensional spaces containing too few points are excluded from the training data set by a threshold test. Thresholds of 20, 30, 50, 80, and 100 were tested, and 50 was found optimal. The index formula for coordinates and the vector is as follows:
Index=900*x+30*y+z
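A sketch of the voxelization and indexing just described (Python/NumPy; `voxelize` is an illustrative name, and the input is assumed to be already normalized into the 0-10 range of a single three-dimensional space):

```python
import numpy as np

def voxelize(space_points, space_size=10.0, grid=30):
    # One 10m*10m*10m space becomes a 30*30*30 occupancy vector (27000 cells).
    cell = space_size / grid                      # 10/30, i.e. about 0.3333 m per cell
    ijk = (np.asarray(space_points) / cell).astype(int)
    ijk = np.clip(ijk, 0, grid - 1)               # keep boundary points inside the grid
    index = 900 * ijk[:, 0] + 30 * ijk[:, 1] + ijk[:, 2]  # Index = 900*x + 30*y + z
    occupancy = np.zeros(grid ** 3, dtype=np.int8)
    occupancy[index] = 1                          # 1 if the cell holds any point, else 0
    return occupancy
```

A point at the near corner of the space maps to index 0, and a point at the far corner maps to index 26999, the last of the 27000 cells.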
Construct the point cloud object recognition network based on a residual network:
The network structure is shown in Fig. 3. The hyper-parameter values of the model are as follows. Maximum number of iterations: 250; convergence threshold: 0.0001; cost function: cross entropy; batch size: 64; initial learning rate: 0.05; optimizer: Adam; learning-rate schedule: exponential decay; learning-rate decay steps: 100. The average accuracy over all classes reached 87.88%.
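The exponential learning-rate decay listed in the hyper-parameters can be sketched as below; the initial rate (0.05) and decay step count (100) come from the text, while the decay rate of 0.9 is an assumed value not stated in the original:

```python
def exponential_decay_lr(step, initial_lr=0.05, decay_steps=100, decay_rate=0.9):
    # The learning rate is multiplied by decay_rate once every decay_steps steps.
    # decay_rate = 0.9 is an assumption; the patent text does not state it.
    return initial_lr * decay_rate ** (step // decay_steps)

# With the assumed rate: step 0 -> 0.05, step 100 -> 0.045, step 200 -> 0.0405
```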
Adjust the recognition result of the point cloud object recognition network:
To separate high-level ground objects mixed into the ground and ground points mixed into high-level objects, depth information within cuboid columns is used to find power-line points misidentified as ground. The XY side length of each column is fixed at 5 meters, and the Z axis is unbounded (similar to the method of infinitesimals). The minimum point value (minimum Z) within each cuboid is found, the difference between the Z coordinate of every point identified as a power line in that cuboid and this minimum is computed, and points whose difference is below a threshold are reset to unclassified points (the threshold is set to 10, based on the measured size and height of the objects).
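The column-based correction above can be sketched as follows (Python/NumPy; the function name and the numeric class codes for "power line" and "unclassified" are hypothetical placeholders):

```python
import numpy as np

def reset_low_powerline_points(points, labels, cell=5.0, z_threshold=10.0,
                               powerline=1, unclassified=0):
    # Partition the XY plane into 5 m columns (Z unbounded), find the lowest
    # Z in each column, and reset power-line points that sit less than
    # z_threshold above that minimum back to unclassified.
    columns = (points[:, :2] // cell).astype(int)
    out = labels.copy()
    for key in set(map(tuple, columns)):
        in_col = np.all(columns == key, axis=1)
        z_min = points[in_col, 2].min()
        too_low = in_col & (out == powerline) & (points[:, 2] - z_min < z_threshold)
        out[too_low] = unclassified
    return out
```

A "power-line" point 3 m above the column minimum is reset, while one 50 m above it is kept, matching the threshold of 10 from the text.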
For other classes of ground objects, such as houses, roads and vegetation, the optimization compares adjacent three-dimensional spaces: taking one three-dimensional space as the center, the 26 three-dimensional spaces adjacent to it are examined against the current type value; if the number of neighbors whose type value equals that of the central space is less than or equal to a threshold, all points in the central space are reset to unclassified points (many experiments found a threshold of 2 to be optimal here);
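The adjacent-space comparison can be sketched as follows (Python; `neighbor_consistency` and the label codes are illustrative, and the spaces are assumed to be arranged in a dense 3-D array holding one class label per space):

```python
import numpy as np
from itertools import product

def neighbor_consistency(space_labels, threshold=2, unclassified=0):
    # For each space, count how many of its up-to-26 neighbors share its
    # class; if that count is <= threshold, reset the space to unclassified.
    out = space_labels.copy()
    X, Y, Z = space_labels.shape
    for x, y, z in product(range(X), range(Y), range(Z)):
        agree = 0
        for dx, dy, dz in product((-1, 0, 1), repeat=3):
            if (dx, dy, dz) == (0, 0, 0):
                continue  # skip the central space itself
            nx, ny, nz = x + dx, y + dy, z + dz
            if 0 <= nx < X and 0 <= ny < Y and 0 <= nz < Z:
                agree += int(space_labels[nx, ny, nz] == space_labels[x, y, z])
        if agree <= threshold:
            out[x, y, z] = unclassified
    return out
```

In a 3*3*3 block where every space shares one class except an isolated center, only the center is reset: it agrees with none of its 26 neighbors, while every other space agrees with well over 2 of its own.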
Three-dimensional spaces whose total point count fell below the threshold during preprocessing were temporarily set aside because of possible misidentification. After the corrections above, the point cloud data still in the unclassified state is classified using the K-nearest-neighbor algorithm: for each unclassified point, the K nearest classified points around it are found, and the point is assigned the class that occurs most often among them. Based on the recognition result of the object recognition network, combined with depth information, height differences, the spatial relationship that power towers stand beside power lines, and the relationships between adjacent three-dimensional spaces, the points suspected of misclassification are found and reclassified to obtain the final recognition result, which greatly reduces misclassification and improves recognition accuracy.
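The K-nearest-neighbor reclassification can be sketched as follows (brute-force Python/NumPy; the function name and the value of k are illustrative):

```python
import numpy as np
from collections import Counter

def knn_relabel(classified_pts, classified_labels, unclassified_pts, k=5):
    # Give each unclassified point the majority label among its k nearest
    # classified neighbors (Euclidean distance, brute force).
    new_labels = []
    for p in unclassified_pts:
        dist = np.linalg.norm(classified_pts - p, axis=1)
        nearest = np.argsort(dist)[:k]
        votes = Counter(classified_labels[nearest].tolist())
        new_labels.append(votes.most_common(1)[0][0])
    return np.array(new_labels)
```

An unclassified point near a cluster of class-1 points inherits class 1; one near a class-2 cluster inherits class 2.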
The foregoing is merely a preferred embodiment of the present invention and is not intended to limit the invention; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (6)

1. A laser radar point cloud multi-target object recognition method based on deep learning, characterized by comprising the following steps:
Step 1: establish a data set: after the point cloud scene is successively subjected to region segmentation, feature representation and label marking, obtain point cloud data comprising several three-dimensional spaces, the point cloud data comprising a training set and a test set;
Step 2: construct and train a point cloud object recognition network: establish a network model comprising an input layer, N convolutional layers, a fully connected layer and a Softmax function, input the training set into the network model, complete training to obtain the optimal model, and input the test set into the optimal model to obtain the recognition result;
Step 3: adjust the recognition result of the point cloud object recognition network: according to depth information, height differences, the spatial relationship that power towers stand beside power lines, and the relationships between adjacent three-dimensional spaces, find the points suspected of misclassification and reclassify those points to obtain the final recognition result.
2. The laser radar point cloud multi-target object recognition method based on deep learning according to claim 1, characterized in that step 1 comprises the following steps:
Step 1.1: perform region segmentation on the point cloud scene to obtain several three-dimensional spaces: subtract the minimum value of each coordinate axis from each coordinate point of the point cloud scene to complete the translation, segment each coordinate axis in units of 100 meters to form multiple 100*100*100 sub-regions, and divide the sub-regions again to obtain several three-dimensional spaces of size A*A*A;
Step 1.2: perform feature representation on each three-dimensional space: divide each three-dimensional space into K*K*K small units; judge whether point cloud data exists in each small unit; if so, assign the corresponding unit the value 1, otherwise assign it the value 0, finally obtaining an input vector consisting of 0s and 1s with dimension K*K*K;
Step 1.3: mark the output of each three-dimensional space statistically: count, according to the input vector, the number of point cloud points of each type present in each three-dimensional space, and determine the type of the three-dimensional space by voting over the per-type point counts, thereby obtaining the label of the three-dimensional space.
3. The laser radar point cloud multi-target object recognition method based on deep learning according to claim 1, characterized in that step 2 comprises the following steps:
Step 2.1: construct a network model comprising an input layer, first through Nth convolutional layers, a fully connected layer and a Softmax function; when N is five, the input layer inputs the three-dimensional space, the convolution kernel size of the first convolutional layer is 7*7*7 with 20 channels, the convolution kernel sizes of the second and third convolutional layers are 5*5*5 with 20 channels, and the convolution kernel sizes of the fourth and fifth convolutional layers are 3*3*3 with 20 channels;
Step 2.2: establish a residual network based on the above network model: the input of the third convolutional layer is the sum of the output of the first layer and the output of the second layer, and the sum of the output of the third layer and the output of the fourth layer serves as the input of the fifth layer;
Step 2.3: train the above network model using the back-propagation algorithm and gradient descent, adjust the weight values of the network through the feedback of the cost function between the feature vector of each class of ground object and the actual value, and iterate until the cost function is below a set threshold to obtain the network model with optimal weight values; then input the test set into the optimal network model to obtain the recognition result.
4. The laser radar point cloud multi-target object recognition method based on deep learning according to claim 3, characterized in that inputting the training set into the network model in step 2 comprises training set preprocessing, the preprocessing steps being as follows:
judge whether the point cloud quantity of each three-dimensional space is greater than a threshold M; if so, input it into the network model as part of the training set; otherwise filter the three-dimensional space out and classify its points as unclassified points; the value of M ranges from 20 to 100.
5. The laser radar point cloud multi-target object recognition method based on deep learning according to claim 4, characterized in that step 3 comprises the following steps:
Step 3.1: based on the recognition result, use depth information and height differences to find lines and towers mixed into the ground and reclassify them as unclassified points;
Step 3.2: based on the recognition result, use the distance to the nearest line-class point to find towers mixed into high-level vegetation and reclassify them as unclassified points;
Step 3.3: based on the recognition result, use the optimized relationships between adjacent three-dimensional spaces to reclassify points of other ground-object classes as unclassified points;
Step 3.4: identify all unclassified points using the K-nearest-neighbor algorithm, based on the above unclassified points and the points filtered out of the three-dimensional spaces, to obtain the final recognition result.
6. The laser radar point cloud multi-target object recognition method based on deep learning according to claim 2, characterized in that the value of A ranges from 0 to 20 m and the value of K ranges from 0 to 50.
CN201910639585.4A 2019-07-16 2019-07-16 A kind of laser radar point cloud multiple target Objects recognition method based on deep learning Pending CN110414577A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910639585.4A CN110414577A (en) 2019-07-16 2019-07-16 A kind of laser radar point cloud multiple target Objects recognition method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910639585.4A CN110414577A (en) 2019-07-16 2019-07-16 A kind of laser radar point cloud multiple target Objects recognition method based on deep learning

Publications (1)

Publication Number Publication Date
CN110414577A true CN110414577A (en) 2019-11-05

Family

ID=68361567

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910639585.4A Pending CN110414577A (en) 2019-07-16 2019-07-16 A kind of laser radar point cloud multiple target Objects recognition method based on deep learning

Country Status (1)

Country Link
CN (1) CN110414577A (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102103202A (en) * 2010-12-01 2011-06-22 武汉大学 Semi-supervised classification method for airborne laser radar data fusing images
CN102945567A (en) * 2012-10-19 2013-02-27 深圳先进技术研究院 Method and system for classifying and reconstructing indoor scene
CN103390169A (en) * 2013-07-19 2013-11-13 武汉大学 Sorting method of vehicle-mounted laser scanning point cloud data of urban ground objects
CN103473734A (en) * 2013-09-16 2013-12-25 南京大学 Power line extracting and fitting method based on in-vehicle LiDAR data
CN104866840A (en) * 2015-06-04 2015-08-26 广东中城规划设计有限公司 Method for recognizing overhead power transmission line from airborne laser point cloud data
CN105046264A (en) * 2015-07-08 2015-11-11 西安电子科技大学 Sparse surface feature classification and labeling method based on visible light and laser radar images
CN108647607A (en) * 2018-04-28 2018-10-12 国网湖南省电力有限公司 Objects recognition method for project of transmitting and converting electricity
CN109085604A (en) * 2018-08-22 2018-12-25 上海华测导航技术股份有限公司 A kind of system and method for power-line patrolling
CN109993748A (en) * 2019-03-30 2019-07-09 华南理工大学 A kind of three-dimensional grid method for segmenting objects based on points cloud processing network


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
DANIEL MATURANA et al.: "VoxNet: A 3D Convolutional Neural Network for Real-Time Object Recognition", International Conference on Intelligent Robots and Systems *
张继贤 et al.: "Research Progress and Prospects of Point Cloud Information Extraction", 《测绘学报》 (Acta Geodaetica et Cartographica Sinica) *
董保根: "Research on Ground Object Classification Technology Based on the Fusion of Airborne LiDAR Point Clouds and Remote Sensing Images", 《中国优秀博士学位论文全文数据库 信息科技辑》 (China Doctoral Dissertations Full-text Database, Information Science and Technology series) *
赵中阳 et al.: "LiDAR Point Cloud Ground Object Classification Method Based on Multi-Scale Features and PionNet", 《激光与光电子学进展》 (Laser & Optoelectronics Progress) *

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110827302A (en) * 2019-11-14 2020-02-21 中南大学 Point cloud target extraction method and device based on depth map convolutional network
CN112825192A (en) * 2019-11-21 2021-05-21 财团法人工业技术研究院 Object identification system and method based on machine learning
CN112825192B (en) * 2019-11-21 2023-10-17 财团法人工业技术研究院 Object identification system and method based on machine learning
CN111044993A (en) * 2019-12-27 2020-04-21 歌尔股份有限公司 Laser sensor based slam map calibration method and device
CN110807461B (en) * 2020-01-08 2020-06-02 深圳市越疆科技有限公司 Target position detection method
CN110807461A (en) * 2020-01-08 2020-02-18 深圳市越疆科技有限公司 Target position detection method
CN111310765A (en) * 2020-02-14 2020-06-19 北京经纬恒润科技有限公司 Laser point cloud semantic segmentation method and device
CN111339876B (en) * 2020-02-19 2023-09-01 北京百度网讯科技有限公司 Method and device for identifying types of areas in scene
CN111339876A (en) * 2020-02-19 2020-06-26 北京百度网讯科技有限公司 Method and device for identifying types of regions in scene
CN111337898A (en) * 2020-02-19 2020-06-26 北京百度网讯科技有限公司 Laser point cloud processing method, device, equipment and storage medium
CN115136202A (en) * 2020-02-27 2022-09-30 苹果公司 Semantic annotation of point cloud clusters
CN111859772A (en) * 2020-07-07 2020-10-30 河南工程学院 Power line extraction method and system based on cloth simulation algorithm
CN111859772B (en) * 2020-07-07 2023-11-17 河南工程学院 Power line extraction method and system based on cloth simulation algorithm
CN111798397A (en) * 2020-07-08 2020-10-20 上海振华重工电气有限公司 Jitter elimination and rain and fog processing method for laser radar data
WO2022017129A1 (en) * 2020-07-22 2022-01-27 上海商汤临港智能科技有限公司 Target object detection method and apparatus, electronic device, and storage medium
CN112131947A (en) * 2020-08-21 2020-12-25 河北鼎联科技有限公司 Road indication line extraction method and device
CN112633069A (en) * 2020-11-26 2021-04-09 贝壳技术有限公司 Object detection method and device
CN112488190A (en) * 2020-11-30 2021-03-12 深圳供电局有限公司 Point cloud data classification method and system based on deep learning
CN112666553A (en) * 2020-12-16 2021-04-16 动联(山东)电子科技有限公司 Road ponding identification method and equipment based on millimeter wave radar
CN113052131A (en) * 2021-04-20 2021-06-29 深圳市商汤科技有限公司 Point cloud data processing and automatic driving vehicle control method and device
CN113219472A (en) * 2021-04-28 2021-08-06 合肥工业大学 Distance measuring system and method
CN113705655A (en) * 2021-08-24 2021-11-26 北京建筑大学 Full-automatic classification method for three-dimensional point cloud and deep neural network model
CN113705655B (en) * 2021-08-24 2023-07-18 北京建筑大学 Three-dimensional point cloud full-automatic classification method and deep neural network model
CN114037948A (en) * 2021-10-08 2022-02-11 中铁第一勘察设计院集团有限公司 Vehicle-mounted road point cloud element vectorization method and device based on migration active learning
CN114511682A (en) * 2022-04-19 2022-05-17 清华大学 Three-dimensional scene reconstruction method and device based on laser radar and electronic equipment
CN114821327A (en) * 2022-04-29 2022-07-29 北京数字绿土科技股份有限公司 Method and system for extracting and processing characteristics of power line and tower and storage medium
CN114743010A (en) * 2022-06-13 2022-07-12 山东科技大学 Ultrahigh voltage power transmission line point cloud data semantic segmentation method based on deep learning
CN115908749A (en) * 2022-11-21 2023-04-04 中国科学院空天信息创新研究院 Specific target identification method based on laser radar point cloud data
CN115578608A (en) * 2022-12-12 2023-01-06 南京慧尔视智能科技有限公司 Anti-interference classification method and device based on millimeter wave radar point cloud
CN115578608B (en) * 2022-12-12 2023-02-28 南京慧尔视智能科技有限公司 Anti-interference classification method and device based on millimeter wave radar point cloud
CN116538996A (en) * 2023-07-04 2023-08-04 云南超图地理信息有限公司 Laser radar-based topographic mapping system and method
CN116538996B (en) * 2023-07-04 2023-09-29 云南超图地理信息有限公司 Laser radar-based topographic mapping system and method

Similar Documents

Publication Publication Date Title
CN110414577A (en) A kind of laser radar point cloud multiple target Objects recognition method based on deep learning
CN109829399B (en) Vehicle-mounted road scene point cloud automatic classification method based on deep learning
CN109993220B (en) Multi-source remote sensing image classification method based on double-path attention fusion neural network
CN107016405B (en) A kind of pest image classification method based on classification prediction convolutional neural networks
CN103839261B (en) SAR image segmentation method based on decomposition evolution multi-objective optimization and FCM
CN105488770B (en) A kind of airborne laser radar point cloud filtering method of object-oriented
CN105528596B (en) Utilize the high-resolution remote sensing image automatic building extraction method and system of shade
CN112052755B (en) Semantic convolution hyperspectral image classification method based on multipath attention mechanism
CN103473786B (en) Gray level image segmentation method based on multi-objective fuzzy clustering
CN105608474B (en) Region adaptivity plant extraction method based on high resolution image
CN110263705A (en) Towards two phase of remote sensing technology field high-resolution remote sensing image change detecting method
CN102324038B (en) Plant species identification method based on digital image
CN110222767B (en) Three-dimensional point cloud classification method based on nested neural network and grid map
CN108052966A (en) Remote sensing images scene based on convolutional neural networks automatically extracts and sorting technique
CN112819830A (en) Individual tree crown segmentation method based on deep learning and airborne laser point cloud
CN107832797B (en) Multispectral image classification method based on depth fusion residual error network
CN111639587B (en) Hyperspectral image classification method based on multi-scale spectrum space convolution neural network
CN109657616A (en) A kind of remote sensing image land cover pattern automatic classification method
Guirado et al. Deep-learning convolutional neural networks for scattered shrub detection with google earth imagery
CN106228130A (en) Remote sensing image cloud detection method of optic based on fuzzy autoencoder network
CN113435254A (en) Sentinel second image-based farmland deep learning extraction method
CN114120067A (en) Object identification method, device, equipment and medium
Alburshaid et al. Palm trees detection using the integration between gis and deep learning
CN104331711B (en) SAR image recognition methods based on multiple dimensioned fuzzy mearue and semi-supervised learning
Al-Ghrairi et al. Classification of satellite images based on color features using remote sensing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20191105