CN111798397A - Jitter elimination and rain and fog processing method for laser radar data - Google Patents


Info

Publication number
CN111798397A
Authority
CN
China
Prior art keywords
point cloud
neural network
deep learning
convolutional neural
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010649942.8A
Other languages
Chinese (zh)
Inventor
杨育青
江灏
钱磊
刘玉强
陆冬明
李佩龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Zpmc Electric Co ltd
Shanghai Zhenghua Heavy Industries Co Ltd
Original Assignee
Shanghai Zpmc Electric Co ltd
Shanghai Zhenghua Heavy Industries Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Zpmc Electric Co ltd and Shanghai Zhenghua Heavy Industries Co Ltd
Priority to CN202010649942.8A
Publication of CN111798397A

Classifications

    • G06T5/73
    • G06T5/77
    • G06N3/045 Combinations of networks
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G06T7/10 Segmentation; Edge detection
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G06T2207/10044 Radar image
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30192 Weather; Meteorology

Abstract

The invention discloses a method for jitter elimination and rain and fog processing of laser radar data, which comprises the following steps: 1) inputting point cloud image data of a laser radar; 2) correcting the point cloud image data; 3) preprocessing the point cloud image data to obtain a point cloud data set; 4) inputting the point cloud data set into a deep learning convolutional neural network for model training; 5) optimizing a loss function; 6) verifying whether the accuracy M of target identification is greater than 95%: if so, the rain and fog processing is finished; otherwise, returning to step 5) until M is greater than 95%. The method corrects the collected point cloud data to eliminate jitter, and processes point cloud images acquired in rainy and foggy weather with a deep learning convolutional neural network, thereby improving the accuracy with which the laser radar identifies target objects in rain and fog.

Description

Jitter elimination and rain and fog processing method for laser radar data
Technical Field
The invention relates to laser radar image processing technology, and in particular to a jitter elimination and rain and fog processing method for laser radar data.
Background
With the development of science and technology and the growth of computing power, distance sensors such as laser radar are increasingly applied to obstacle avoidance and target identification for robots. A laser radar can still obtain point cloud images under severe conditions such as fog and rain. However, because of its low scanning frequency, a laser radar mounted on a moving platform jitters during motion and is affected by the weather, so the acquired point cloud images are sparse and incomplete, which reduces the accuracy of target identification and classification.
Existing methods that process point cloud image data to eliminate jitter and rain or fog mainly extract and calibrate local image features manually. Such methods depend on specialized knowledge and personal experience, consume manpower, extract only a limited set of features, and remove rain and fog from the images poorly, which in turn lowers the accuracy of target identification.
Therefore, how to correct the point cloud image data obtained by the laser radar and automatically extract features from the point cloud images, so as to improve the identification precision of target objects, is a problem that urgently needs to be solved.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a method for eliminating jitter and processing rain and fog of laser radar data.
In order to achieve the purpose, the invention adopts the following technical scheme:
a jitter elimination and rain and fog processing method for laser radar data comprises the following steps:
1) inputting point cloud image data of a laser radar;
2) correcting the point cloud image data;
3) preprocessing the point cloud image data to obtain a point cloud data set;
4) inputting the point cloud data set into a deep learning convolution neural network for model training;
5) optimizing a loss function;
6) verifying whether the accuracy M of target identification is greater than 95%: if so, the rain and fog processing is finished; otherwise, returning to step 5) until M is greater than 95%.
Preferably, the point cloud image data comprises a rain map and a fog map of the three-dimensional point cloud obtained by the laser radar.
Preferably, in the step 3), the preprocessing includes denoising of the point cloud image, simplification of the point cloud image, registration of the point cloud image, and hole filling of the point cloud image.
Preferably, in the step 4), the point cloud data set is divided into a training set and a testing set;
inputting the training set into the deep learning convolutional neural network for model training, so as to adjust parameters of the model training in the deep learning convolutional neural network;
the test set is used for observing model training in the deep learning convolutional neural network, and judging whether parameters and initialization values of the model training in the deep learning convolutional neural network need to be adjusted or not according to the recognition accuracy of the target objects on the rain map and the fog map.
Preferably, in the step 4), the model training in the deep learning convolutional neural network includes a forward learning stage and a back propagation stage.
Preferably, in the forward learning stage, the input data is propagated from front to back through each layer: the point cloud data set is input into the deep learning convolutional neural network, the output value of the previous layer serves as the input value of the current layer, each layer processes its input and passes the result to the next layer, and finally the output result is obtained. The relationship between the input x^(l-1) of layer l (i.e. the output of the previous layer) and the output x^l of layer l is:

x^l = f(w^l · x^(l-1) + b^l)

where l is the layer index, w^l is the weight, b^l is the bias, and f is the activation function.
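The layer-by-layer forward pass just described can be sketched numerically as follows; the layer sizes, random weights, and ReLU activation are illustrative choices, not the patent's network:

```python
import numpy as np

def relu(x):
    # activation function f
    return np.maximum(0.0, x)

def forward_layer(x_prev, w, b):
    """One layer of the forward learning stage: x^l = f(w^l x^(l-1) + b^l)."""
    return relu(w @ x_prev + b)

# Toy 2-layer forward pass with illustrative random weights.
rng = np.random.default_rng(0)
x = rng.normal(size=4)                        # input x^0 (e.g. flattened point features)
layers = [(rng.normal(size=(5, 4)) * 0.1, np.zeros(5)),
          (rng.normal(size=(3, 5)) * 0.1, np.zeros(3))]
for w, b in layers:
    x = forward_layer(x, w, b)                # output of layer l feeds layer l+1
output = x                                    # final output result
```
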
Preferably, in the back propagation stage, after the forward learning stage is completed each time, an error function is used to calculate an error between an expected value and an actual value, then the deep learning convolutional neural network transfers the error layer by layer to a previous layer, and the neural unit in the previous layer updates its weight according to the error.
Preferably, the back propagation stage includes the following two steps:
1) solving the difference between the output value and the target value of the deep learning convolution neural network obtained in the forward learning stage, and solving the loss value by using an error function;
2) and performing back propagation on the deep learning convolutional neural network by using a gradient descent algorithm, and updating the weight value of the neuron parameter in the deep learning convolutional neural network.
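The two back-propagation steps above — computing the loss from the difference between output and target, then updating the weights by gradient descent — can be sketched for a single linear layer with a squared-error function (the error function here is an assumption for illustration):

```python
import numpy as np

# Step 1: loss from the difference between output value and target value.
def loss(w, x, y):
    return 0.5 * np.sum((w @ x - y) ** 2)

# Step 2: gradient descent update of the layer's weight parameters.
def backprop_step(w, x, y, lr=0.1):
    err = w @ x - y                  # output minus target
    grad = np.outer(err, x)          # dL/dw for the linear layer
    return w - lr * grad             # gradient descent update

x = np.array([1.0, 2.0])
y = np.array([1.0])
w = np.zeros((1, 2))
losses = [loss(w, x, y)]
for _ in range(50):
    w = backprop_step(w, x, y)
    losses.append(loss(w, x, y))
```

Repeating the two steps drives the loss value toward zero.
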
Preferably, in the step 4), the loss value of the model training is calculated through the forward learning stage and the back propagation stage of the model training in the deep learning convolutional neural network. For a data set D, the loss function is:

L(W) = (1/|D|) · Σ_{i=1}^{|D|} f_W(X^(i)) + λ · r(W)

where f_W(X^(i)) is the loss of the network (built from the layer-wise convolutions and the activation function f) on sample X^(i), and r(W) is the regularization term.
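A small numerical sketch of a loss of this form — an averaged data term plus a weighted regularization term r(W) — using a linear model as a stand-in for the convolutional network (the model and the L2 regularizer are illustrative assumptions):

```python
import numpy as np

def data_loss(w, X, Y):
    # f_W(X^(i)): per-sample squared error of a linear model (stand-in for the CNN)
    preds = X @ w
    return 0.5 * np.mean((preds - Y) ** 2)

def total_loss(w, X, Y, lam=0.01):
    # averaged data term over the set D plus the regularization term r(W)
    r = 0.5 * np.sum(w ** 2)          # L2 regularization, discouraging overfitting
    return data_loss(w, X, Y) + lam * r

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 3))
Y = X @ np.array([1.0, -2.0, 0.5])    # targets generated by a known weight vector
w0 = np.zeros(3)
L0 = total_loss(w0, X, Y)             # loss of the untrained model
```
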
Preferably, in the step 5), optimizing the loss function means adjusting the parameters of the model training in the deep learning convolutional neural network until the value of the loss function reaches a relative minimum or a preset threshold.
The jitter elimination and rain and fog processing method for laser radar data provided by the invention has the following beneficial effects:
1) the method changes the density distribution of the point cloud image data points by converting the point cloud data values between coordinate systems and applying a point cloud data correction algorithm, so that the data reflect the real environment more accurately and provide a sound basis for the subsequent rain and fog processing of the point cloud images;
2) the method denoises the acquired data with a spatial grid algorithm, simplifies the point cloud with a nearest-neighbour clustering segmentation algorithm based on maximum grid density, registers the point cloud data with a registration algorithm based on geometric attributes and improved iterative closest point (ICP), and repairs holes with a repair algorithm based on the Poisson equation; removing the large amount of invalid redundant data contained in the point cloud improves the learning efficiency of the deep learning convolutional neural network and saves training time;
3) the deep learning convolutional neural network is utilized to automatically extract the point cloud image characteristics and share the weight of the neurons on the convolutional layer, so that the network parameters are reduced, and the training speed of the model is accelerated.
Drawings
FIG. 1 is a schematic view of the general flow of the jitter elimination and rain and fog processing method of the present invention;
FIG. 2 is a detailed flow chart of an embodiment of the jitter elimination and rain and fog processing method of the present invention.
Detailed Description
The technical scheme of the invention is further explained by combining the drawings and the embodiment.
Referring to fig. 1 to fig. 2, a method for jitter elimination and rain fog processing of lidar data provided by the present invention includes the following steps:
1) inputting point cloud image data of a laser radar;
2) the point cloud image data is corrected, so that the density distribution of the point cloud image data obtained by the laser radar is closer to the real environment and more characteristic elements of the acquired image are obtained;
the collected point cloud image data comprises a rain map and a fog map of the three-dimensional point cloud obtained by the laser radar;
3) preprocessing the point cloud image data to obtain a point cloud data set;
the preprocessing comprises denoising of the point cloud image, simplification of the point cloud image, registration of the point cloud image, and hole filling of the point cloud image;
4) inputting the point cloud data set into a deep learning Convolutional Neural Network (CNN) for model training;
through a trained deep learning Convolutional Neural Network (CNN), the accuracy rate of target object identification on a rain image and a fog image is improved, and the effect of performing rain/fog removing operation processing on the obtained rain/fog blurred image is achieved;
5) optimizing a loss function;
6) verifying whether the accuracy M of target identification is greater than 95%: if so, the rain and fog processing is finished; otherwise, returning to step 5) until M is greater than 95%.
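The control flow of steps 5) and 6) — re-optimizing the loss until the accuracy M exceeds 95% — can be sketched as below; the `evaluate` and `optimize_loss` callables are hypothetical stand-ins for the full evaluation and training procedures:

```python
def run_pipeline(evaluate, optimize_loss, max_rounds=100):
    """Step 6): keep returning to step 5) until accuracy M exceeds 95%."""
    m = evaluate()
    rounds = 0
    while m <= 0.95 and rounds < max_rounds:
        optimize_loss()           # step 5): optimize the loss function
        m = evaluate()            # step 6): re-verify accuracy M
        rounds += 1
    return m, rounds

# Toy demo: accuracy improves by one point per optimization round.
state = {"acc": 0.90}
m, rounds = run_pipeline(lambda: state["acc"],
                         lambda: state.__setitem__("acc", state["acc"] + 0.01))
```
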
In step 4), the point cloud data set is divided into a training set and a test set, and the training set is further divided into small data sets (batches) of the same size.
The deep learning Convolutional Neural Network (CNN) includes convolutional layers, activation functions, pooling layers, and fully-connected layers.
And inputting the training set into a deep learning Convolutional Neural Network (CNN) for model training, so as to adjust the parameters of the model training in the deep learning Convolutional Neural Network (CNN).
The test set is used for observing model training in the deep learning Convolutional Neural Network (CNN), and judging whether parameters and initialization values of the model training in the deep learning Convolutional Neural Network (CNN) need to be adjusted or not according to the accuracy of target object identification on a rain graph and a fog graph.
In step 4), model training in the deep learning Convolutional Neural Network (CNN) includes a forward learning phase and a back propagation phase.
In the forward learning stage, the input data is propagated from front to back through each layer: the point cloud data set is input into the deep learning Convolutional Neural Network (CNN), the output value of the previous layer serves as the input value of the current layer, each layer processes its input and passes the result to the next layer, and finally the output result is obtained. The relationship between the input x^(l-1) of layer l (i.e. the output of the previous layer) and the output x^l of layer l is:

x^l = f(w^l · x^(l-1) + b^l)

where l is the layer index, w^l is the weight, b^l is the bias, and f is the activation function.
In the back propagation stage, after each forward learning stage is completed, an error function is used for calculating the error between the expected value and the actual value, then a deep learning Convolutional Neural Network (CNN) transmits the error to the previous layer by layer, and the neural unit in the previous layer updates the weight of the neural unit according to the error.
The back propagation phase comprises the following two steps:
1) solving a difference between an output value and a target value obtained by a deep learning Convolutional Neural Network (CNN) through a forward learning stage, and solving a loss value by using an error function;
2) and (3) performing back propagation on the deep learning Convolutional Neural Network (CNN) by using a gradient descent algorithm, and updating the weight values of the neuron parameters in the deep learning convolutional neural network.
In step 4), the loss value of model training is calculated through the forward learning stage and the back propagation stage of model training in the deep learning Convolutional Neural Network (CNN). For a data set D, the loss function is:

L(W) = (1/|D|) · Σ_{i=1}^{|D|} f_W(X^(i)) + λ · r(W)

where f_W(X^(i)) is the loss of the network (built from the layer-wise convolutions and the activation function f) on sample X^(i), and r(W) is the regularization term, which prevents overfitting.
In the step 5), the loss function is optimized to adjust parameters of model training in the deep learning convolutional neural network until the value of the loss function is a relative minimum value or reaches a preset threshold value, so that the identification accuracy of the target object of the rain and fog image is improved, and the effect of rain and fog processing is achieved.
In the method for eliminating shake and treating rain fog, the collected point cloud image data set of the laser radar specifically comprises the following points:
1) the point cloud image data format produced by the laser radar cannot be used directly by a deep learning Convolutional Neural Network (CNN); the obtained data format must be converted, preferably into the HDF5 format;
2) dividing the point cloud image data with the converted format into a training set and a testing set, storing the training set and the testing set in a first folder and a second folder, and naming the training set and the testing set;
and inputting the test set into a trained deep learning Convolutional Neural Network (CNN), and verifying the accuracy of the fog image and the target object identification.
Examples
The jitter elimination and rain and fog processing method comprises: correction of the point cloud image data obtained by the laser radar, collection of a point cloud image data set, preprocessing of the point cloud images, training of the deep learning Convolutional Neural Network (CNN), and testing on the test set.
The point cloud image data of the laser radar are corrected: by converting the point cloud image data values between coordinate systems and applying a point cloud image data correction algorithm, the density distribution of the obtained point cloud image data points is changed and more image characteristic elements are obtained.
The conversion of the point cloud image data values between coordinate systems accounts for the translation and rotation generated by laser radar jitter while the laser radar is moving.
Firstly, defining a world coordinate system as a W coordinate system, defining a robot coordinate system as an R coordinate system, fixing a laser radar on a robot, defining the coordinate system of the laser radar as an L coordinate system, defining the robot coordinate system after t time as an R 'coordinate system, and defining the laser radar coordinate system after t time as an L' coordinate system; p2 is an observation point in the world coordinate system, and the coordinate transformation of the observation point is in accordance with the following relation:
P2^W = T_R^W · T_L^R · P2^L

where T_B^A denotes the homogeneous transformation from frame B to frame A.
assumed point (ρ)e,θe) The following equation is satisfied for the observation data acquired by the laser radar:
L′x=ρe·cosθe
L′y=ρe·sinθe
Based on time t, the corrected measurement at point P2 re-expresses the L′-frame coordinates in the laser radar frame at the reference time:

P2^L = T_{L′}^L · P2^{L′}

where T_{L′}^L describes the translation and rotation accumulated by the jitter over time t.
and converting all the measurement data in one frame of the laser radar to a coordinate system taking time as a standard according to the measurement data, namely finishing the correction of the frame data, and correcting the obtained point cloud image data in the same way.
A point cloud image data set is then collected. The point cloud image data format produced by the laser radar cannot be used directly by the deep learning Convolutional Neural Network (CNN), so the data format must be converted, preferably into the HDF5 format. The converted point cloud image data are divided into a training set and a test set, which are stored in a first folder and a second folder respectively and named accordingly.
And performing point cloud image data preprocessing on the training set, wherein the point cloud image data preprocessing comprises point cloud denoising, point cloud simplification, point cloud registration and point cloud hole filling.
And denoising the point cloud, namely denoising the acquired data by using a space grid algorithm.
The simplification of the point cloud is to adopt a nearest neighbor clustering segmentation algorithm based on the maximum grid density to complete the simplification of the point cloud, and the specific steps are as follows:
1) carrying out uniform gridding on the denoised point cloud image data according to the spatial distribution of the point cloud image data, and counting the point cloud density DoPC in each unit grid;
2) setting the minimum and maximum X, Y and Z values of the whole point cloud image region as Xmin, Xmax, Ymin, Ymax, Zmin and Zmax respectively, the region is divided into a U×V×W grid with cell size Δx×Δy×Δz, such that

U = ⌈(Xmax − Xmin)/Δx⌉, V = ⌈(Ymax − Ymin)/Δy⌉, W = ⌈(Zmax − Zmin)/Δz⌉
For the grids of UxV xW, the number of points in each unit grid is obtained through statistics, and the density DoPC of each unit grid is obtained;
3) setting a minimum grid density threshold min, selecting a seed grid and seed points, and recording a set of the seed points as Q;
4) selecting a grid with the maximum DoPC value from the seed grids, regarding the points in the grid as belonging to the same class, clustering the points, and classifying the points into a point set 1;
5) observing all the remaining seed points, if one point q in 1 exists for one seed point p, and the distance between p and q is smaller than a threshold value, classifying p into a point set 1, wherein p belongs to 1;
6) the grid with the maximum density among the seed points remaining in Q (those not yet assigned to point set 1) is then selected, and the procedure is repeated with the threshold min to form a new point set, until every seed point has been assigned to some point set, at which time the iteration terminates.
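Steps 1) and 2) of the simplification — gridding the cloud uniformly and counting the per-cell point density DoPC to select seed grids — can be sketched as follows (cell size and threshold values are illustrative):

```python
import numpy as np

def grid_density(points, cell=1.0):
    """Count points per unit grid cell (the DoPC statistic) for a 3-D cloud."""
    idx = np.floor(points / cell).astype(int)           # U x V x W grid indices
    cells, counts = np.unique(idx, axis=0, return_counts=True)
    return cells, counts

# Toy cloud: one dense cluster plus sparse stray points (noise / redundancy).
rng = np.random.default_rng(2)
dense = rng.uniform(0.0, 1.0, size=(50, 3))             # all fall in cell (0, 0, 0)
sparse = np.array([[5.2, 5.1, 5.3], [9.9, 0.1, 3.4]])
cloud = np.vstack([dense, sparse])

cells, counts = grid_density(cloud)
seed_min = 5                                            # minimum grid density threshold
seed_cells = cells[counts >= seed_min]                  # candidate seed grids
densest = cells[np.argmax(counts)]                      # grid with maximum DoPC
```
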
The point cloud registration adopts a registration algorithm based on geometric attributes and improved iterative closest point (ICP), with the following specific steps:
1) setting the initial values: rotation matrix R_0, translation vector t_0, iteration count k = 0;
2) selecting an initial point set A_k from the point cloud to be registered A = {x_1, x_2, …, x_m};
3) using nearest-neighbour search with a K-d tree to find B_k, i.e. the registration points in the reference point cloud B = {y_1, y_2, …, y_n} closest to the points of A_k;
4) rejecting bad registration point pairs with a Euclidean distance threshold, and solving the rotation matrix R_k and translation vector t_k of the registration point pairs with the unit quaternion method;
5) applying one coordinate transformation to the point set A_k to obtain the new point set A_{k+1} = R_k·A_k + t_k, and then repeating steps 2) to 4) until the iteration termination condition is met.
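A compact 2-D sketch of the ICP loop above. For brevity, brute-force nearest-neighbour search stands in for the K-d tree, the distance-threshold rejection is omitted, and the optimal rotation is solved by SVD rather than the unit quaternion method (both yield the same optimum):

```python
import numpy as np

def best_rigid_transform(A, B):
    """Optimal rotation/translation mapping A onto B (SVD / Kabsch solution)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(A, B, iters=20):
    """Simplified ICP: match, solve R_k and t_k, apply A_{k+1} = R_k A_k + t_k."""
    Ak = A.copy()
    for _ in range(iters):
        d = np.linalg.norm(Ak[:, None, :] - B[None, :, :], axis=2)
        Bk = B[d.argmin(axis=1)]                # closest registration points
        Rk, tk = best_rigid_transform(Ak, Bk)
        Ak = Ak @ Rk.T + tk                     # one coordinate transformation
    return Ak

# Toy check: re-align a slightly rotated and translated copy of a small cloud.
rng = np.random.default_rng(3)
B = rng.normal(size=(30, 2))
ang = 0.05
R = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
A = B @ R.T + np.array([0.05, 0.02])            # misaligned copy of B
err0 = np.linalg.norm(A - B, axis=1).mean()     # mean error before registration
aligned = icp(A, B)
err = np.linalg.norm(aligned - B, axis=1).mean()
```
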
The hole repairing of the point cloud adopts a repair algorithm based on the Poisson equation: a geometric Poisson equation is first established and the outer surface of the model is fitted; the fitted surface is then cut and stitched to the hole, completing the point cloud hole repair. The specific steps are as follows:
1) identifying holes according to the properties of the triangles at the hole boundary of the triangular mesh model;
2) establishing the Poisson equation from the acquisition information of the input model or the vertex information of the mesh model, and fitting the original surface to obtain a predicted surface;
3) subdividing and cutting the predicted surface under protection constraints to obtain a hole patch, and stitching the hole patch to the original hole model seamlessly and smoothly.
The preprocessed point cloud image data are then input into the deep learning Convolutional Neural Network (CNN) for training.
The method trains an AlexNet neural network model in the deep learning Convolutional Neural Network (CNN). The model comprises 8 layers (excluding pooling layers and local response normalization): the first 5 are convolutional layers and the last 3 are fully-connected layers, with classification performed by a final 1000-class Softmax output layer. A Local Response Normalization (LRN) layer follows the 1st and 2nd convolutional layers, and a maximum pooling layer follows the two LRN layers and the last convolutional layer. A rectified linear unit (ReLU) activation function is applied after each of the 8 layers.
The AlexNet neural network model parameters are set as the following table 1, in the table, Name is a layer Name, Type is a pooling Type, Filter size is a Filter size, Stride is a step size, Padding is a boundary Padding number, and Output size is an Output size. In the first 5 layers of the model, the 1 st, 2 nd and 5 th convolutional layers are followed by a pooling layer (the pooling rule is max pooling), the 3 rd and 4 th layers only have convolutional layers and do not have pooling layers, the last 3 layers of fully-connected layers play a role of a classifier, the features of the front layer are weighted, and the feature space is mapped to the sample mark space through linear transformation.
Table 1 AlexNet parameter settings
Training is carried out in the deep learning Convolutional Neural Network (CNN): the training set is divided into N small data sets (batches) of the same size, which are input into the network for training in a set order.
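The division of the training set into N equally sized batches can be sketched as follows; the `batch_size` of 120 follows the 120–150 range the description later recommends, and the shuffling is an assumed (though conventional) choice:

```python
import numpy as np

def make_batches(dataset, batch_size, seed=0):
    """Split a training set into equally sized batches in shuffled order.
    Samples that do not fill a final batch are dropped to keep sizes equal."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(dataset))
    n_full = len(dataset) // batch_size
    return [dataset[order[i * batch_size:(i + 1) * batch_size]]
            for i in range(n_full)]

data = np.arange(1000).reshape(250, 4)        # 250 toy samples of 4 features
batches = make_batches(data, batch_size=120)
```
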
The training phase is divided into a forward learning phase and a backward propagation phase, and the specific process is as the following table 2.
TABLE 2 model learning Algorithm
The parameters of the training model are adjusted to improve the accuracy of target object identification on the rain and fog images and achieve the rain and fog processing effect.
And testing through the test set to verify the identification accuracy of the target object.
Finally, the accuracy M of target identification is verified: if M is greater than 95%, the rain and fog processing is finished; otherwise, the loss function continues to be optimized until the accuracy requirement is met.
The main purpose of adjusting the parameters of the training model in the above steps is to drive the value of the loss function to a relative minimum or to a preset threshold, which improves the identification accuracy of target objects in rain and fog images and thereby achieves the rain and fog processing effect on the point cloud images. The specific process is as follows:
First, in the deep learning Convolutional Neural Network (CNN), the loss function of the whole network structure is non-convex; it cannot be computed exactly in closed form and can only be approached step by step with an optimization algorithm. For a data set D, the loss function is:

L(W) = (1/|D|) · Σ_{i=1}^{|D|} f_W(X^(i)) + λ · r(W)

where f_W(X^(i)) is the loss of the network on sample X^(i), built from the layer-wise convolutions X^l and the activation function f, and r(W) is the term used to prevent overfitting.
Minimizing the loss function value over the whole network structure by iterating over the entire data set at once would occupy a large amount of memory and prevent parallel computation when the data set is very large. In practice, the data set is divided into equally sized batches of size N << |D|; combined with the content of the present invention, the loss function to be optimized becomes:

L(W) ≈ (1/N) · Σ_{i=1}^{N} f_W(X^(i)) + λ · r(W)
The loss function value is minimized through the convolution outputs X^l, the choice of the batch size N, and the adjustment performed by the gradient descent algorithm in the back propagation stage.
The input data set is propagated through each layer, and the relationship between layers is given below. As the formula shows, choosing a suitable activation function changes the convolution output and hence helps minimize the loss function; because the ReLU activation function converges rapidly, ReLU is preferred:

X^l = f(w^l · X^(l-1) + b^l)

where l is the index of the convolutional layer, w represents the weight, b represents the bias term of each output feature map, and f represents the activation function.
The loss function can also be reduced by choosing a proper value of N, i.e. by adjusting the batch size: if the batch is too small, the loss function is unstable and fluctuates strongly; experiments give a batch size of about 120 to 150 as the value that minimizes the loss function.
And in the back propagation stage, a gradient descent algorithm is utilized to automatically update the neuron parameter values in the network, so that the loss function is minimum.
The gradient descent algorithm updates the weights automatically using the negative gradient −∇L(W). The weight increment V_t and the weights W are updated as follows:

V_{t+1} = μ · V_t − η · ∇L(W_t)
W_{t+1} = W_t + V_{t+1}

where η is the learning rate and μ is the momentum used in updating the weight increment V_t; adjusting the parameter μ when updating the weights W reduces the loss function value and improves the accuracy of target identification.
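A numerical sketch of this momentum update on a toy quadratic loss L(W) = ½‖W‖² (whose gradient is simply W); the η and μ values are illustrative, not the patent's settings:

```python
import numpy as np

def sgd_momentum_step(w, v, grad, lr=0.1, mu=0.5):
    """V_{t+1} = mu * V_t - eta * grad ;  W_{t+1} = W_t + V_{t+1}."""
    v = mu * v - lr * grad
    return w + v, v

# Minimize L(W) = 0.5 * ||W||^2 (gradient = W) starting from W_0 = [4, -2].
w = np.array([4.0, -2.0])
v = np.zeros_like(w)
for _ in range(200):
    w, v = sgd_momentum_step(w, v, grad=w)
final_norm = np.linalg.norm(w)                 # should approach the minimum at 0
```
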
And finally, inputting the test set into a trained deep learning Convolutional Neural Network (CNN) to verify the accuracy of target object identification.
Those skilled in the art should understand that the above embodiments merely illustrate the present invention and do not limit it; changes and modifications to the above-described embodiments fall within the scope of the claims of the present invention as long as they remain within its spirit and scope.

Claims (10)

1. A jitter elimination and rain and fog processing method for laser radar data is characterized by comprising the following steps:
1) inputting point cloud image data of a laser radar;
2) correcting the point cloud image data;
3) preprocessing the point cloud image data to obtain a point cloud data set;
4) inputting the point cloud data set into a deep learning convolution neural network for model training;
5) optimizing a loss function;
6) verifying whether the accuracy M of target identification is greater than 95%: if so, the rain and fog processing is finished; otherwise, returning to step 5) until M is greater than 95%.
2. The method of claim 1, wherein the method comprises: the point cloud image data comprises a rain map and a fog map of the three-dimensional point cloud obtained by the laser radar.
3. The method of claim 1, wherein the method comprises: in the step 3), the preprocessing comprises denoising of the point cloud image, simplification of the point cloud image, registration of the point cloud image and hole filling of the point cloud image.
4. The method of claim 2, wherein the method comprises: in the step 4), the point cloud data set is divided into a training set and a test set;
inputting the training set into the deep learning convolutional neural network for model training, so as to adjust parameters of the model training in the deep learning convolutional neural network;
the test set is used for observing model training in the deep learning convolutional neural network, and judging whether parameters and initialization values of the model training in the deep learning convolutional neural network need to be adjusted or not according to the recognition accuracy of the target objects on the rain map and the fog map.
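The training/test partition described in claim 4 can be sketched as a simple random split; the 80/20 ratio and the fixed seed are assumptions for illustration, not values stated in the patent.

```python
import random

def split_dataset(samples, train_frac=0.8, seed=0):
    """Randomly partition a point-cloud data set into training and test sets."""
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)
    cut = int(train_frac * len(samples))
    train = [samples[i] for i in idx[:cut]]
    test = [samples[i] for i in idx[cut:]]
    return train, test

data = list(range(100))          # stand-in for 100 point-cloud frames
train, test = split_dataset(data)
print(len(train), len(test))     # 80 20
```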
5. The method of claim 4, wherein the method comprises: in the step 4), the model training in the deep learning convolutional neural network comprises a forward learning stage and a back propagation stage.
6. The method of claim 5, wherein the method further comprises: in the forward learning stage, input data is transmitted from front to back through each layer. That is, the point cloud data set is input into the deep learning convolutional neural network, the output value of the previous layer serves as the input value of the current layer, each layer processes and passes its output on to the next layer, and an output result is finally obtained. The relationship between the input value x^(l-1) from the previous layer and the output value x^(l) of the current layer is shown as follows:
x^(l) = f(w^(l)·x^(l-1) + b^(l))
in the formula, l is the layer index, w is the weight, b represents a bias, and f is the activation function.
7. The method of claim 6, wherein the method comprises: in the back propagation stage, after each completion of the forward learning stage, an error function calculates the error between the expected value and the actual value; the deep learning convolutional neural network then transmits the error backwards layer by layer, and the neural units in each preceding layer update their weights according to the error.
8. The method of claim 7, wherein the method comprises: the back propagation phase comprises the following two steps:
1) solving the difference between the output value and the target value of the deep learning convolution neural network obtained in the forward learning stage, and solving the loss value by using an error function;
2) and performing back propagation on the deep learning convolutional neural network by using a gradient descent algorithm, and updating the weight value of the neuron parameter in the deep learning convolutional neural network.
9. A method of de-jittering and rain-fog processing of lidar data according to any of claims 5 to 8, wherein: in the step 4), a loss value of model training is calculated through a forward learning stage and a backward propagation stage of model training in the deep learning convolutional neural network, and for a data set D, a loss function is as follows:
L(W) = (1/|D|) Σ_{i=1}^{|D|} f_W(X^(i)) + λ·r(W)
wherein X^(l) is the convolution of the l-th layer, f is the activation function, and r(W) is the regularization term.
10. The method of laser radar data de-jittering and rain-fog processing as claimed in claim 9, wherein: in the step 5), optimizing the loss function means adjusting the parameters of model training in the deep learning convolutional neural network until the value of the loss function reaches a relative minimum or a preset threshold value.
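The control flow of claims 1 and 10 (train the model, optimize the loss, and repeat until accuracy M exceeds 95%) can be sketched as follows; ToyModel and its steadily improving accuracy are hypothetical stand-ins for the deep-learning CNN, not the patent's implementation.

```python
class ToyModel:
    """Hypothetical stand-in for the deep-learning CNN of claim 1; its
    accuracy simply improves each optimization round for illustration."""
    def __init__(self):
        self.acc = 0.90
    def train(self, dataset):
        pass                     # step 4: model training (omitted here)
    def optimize_loss(self):
        self.acc += 0.02         # step 5: pretend each round improves M
    def accuracy(self):
        return self.acc

def run_method(dataset, model, threshold=0.95):
    """Steps 4-6 of claim 1: train, then optimize the loss repeatedly
    until the recognition accuracy M exceeds the 95% threshold."""
    model.train(dataset)                  # step 4
    while model.accuracy() <= threshold:  # step 6 check
        model.optimize_loss()             # step 5 otherwise
    return model.accuracy()

print(run_method([], ToyModel()))
```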
CN202010649942.8A 2020-07-08 2020-07-08 Jitter elimination and rain and fog processing method for laser radar data Pending CN111798397A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010649942.8A CN111798397A (en) 2020-07-08 2020-07-08 Jitter elimination and rain and fog processing method for laser radar data

Publications (1)

Publication Number Publication Date
CN111798397A true CN111798397A (en) 2020-10-20

Family

ID=72809746

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112801225A (en) * 2021-04-01 2021-05-14 中国人民解放军国防科技大学 Automatic driving multi-sensor fusion sensing method and system under limit working condition

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108415032A (en) * 2018-03-05 2018-08-17 中山大学 A kind of point cloud semanteme map constructing method based on deep learning and laser radar
CN109584183A (en) * 2018-12-05 2019-04-05 吉林大学 A kind of laser radar point cloud goes distortion method and system
CN109975792A (en) * 2019-04-24 2019-07-05 福州大学 Method based on Multi-sensor Fusion correction multi-line laser radar point cloud motion distortion
EP3525000A1 (en) * 2018-02-09 2019-08-14 Bayerische Motoren Werke Aktiengesellschaft Methods and apparatuses for object detection in a scene based on lidar data and radar data of the scene
CN110221603A (en) * 2019-05-13 2019-09-10 浙江大学 A kind of long-distance barrier object detecting method based on the fusion of laser radar multiframe point cloud
CN110244321A (en) * 2019-04-22 2019-09-17 武汉理工大学 A kind of road based on three-dimensional laser radar can traffic areas detection method
CN110346808A (en) * 2019-07-15 2019-10-18 上海点积实业有限公司 A kind of Processing Method of Point-clouds and system of laser radar
CN110414577A (en) * 2019-07-16 2019-11-05 电子科技大学 A kind of laser radar point cloud multiple target Objects recognition method based on deep learning
CN111045017A (en) * 2019-12-20 2020-04-21 成都理工大学 Method for constructing transformer substation map of inspection robot by fusing laser and vision
CN111103578A (en) * 2020-01-10 2020-05-05 清华大学 Laser radar online calibration method based on deep convolutional neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU Zhongyu et al.: 《深入浅出图神经网络：GNN原理解析》 ("Graph Neural Networks Made Simple: An Analysis of GNN Principles"), China Machine Press, pages 18-22 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination