CN114359089A - Three-dimensional point cloud data denoising method based on point cloud filter network - Google Patents

Three-dimensional point cloud data denoising method based on point cloud filter network

Info

Publication number
CN114359089A
Authority
CN
China
Prior art keywords
point
point cloud
points
neighborhood
feature
Prior art date
Legal status
Granted
Application number
CN202111641439.9A
Other languages
Chinese (zh)
Other versions
CN114359089B (en)
Inventor
王兴涛
朱君
范晓鹏
Current Assignee
Jiangsu Sucai Information Technology Co ltd
Original Assignee
Jiangsu Sucai Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Jiangsu Sucai Information Technology Co ltd
Priority to CN202111641439.9A
Publication of CN114359089A
Application granted
Publication of CN114359089B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/70 - Denoising; Smoothing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 - Geometric image transformations in the plane of the image
    • G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038 - Image mosaicing, e.g. composing plane images from plane sub-images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T 2200/32 - Indexing scheme for image data processing or generation, in general involving image mosaicing

Abstract

The invention discloses a three-dimensional point cloud data denoising method based on a point cloud filter network, belonging to the technical field of point cloud denoising. By combining a filtering approach with deep learning, the method uses a deep neural network in a form highly suited to the point cloud denoising task, yielding more accurate denoising results. Unlike traditional filters, the invention uses a deep neural network to adaptively learn the filter parameters; unlike neural networks that directly output the coordinates of the denoised point cloud, the method uses the deep neural network to output filter coefficients and then filters the noisy point cloud with these coefficients to obtain the denoised point coordinates. The drawback of traditional filters, in which parameters must be defined manually, is thereby overcome.

Description

Three-dimensional point cloud data denoising method based on point cloud filter network
Technical Field
The invention relates to a three-dimensional point cloud data denoising method based on a point cloud filter network, and belongs to the technical field of point cloud denoising.
Background
With the rapid development of point cloud acquisition equipment and three-dimensional reconstruction technology, three-dimensional point cloud data has been widely applied in fields such as robotics, autonomous driving and immersive interaction. However, owing to inherent limitations of the acquisition or reconstruction process, the captured point cloud is often noisy. Point cloud denoising can effectively remove this noise, so that the point cloud represents the geometric structure of the target object more accurately. Traditional point cloud denoising is usually performed with filters, which can remove point cloud noise effectively once their parameters are defined manually. Defining those parameters, however, requires considerable expertise, and the complexity of point cloud data and the unknown characteristics of the noise make the definition of filter parameters very difficult. In point cloud denoising methods based on deep neural networks, the network takes the noisy point cloud as input and directly outputs the denoised point cloud. This simplistic use of the neural network makes performance depend entirely on the learning ability of the network and places a heavy burden on training, so that a large amount of training data is required. If the filtering technique and deep learning could be combined, defining the filter parameters would no longer be difficult and the deep neural network would be used more efficiently, so that a better denoising effect could be achieved.
Disclosure of Invention
The invention aims to provide a three-dimensional point cloud data denoising method based on a point cloud filter network, which combines a deep neural network with filtering technology to design a deep learning method highly suited to point cloud denoising, so as to solve the problems in the prior art.
A three-dimensional point cloud data denoising method based on a point cloud filter network comprises the following steps:
S100, acquiring all points in the neighborhood of radius r around the current point, and randomly selecting N points from the neighborhood, the coordinates of the N points being expressed as an N × 3 matrix;
S200, acquiring the k-nearest-neighbor average distance of each of the N points, the k-nearest-neighbor average distances of the N points being expressed as an N × 1 matrix;
S300, inputting the coordinate matrix obtained in S100 and the k-nearest-neighbor average distance matrix obtained in S200 into a deep neural network to obtain the filter coefficients corresponding to the N input points, the filter coefficients being expressed as an N × 1 matrix;
S400, filtering the coordinates of the N points in S100 with the filter coefficients generated in S300 to obtain the denoised point coordinates.
Further, in S100, the process of obtaining the N neighborhood points is specifically: if there are more than N points in the neighborhood, randomly sample N of them; if there are fewer than N points in the neighborhood, resample the neighborhood points until N points are obtained.
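By way of illustration, a minimal NumPy sketch of the S100 neighborhood gathering step is given below; the function name, the random generator argument, and resampling with replacement when the neighborhood contains fewer than N points are assumptions made for this sketch, not requirements of the invention.

```python
import numpy as np

def gather_neighborhood(points, center_idx, r, N, rng=None):
    """S100 sketch: collect all points within radius r of the current point and
    return exactly N of them as an (N, 3) coordinate matrix."""
    rng = rng if rng is not None else np.random.default_rng()
    center = points[center_idx]                         # current point to be denoised, shape (3,)
    dists = np.linalg.norm(points - center, axis=1)     # distance from every cloud point to the center
    idx = np.flatnonzero(dists <= r)                    # indices of points inside the r-neighborhood
    if len(idx) >= N:
        idx = rng.choice(idx, size=N, replace=False)    # more than N points: randomly sample N of them
    else:
        idx = rng.choice(idx, size=N, replace=True)     # fewer than N points: resample up to N (assumption)
    return points[idx], idx                             # (N, 3) matrix and the chosen indices
```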
Further, in S200, the process of obtaining the average distance of k neighbors of each point is as follows:
S210, for each of the N points, querying the k points in the point cloud that are closest to it;
S220, calculating the distance from the current point to each of the k nearest points;
S230, calculating the average value of the distances obtained in S220, which is the k-nearest-neighbor average distance.
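A corresponding sketch of S200 (steps S210 to S230) follows; it uses a naive search over the whole point cloud, whereas a practical implementation would likely use a k-d tree or similar spatial index.

```python
import numpy as np

def knn_average_distance(points, neighborhood_idx, k):
    """S200 sketch: for each of the N neighborhood points, average the distances to
    its k nearest neighbors in the full point cloud, giving an (N, 1) matrix."""
    avg = np.empty((len(neighborhood_idx), 1))
    for row, i in enumerate(neighborhood_idx):
        d = np.linalg.norm(points - points[i], axis=1)  # S210/S220: distances to every point in the cloud
        d = np.sort(d)[1:k + 1]                         # drop the zero self-distance, keep the k nearest
        avg[row, 0] = d.mean()                          # S230: average of the k distances
    return avg
```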
Further, in S300, the process of obtaining the filter coefficients corresponding to the N points specifically includes the following steps:
S310, inputting the neighborhood point coordinate matrix obtained in S100 and the neighborhood point k-nearest-neighbor average distance matrix obtained in S200 into an outlier identification network to obtain a probability vector w1, each value of which is the probability that the corresponding one of the N input points is a non-outlier;
S320, inputting the neighborhood point coordinate matrix obtained in S100 and the probability vector w1 obtained in S310 into a point cloud information recovery network to obtain a coarse-grained recovery coefficient w2 and a fine-grained recovery coefficient w3, respectively;
S330, combining the probability vector w1 obtained in S310 with the coarse-grained recovery coefficient w2 and the fine-grained recovery coefficient w3 obtained in S320 to obtain the final filter coefficient.
Further, the outlier identification network of S310 and the point cloud information recovery network of S320 are each composed of convolutional layers, pooling layers and concatenation (splicing) layers.
Further, the process by which S310 obtains the probability vector w1 comprises the following steps:
S311, merging the neighborhood point coordinate matrix obtained in S100 with the k-nearest-neighbor average distance matrix obtained in S200, and inputting the merged matrix into two convolutional layers to obtain expanded point-by-point features;
S312, inputting the point-by-point features obtained in S311 into a max-pooling layer to obtain a global feature;
S313, splicing the point-by-point features obtained in S311 with the global feature obtained in S312 and inputting the result into two convolutional layers to obtain the probability vector w1, in which each value is the probability that the corresponding point is a non-outlier.
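The following PyTorch sketch shows one possible reading of the outlier identification network of S311 to S313; the channel widths, the use of shared 1×1 (point-wise) convolutions for the convolutional layers, and the sigmoid that maps the last layer's output to a probability are assumptions for illustration and are not specified by the patent.

```python
import torch
import torch.nn as nn

class OutlierIdentificationNet(nn.Module):
    """Sketch of the S311-S313 outlier identification branch. The channel widths,
    the 1x1 (shared point-wise) convolutions and the final sigmoid are assumptions."""

    def __init__(self):
        super().__init__()
        # S311: two convolutional layers expand the merged 4-D input (xyz + k-NN average distance)
        self.expand = nn.Sequential(
            nn.Conv1d(4, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
        )
        # S313: two convolutional layers map the spliced features to one score per point
        self.head = nn.Sequential(
            nn.Conv1d(128 + 128, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 1, 1), nn.Sigmoid(),
        )

    def forward(self, coords, knn_dist):
        # coords: (B, N, 3) neighborhood coordinates, knn_dist: (B, N, 1) k-NN average distances
        x = torch.cat([coords, knn_dist], dim=-1).transpose(1, 2)    # merged input, (B, 4, N)
        pointwise = self.expand(x)                                   # S311: point-by-point features, (B, 128, N)
        global_feat = pointwise.max(dim=2, keepdim=True).values      # S312: max pooling -> global feature
        global_feat = global_feat.expand(-1, -1, pointwise.size(2))  # broadcast the global feature to every point
        fused = torch.cat([pointwise, global_feat], dim=1)           # S313: splice point-wise and global features
        return self.head(fused).transpose(1, 2)                      # w1: (B, N, 1) non-outlier probabilities
```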
Further, in S330 the final filter coefficient is obtained as w = w1 × (w2 + w3).
Further, in S400, the filtering is performed in the form of weighted summation.
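Taking S330 and S400 together, a minimal sketch of how the combined coefficient w = w1 × (w2 + w3) could be applied as a weighted summation is shown below; normalising the weights so that they sum to one is an assumption of this sketch and is not stated in the patent.

```python
import numpy as np

def filter_point(coords, w1, w2, w3):
    """S330/S400 sketch: combine the coefficients as w = w1 * (w2 + w3) and apply
    them to the N neighborhood points as a weighted sum.
    coords: (N, 3) neighborhood coordinates; w1, w2, w3: (N, 1) coefficients."""
    w = w1 * (w2 + w3)                 # S330: final filter coefficient
    w = w / (w.sum() + 1e-8)           # normalisation is an assumption of this sketch
    return (w * coords).sum(axis=0)    # S400: weighted summation -> denoised (x, y, z)
```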
Further, the process by which S320 obtains the coarse-grained recovery coefficient w2 and the fine-grained recovery coefficient w3 comprises the following steps:
S321, inputting the neighborhood point coordinate matrix obtained in S100 into a three-layer convolutional network to obtain expanded point-by-point features;
S322, recording the feature corresponding to the neighborhood center point as the center feature, subtracting the center feature from the point-by-point features obtained in S321 to obtain relative features, and inputting the features obtained in S321 into a max-pooling layer to obtain a global feature;
S323, splicing the center feature, the relative features and the global feature obtained in S322, and weighting the spliced features with the probability vector w1 obtained in S310;
S324, inputting the weighted features obtained in S323 into three convolutional layers to obtain the coarse-grained recovery coefficient w2;
S325, inputting the weighted features obtained in S323 into three convolutional layers to obtain the fine-grained recovery coefficient w3.
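An illustrative PyTorch sketch of the point cloud information recovery network of S321 to S325 follows; the channel widths, the ReLU activations, the use of two separate three-layer convolutional branches for w2 and w3, and the assumption that the neighborhood center point occupies the first row of the coordinate matrix are choices made for this sketch, not details given by the patent.

```python
import torch
import torch.nn as nn

class PointCloudRecoveryNet(nn.Module):
    """Sketch of the S321-S325 point cloud information recovery branch. Channel widths,
    activations, the two separate heads and the 'center point is row 0' convention are assumptions."""

    def __init__(self):
        super().__init__()
        # S321: three-layer convolutional network expanding the (N, 3) coordinates
        self.expand = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 256, 1), nn.ReLU(),
        )

        def three_layer_head():
            # S324/S325: three convolutional layers producing one coefficient per point
            return nn.Sequential(
                nn.Conv1d(256 * 3, 128, 1), nn.ReLU(),
                nn.Conv1d(128, 64, 1), nn.ReLU(),
                nn.Conv1d(64, 1, 1),
            )

        self.coarse_head = three_layer_head()   # outputs w2
        self.fine_head = three_layer_head()     # outputs w3

    def forward(self, coords, w1, center_index=0):
        # coords: (B, N, 3) neighborhood coordinates, w1: (B, N, 1) probabilities from S310
        pointwise = self.expand(coords.transpose(1, 2))              # S321: (B, 256, N)
        center = pointwise[:, :, center_index:center_index + 1]      # S322: center feature, (B, 256, 1)
        relative = pointwise - center                                # S322: relative features
        global_feat = pointwise.max(dim=2, keepdim=True).values      # S322: global feature via max pooling
        fused = torch.cat([center.expand_as(pointwise),
                           relative,
                           global_feat.expand_as(pointwise)], dim=1) # S323: splice the three feature groups
        fused = fused * w1.transpose(1, 2)                           # S323: weight the spliced features by w1
        w2 = self.coarse_head(fused).transpose(1, 2)                 # S324: coarse-grained recovery coefficient
        w3 = self.fine_head(fused).transpose(1, 2)                   # S325: fine-grained recovery coefficient
        return w2, w3
```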
The invention has the following beneficial effects:
1. By combining filtering with deep neural network technology, the invention adaptively learns the filter coefficients and makes more efficient use of the deep neural network for point cloud denoising. Compared with existing neural-network-based point cloud denoising methods, the chamfer distance on the PCN dataset is reduced by 32.16%.
2. The method greatly improves the outlier identification precision by means of the k nearest neighbor average distance of the points, and effectively suppresses the influence of outliers on denoising by means of accurate outlier identification probability.
3. According to the invention, the coarse granularity information and the fine granularity information of the point cloud are respectively recovered in two steps through a stepped recovery mode, so that the point cloud information recovery precision is greatly improved.
Drawings
FIG. 1 is a schematic flow chart of a deep neural network according to the present invention;
FIG. 2 is a diagram of an outlier identification network architecture in accordance with the present invention;
fig. 3 is a diagram of a point cloud information recovery network structure according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1:
A three-dimensional point cloud denoising method combining filtering and deep neural network techniques comprises the following steps:
S100, acquiring all points in the neighborhood of radius r around the current point to be denoised, and randomly sampling N points from these neighborhood points, the coordinates of the N points being expressed as an N × 3 matrix;
S200, for the N points in S100, acquiring the k-nearest-neighbor average distance of each point, the k-nearest-neighbor average distances of the N points being expressed as an N × 1 matrix;
S300, inputting the coordinate matrix obtained in S100 and the k-nearest-neighbor average distance matrix obtained in S200 into a deep neural network to obtain the filter coefficients corresponding to the N points, the filter coefficients being expressed as an N × 1 matrix;
S400, filtering the coordinates of the N points in S100 with the filter coefficients generated in S300 to obtain the denoised point coordinates.
Further, in S100, if there are more than N points in the neighborhood, randomly sample N of them; if there are fewer than N points in the neighborhood, resample the neighborhood points until N points are obtained.
Further, the process of obtaining the k-nearest neighbor average distance of each point in S200 is as follows:
s210, for each point in the N points, searching k points closest to the point in the point cloud;
s220, respectively calculating the distance from the current point to k nearest points;
S230, calculating the average value of the distances obtained in S220, which is the k-nearest-neighbor average distance of the current point.
Further, the process by which S300 obtains the filter coefficients is as follows:
S310, inputting the neighborhood point coordinate matrix obtained in S100 and the neighborhood point k-nearest-neighbor average distance matrix obtained in S200 into an outlier identification network to obtain a probability vector w1, each value of which is the probability that the corresponding one of the N input points is a non-outlier;
S320, inputting the neighborhood point coordinate matrix obtained in S100 and the probability vector w1 obtained in S310 into a point cloud information recovery network to obtain a coarse-grained recovery coefficient w2 and a fine-grained recovery coefficient w3, respectively;
S330, combining the probability vector w1 obtained in S310 with the coarse-grained and fine-grained recovery coefficients w2 and w3 obtained in S320 to obtain the final filter coefficient.
Further, the filtering of S400 is performed in the form of weighted summation.
Further, the process by which S310 obtains the probability vector w1 is as follows:
s311, merging the neighborhood point coordinate matrix obtained in S100 and the k neighbor average distance matrix obtained in S200, inputting the merged matrix into two layers of convolution layers, and obtaining expanded point-by-point characteristics;
s312, inputting the point-by-point features acquired in the S311 into a maximum pooling layer to acquire global features;
S313, splicing the point-by-point features obtained in S311 with the global feature obtained in S312 and inputting the result into two convolutional layers to obtain the probability vector w1, in which each value is the probability that the corresponding point is a non-outlier.
Further, the process by which S320 obtains the coarse-grained recovery coefficient w2 and the fine-grained recovery coefficient w3 is as follows:
s321, inputting the neighborhood point coordinate matrix obtained in the S100 into a three-layer convolution network to obtain expanded point-by-point characteristics;
s322, recording the features corresponding to the neighborhood center points as center features, subtracting the center features from the point-by-point features obtained in S321 to obtain relative features, and inputting the features obtained in S321 into a maximum pooling layer to obtain global features;
S323, splicing the center feature, the relative features and the global feature obtained in S322, and weighting the spliced features with the probability vector w1 obtained in S310;
S324, inputting the weighted features obtained in S323 into three convolutional layers to obtain the coarse-grained recovery coefficient w2;
S325, inputting the weighted features obtained in S323 into three convolutional layers to obtain the fine-grained recovery coefficient w3.
Example 2:
Embodiment 2 differs from embodiment 1 in that the outlier identification network of this embodiment is used directly to identify and delete outliers in the noisy point cloud, instead of participating, in the form of a probability vector, in the feature weighting of S323 and in the calculation of the final filter coefficient.
Example 3:
Example 3 differs from example 1 in that the process of obtaining the coarse-grained recovery coefficient w2 and the fine-grained recovery coefficient w3 in S320 is as follows:
s321, inputting the neighborhood point coordinate matrix obtained in the S100 into a three-layer convolution network to obtain expanded point-by-point characteristics;
s322, recording the feature corresponding to the neighborhood center point as a center feature, subtracting the center feature from the point-by-point feature obtained in S321 to obtain a relative feature, and inputting the feature obtained in S321 to a maximum pooling layer to obtain a global feature;
s323: splicing the central feature, the relative feature and the global feature which are obtained in the step S322 to obtain a splicing feature;
s324, inputting the splicing characteristics obtained in the S323 into the three-layer convolution layer to obtain a coarse-grained recovery coefficient w2
S325, inputting the splicing characteristics obtained in the S323 into the three-layer convolution layer to obtain the fine-grained recovery coefficient w3
Example 4:
Example 4 differs from example 1 in that the process of obtaining the coarse-grained recovery coefficient w2 and the fine-grained recovery coefficient w3 in S320 is as follows:
s321, inputting the neighborhood point coordinate matrix obtained in the S100 into a three-layer convolution network to obtain expanded point-by-point characteristics;
s322, recording the characteristics corresponding to the neighborhood center points as center characteristics, and subtracting the center characteristics from the point-by-point characteristics obtained in S321 to obtain relative characteristics;
s323, splicing the central features and the relative features acquired in the S322 together, and weighting the spliced features by using the probability vector acquired in the S310;
s324, inputting the weighting characteristics obtained in the S323 into the three-layer convolution layer to obtain the coarse grain recovery coefficient w2
S325, inputting the weighting characteristics obtained in the S323 into the three-layer convolution layer to obtain the fine-grained recovery coefficient w3
Example 5:
Example 5 differs from example 1 in that the process of obtaining the coarse-grained recovery coefficient w2 and the fine-grained recovery coefficient w3 in S320 is as follows:
s321, inputting the neighborhood point coordinate matrix obtained in the S100 into a three-layer convolution network to obtain expanded point-by-point characteristics;
s322, recording the features corresponding to the neighborhood center points as center features, and inputting the features acquired in the S321 into a maximum pooling layer to acquire global features;
s323, splicing the central features and the global features acquired in the S322 together, and weighting the spliced features by using the probability vector acquired in the S310;
s324, inputting the weighting characteristics obtained in the S323 into the three-layer convolution layer to obtain the coarse grain recovery coefficient w2
S325, inputting the weighting characteristics obtained in the S323 into the three-layer convolution layer to obtain the fine-grained recovery coefficient w3
Example 6:
Example 6 differs from example 1 in that the process of obtaining the coarse-grained recovery coefficient w2 and the fine-grained recovery coefficient w3 in S320 is as follows:
s321, inputting the neighborhood point coordinate matrix obtained in the S100 into a three-layer convolution network to obtain expanded point-by-point characteristics;
s322, weighting the expanded features by using the probability vector obtained in S310;
s323, recording the feature corresponding to the neighborhood center point as a center feature, subtracting the center feature from the point-by-point feature obtained in S321 to obtain a relative feature, and inputting the feature obtained in S321 to a maximum pooling layer to obtain a global feature;
S324, splicing the center feature, the relative features and the global feature obtained in S323 to obtain spliced features;
S325, inputting the spliced features obtained in S324 into three convolutional layers to obtain the coarse-grained recovery coefficient w2;
S326, inputting the spliced features obtained in S324 into three convolutional layers to obtain the fine-grained recovery coefficient w3.
Example 7:
Embodiment 7 differs from embodiment 1 in that the process by which S300 obtains the filter coefficients is as follows:
S310, inputting the neighborhood point coordinate matrix obtained in S100 and the neighborhood point k-nearest-neighbor average distance matrix obtained in S200 into the outlier identification network to obtain a probability vector w1, each value of which is the probability that the corresponding one of the N input points is a non-outlier;
S320, inputting the neighborhood point coordinate matrix obtained in S100 and the probability vector w1 obtained in S310 into the point cloud information recovery network to obtain a point cloud recovery coefficient w2;
S330, combining the probability vector w1 obtained in S310 with the point cloud recovery coefficient w2 obtained in S320 to obtain the final filter coefficient.
The process by which S320 obtains the point cloud recovery coefficient w2 is as follows:
s321, inputting the neighborhood point coordinate matrix obtained in the S100 into a three-layer convolution network to obtain expanded point-by-point characteristics;
s322, recording the feature corresponding to the neighborhood center point as a center feature, subtracting the center feature from the point-by-point feature obtained in S321 to obtain a relative feature, and inputting the feature obtained in S321 to a maximum pooling layer to obtain a global feature;
s323, splicing the central feature, the relative feature and the global feature which are obtained in the S322, and weighting the spliced feature by using the probability vector which is obtained in the S310;
s324, inputting the weighted characteristics obtained in the S323 into the three-layer convolution layer to obtain the point cloud recovery coefficient w2
Example 8:
embodiment 8 differs from embodiment 1 in that the process of S300 obtaining the filter coefficient is:
S310, inputting the neighborhood point coordinate matrix obtained in S100 and the neighborhood point k-nearest-neighbor average distance matrix obtained in S200 into an outlier identification network to obtain a probability vector w1, each value of which is the probability that the corresponding one of the N input points is a non-outlier;
S320, inputting the neighborhood point coordinate matrix obtained in S100 and the probability vector w1 obtained in S310 into a point cloud information recovery network to obtain recovery coefficients w2, w3, w4, w5, ... of different granularities;
S330, combining the probability vector w1 obtained in S310 with the different-granularity recovery coefficients w2, w3, w4, w5, ... obtained in S320 to obtain the final filter coefficient.
The process by which S320 obtains the recovery coefficients w2, w3, w4, w5, ... of different granularities is as follows:
s321, inputting the neighborhood point coordinate matrix obtained in the S100 into a three-layer convolution network to obtain expanded point-by-point characteristics;
s322, recording the feature corresponding to the neighborhood center point as a center feature, subtracting the center feature from the point-by-point feature obtained in S321 to obtain a relative feature, and inputting the feature obtained in S321 into a maximum pooling layer to obtain a global feature;
s323, splicing the central feature, the relative feature and the global feature which are obtained in the S322, and weighting the spliced feature by using the probability vector which is obtained in the S310;
s324, inputting the weighting characteristics obtained in S323 into the three-layer convolution layer to obtain the recovery coefficient w2
S325, inputting the weighting characteristics obtained in S323 into the three-layer convolution layer to obtain the recovery coefficient w3
S326, inputting the weighting characteristics obtained in S323 into the three-layer convolution layer to obtain the recovery coefficient w4
S327, inputting the weighting characteristics obtained in S323 into the three-layer convolution layer to obtain the recovery coefficient w5
Example 9:
Embodiment 9 differs from embodiment 1 in that the final coefficient in S330 is obtained as w = w2 + w3.
Example 10:
Embodiment 10 differs from embodiment 1 in that, in the point cloud recovery network of this embodiment, the sub-modules described in S321, S322, S323, S324 and S325 can be integrated into a single deep neural network by simple transformation. In principle, the distinction between these modules is made for descriptive convenience and according to function; at training and deployment time the whole network is end-to-end, so the conceptual division into network modules is a special case of embodiment 1.
Example 11:
Embodiment 11 differs from embodiment 1 in that the outlier identification network and the point cloud information recovery network in S300 can be integrated into a single deep neural network by simple transformation. In principle, the two networks are distinguished for descriptive convenience and according to function; at training and deployment time the whole network can be end-to-end, so the conceptual division into network modules is a special case of embodiment 1.
Example 12:
Embodiment 12 differs from embodiment 1 in that the number of layers and the parameters of each neural network module described in S300 may take the layer counts and parameters of any neural network; after the layers and parameters are adjusted, the point cloud denoising method and process of the resulting deep neural network structure are the same as in embodiment 1.
Example 13:
Embodiment 13 differs from embodiment 1 in that, in addition to coordinate information, the input of the point cloud data denoising method of this embodiment includes per-point attribute information such as reflection intensity, color and normal vectors; after this input information is added, the point cloud denoising method and process are the same as in embodiment 1.
Example 1 was tested on the PCN dataset and compared with two prior-art point cloud denoising techniques, bilateral filtering and PointCleanNet; the results are shown below. The denoising performance of the proposed method is improved over both prior-art methods.
Method                         Chamfer distance
Bilateral filtering            6.29
PointCleanNet                  2.55
Point cloud filter network     1.73
Unlike existing filtering techniques, in which the parameters are defined manually, the invention uses a deep neural network to adaptively learn the filter parameters; and unlike neural networks that directly output the coordinates of the denoised point cloud, the invention uses the deep neural network to output the filter coefficients and then filters the noisy point cloud with those coefficients to obtain the denoised point coordinates.
Although the present invention has been described with reference to the preferred embodiments, it should be understood that various changes and modifications can be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (9)

1. A three-dimensional point cloud data denoising method based on a point cloud filter network is characterized by comprising the following steps:
S100, acquiring all points in the neighborhood of radius r around the current point, and randomly selecting N points from the neighborhood, the coordinates of the N points being expressed as an N × 3 matrix;
S200, acquiring the k-nearest-neighbor average distance of each of the N points, the k-nearest-neighbor average distances of the N points being expressed as an N × 1 matrix;
S300, inputting the coordinate matrix obtained in S100 and the k-nearest-neighbor average distance matrix obtained in S200 into a deep neural network to obtain the filter coefficients corresponding to the N input points, the filter coefficients being expressed as an N × 1 matrix;
S400, filtering the coordinates of the N points in S100 with the filter coefficients generated in S300 to obtain the denoised point coordinates.
2. The method for denoising three-dimensional point cloud data based on a point cloud filter network as claimed in claim 1, wherein in S100, the process of obtaining the N neighborhood points specifically comprises: if there are more than N points in the neighborhood, randomly sampling N of them; if there are fewer than N points in the neighborhood, resampling the neighborhood points until N points are obtained.
3. The method of claim 1, wherein in S200, the process of obtaining the k-nearest neighbor average distance of each point is as follows:
S210, for each of the N points, querying the k points in the point cloud that are closest to it;
S220, calculating the distance from the current point to each of the k nearest points;
S230, calculating the average value of the distances obtained in S220, which is the k-nearest-neighbor average distance.
4. The method for denoising three-dimensional point cloud data based on a point cloud filter network according to claim 1, wherein in S300, the process of obtaining the filter coefficients corresponding to the N points specifically comprises the following steps:
S310, inputting the neighborhood point coordinate matrix obtained in S100 and the neighborhood point k-nearest-neighbor average distance matrix obtained in S200 into an outlier identification network to obtain a probability vector w1, each value of which is the probability that the corresponding one of the N input points is a non-outlier;
S320, inputting the neighborhood point coordinate matrix obtained in S100 and the probability vector w1 obtained in S310 into a point cloud information recovery network to obtain a coarse-grained recovery coefficient w2 and a fine-grained recovery coefficient w3, respectively;
S330, combining the probability vector w1 obtained in S310 with the coarse-grained recovery coefficient w2 and the fine-grained recovery coefficient w3 obtained in S320 to obtain the final filter coefficient.
5. The filter coefficient obtaining algorithm of claim 4, wherein the outlier identification network of S310 and the point cloud information recovery network of S320 are each composed of convolutional layers, pooling layers and concatenation (splicing) layers.
6. The filter coefficient obtaining algorithm of claim 5, wherein the process by which S310 obtains the probability vector w1 comprises the following steps:
S311, merging the neighborhood point coordinate matrix obtained in S100 with the k-nearest-neighbor average distance matrix obtained in S200, and inputting the merged matrix into two convolutional layers to obtain expanded point-by-point features;
S312, inputting the point-by-point features obtained in S311 into a max-pooling layer to obtain a global feature;
S313, splicing the point-by-point features obtained in S311 with the global feature obtained in S312 and inputting the result into two convolutional layers to obtain the probability vector w1, in which each value is the probability that the corresponding point is a non-outlier.
7. The filter coefficient obtaining algorithm of claim 4, wherein in S330 the final filter coefficient is obtained as w = w1 × (w2 + w3).
8. The method of denoising three-dimensional point cloud data based on point cloud filter network of claim 1, wherein in S400, the filtering is performed in the form of weighted summation.
9. The method for denoising three-dimensional point cloud data based on a point cloud filter network as claimed in claim 1, wherein the process by which S320 obtains the coarse-grained and fine-grained recovery coefficients w2 and w3 comprises the following steps:
s321, inputting the neighborhood point coordinate matrix obtained in the S100 into a three-layer convolution network to obtain expanded point-by-point characteristics;
s322, recording the feature corresponding to the neighborhood center point as a center feature, subtracting the center feature from the point-by-point feature obtained in S321 to obtain a relative feature, and inputting the feature obtained in S321 to a maximum pooling layer to obtain a global feature;
S323, splicing the center feature, the relative features and the global feature obtained in S322, and weighting the spliced features with the probability vector w1 obtained in S310;
S324, inputting the weighted features obtained in S323 into three convolutional layers to obtain the coarse-grained recovery coefficient w2;
S325, inputting the weighted features obtained in S323 into three convolutional layers to obtain the fine-grained recovery coefficient w3.
CN202111641439.9A 2021-12-29 2021-12-29 Three-dimensional point cloud data denoising method based on point cloud filter network Active CN114359089B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111641439.9A CN114359089B (en) 2021-12-29 2021-12-29 Three-dimensional point cloud data denoising method based on point cloud filter network

Publications (2)

Publication Number Publication Date
CN114359089A true CN114359089A (en) 2022-04-15
CN114359089B CN114359089B (en) 2022-09-27

Family

ID=81104322

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111641439.9A Active CN114359089B (en) 2021-12-29 2021-12-29 Three-dimensional point cloud data denoising method based on point cloud filter network

Country Status (1)

Country Link
CN (1) CN114359089B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110349094A (en) * 2019-06-12 2019-10-18 西安工程大学 It is peeled off based on statistics and the 3D point cloud denoising method of adaptive bilateral mixed filtering
CN111028327A (en) * 2019-12-10 2020-04-17 深圳先进技术研究院 Three-dimensional point cloud processing method, device and equipment
CN111986115A (en) * 2020-08-22 2020-11-24 王程 Accurate elimination method for laser point cloud noise and redundant data
CN112101278A (en) * 2020-09-25 2020-12-18 湖南盛鼎科技发展有限责任公司 Hotel point cloud classification method based on k nearest neighbor feature extraction and deep learning
CN112990010A (en) * 2021-03-15 2021-06-18 深圳大学 Point cloud data processing method and device, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
R. QI CHARLES: "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation", IEEE *

Also Published As

Publication number Publication date
CN114359089B (en) 2022-09-27

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant