CN114758069A - 3D point cloud simplification algorithm combined with human visual perception characteristics - Google Patents
3D point cloud simplification algorithm combined with human visual perception characteristics
- Publication number
- CN114758069A (application CN202210346993.2A)
- Authority
- CN
- China
- Prior art keywords
- point
- point cloud
- characteristic
- value
- neighborhood
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06F—ELECTRIC DIGITAL DATA PROCESSING
      - G06F18/00—Pattern recognition
        - G06F18/20—Analysing
          - G06F18/24—Classification techniques
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T9/00—Image coding
        - G06T9/001—Model-based coding, e.g. wire frame
Abstract
The invention provides a 3D point cloud simplification algorithm that combines geometric features with human visual perception characteristics. It belongs to the field of 3D point cloud data processing and addresses the problem that the rapidly growing data volume of dense 3D point clouds burdens downstream processing, storage and transmission. The method establishes a one-way perceived sharpness function and a local visibility function from the geometric characteristics of the point cloud to evaluate the importance of each point, and formulates different simplification rules according to point importance to achieve hierarchical simplification of the point cloud. In addition, to improve the generality of the mixed feature evaluation model, a dynamic optimization strategy for the weight of each evaluation function is established, updating the weight values in real time under the guidance of the feature evaluation results. Experiments verify the effectiveness of the algorithm: compared with traditional point cloud simplification algorithms, the proposed algorithm preserves the overall uniformity of the data while retaining the local details of the point cloud to the greatest extent.
Description
Technical Field
The invention belongs to the field of 3D point cloud data processing, and particularly relates to a 3D point cloud simplification algorithm combined with human visual perception characteristics.
Background
The rapid development of 3D reconstruction technology has laid the foundation for acquiring 3D point clouds, and the continuous improvement in their precision and density has broadened their application to tasks such as 3D printing, online inspection and target recognition. For different application requirements, dense 3D point clouds carry varying degrees of information redundancy, and a reasonable 3D point cloud simplification technique can effectively improve the efficiency of subsequent processing, storage and transmission of 3D data; research on 3D point cloud simplification algorithms has therefore become a hot topic in the data processing field.
Existing 3D point cloud simplification algorithms fall mainly into two categories: mesh-based simplification and point-based simplification. Mesh-based simplification algorithms first build irregular meshes from the point cloud distribution and then remove redundant meshes by rule to simplify the cloud. S.-M. Hur et al. used Delaunay triangulation to remove point data and reduce the point cloud. T. K. Dey et al. proposed a method that effectively avoids oversampling of the data by controlling the user input density, improving surface fitting accuracy. Sun Feng et al. proposed an algorithm driven quantitatively by a shape approximation error metric to progressively simplify the initial intermediate mesh. Li Minglei et al. proposed a crease contour extraction method based on mesh filtering that preserves detail features. Although mesh-based simplification effectively retains the overall contour and geometric details of the point cloud, the huge computational overhead of constructing the mesh structure limits its application in practical tasks.
Point-based simplification algorithms have low computational complexity and have been the mainstream approach to point cloud simplification in recent years. When simplifying data, these algorithms evaluate the importance of every point of the cloud to decide whether it is retained. Most existing algorithms rely on a single feature evaluation index: Zang Yufu et al. proposed local surface variation combined with the distribution of adjacent salient points to extract salient points in the cloud; Wei Xuan et al. proposed a feature evaluation index based on the local entropy of normal angles; Gao Yanfeng et al. proposed a simplification algorithm combining octree coding with curvature evaluation. Because a single index mostly suits only 3D point clouds of a specific scene or distribution, the generality of such algorithms is limited. To adapt to the increasingly diverse forms of 3D point clouds, simplification algorithms based on multiple evaluation indexes have been proposed in succession: Ji Chunyang et al. proposed a multi-feature evaluation index combining vector difference, projection distance, spatial distance and curvature difference; Yang Yang et al. proposed a point cloud data segmentation method based on normal vector, angular entropy, curvature and density information; Leal et al. proposed a dictionary learning method that simplifies the cloud through normal vector coordinates, position coordinates and surface curvature. Although these algorithms adopt different feature evaluation indexes, they focus on retaining the geometrically salient regions of the point cloud model while ignoring the regions to which human vision is sensitive; moreover, the weight of each index is usually set empirically, and when the shape of the simplified point cloud changes greatly, unbalanced weights reduce the later application value of the cloud.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a 3D point cloud simplification algorithm combined with human visual perception characteristics. On the basis of a balanced retention of the overall geometric contour and the local detail contour of the 3D point cloud, the algorithm fully attends to the regions of the point cloud to which human vision is sensitive, further strengthens the local detail features of the 3D point cloud through the proposed human visual perception evaluation functions, and establishes a dynamic optimization strategy for the weight of each evaluation function to enhance the generality of the point cloud simplification model.
To solve the above technical problems, the invention adopts the following technical scheme: the 3D point cloud simplification algorithm combined with human visual perception characteristics proceeds in the following steps:
step 1) perform a K-neighborhood search on the point cloud (a sketch follows this list);
step 2) compute the one-way perceived sharpness, local visibility, curvature, average distance and projection distance of each point, compute the weight of each feature with the dynamic weight optimization formula, and obtain the mixed feature value of each point as the weighted average of the different feature values and their corresponding weights;
step 3) grade the points according to the mixed feature values and set stepwise simplification rules to down-sample each grade of the point cloud;
step 4) fuse the down-sampled data of all grades to obtain the simplified point cloud.
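As a concrete illustration of step 1), the sketch below builds the K-neighborhoods with a KD-tree. The library (scipy), the function name k_neighborhoods and the default K = 16 are assumptions for illustration only; the patent fixes neither the data structure nor the neighborhood size.

```python
# Minimal sketch of step 1: K-neighborhood search (assumed KD-tree, assumed K).
import numpy as np
from scipy.spatial import cKDTree

def k_neighborhoods(points: np.ndarray, k: int = 16) -> np.ndarray:
    """Return an (N, K) array of neighbor indices for an (N, 3) cloud."""
    tree = cKDTree(points)
    # Query k+1 neighbors: the nearest neighbor of each point is itself.
    _, idx = tree.query(points, k=k + 1)
    return idx[:, 1:]  # drop the self-match in column 0
```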
Further, the one-way perceived sharpness function in step 2) is defined as follows:

F_PSSD^V(p) = (1/K) Σ_{i=1..K} |p(o) - p_i(o)| / ||p(x, y, z) - p_i(x, y, z)||   (1)

where p is the current point, p_i are the K-neighborhood points of p, p(x, y, z) are the coordinates of the current point, p_i(x, y, z) are the coordinates of a neighborhood point, ||p(x, y, z) - p_i(x, y, z)|| is the Euclidean distance between point p and its neighbor, p(o) is the coordinate in the direction of maximum variance among x, y and z, and |p(o) - p_i(o)| is the coordinate difference between point p and its neighborhood in that direction; p(o) and p_i(o) are determined by the variances δ²[p(x)], δ²[p(y)] and δ²[p(z)] of the point cloud coordinates in the x, y and z directions: if δ²[p(x)] > δ²[p(y)] > δ²[p(z)], the value returned by p(o) is the x-direction coordinate.
Further, the local visibility function in step 2) takes the normal vector angle as input: p_j ranges over the current point and its K-neighborhood points, n_θ(p_j) is the normal vector angle of point p_j, and n̄_θ(p) is the mean of the normal vector angles of the current point p and its K-neighborhood points. The normal vector angle n_θ(p) is obtained by

n_θ(p) = (1/K) Σ_{i=1..K} arccos( n_p · n_{p_i} / (||n_p|| · ||n_{p_i}||) )

where n_p is the normal vector of the current point p and n_{p_i} are the normal vectors of the K neighborhood of p; the included angles between them reflect the steepness of the local region of the point cloud.
Further, the mixed feature evaluation model of each point in step 2) is expressed as

F(p) = w_1·F_PSSD^V(p) + w_2·F_LV^V(p) + w_3·F_C^G(p) + w_4·F_AD^G(p) + w_5·F_PD^G(p)   (6)

where F_PSSD^V(p) and F_LV^V(p) are the one-way perceived sharpness (PSSD) function and the local visibility (LV) function based on human visual perception characteristics; F_C^G(p), F_AD^G(p) and F_PD^G(p) are, from the geometric feature evaluation, the curvature, the average distance from the current point p to its K-neighborhood points, and the projection distance from p to the fitting plane of its K-neighborhood points; and w_1~w_5 are the weights of the different feature evaluation functions.

To reduce the scale differences between the feature values and preserve the sensitivity of each feature function, every feature value is normalized and the normalized feature peaks are filtered. For convenience of presentation, F_PSSD^V(p), F_LV^V(p), F_C^G(p), F_AD^G(p) and F_PD^G(p) are abbreviated as F_1(p)~F_5(p), so equation (6) simplifies to F(p) = Σ_{n=1..5} w_n·F_n(p). The weights w_1~w_5 of the different feature evaluation functions are dynamically optimized: the weights w_1~w_4 decrease monotonically with x(n), and x(n) increases monotonically with the feature value F_n(p), so that the larger F_n(p) is, the smaller the corresponding weight w_n; x(n) ranges over [0, +∞) and w_1~w_4 over (0, 0.25].
Further, the stepwise simplification rule in step 3) is as follows:
first, the mixed feature value of every point in the point cloud is computed with the mixed feature evaluation model, and the points are divided by mixed feature value into grade-I, grade-II and grade-III feature points; then each grade of the point cloud is simplified with a different down-sampling rule: all grade-I feature points are retained, grade-II feature points are simplified by hierarchical random sampling, and grade-III feature points are simplified by the cuboid-grid method.
The invention has the following advantages and positive effects: the algorithm improves the efficiency of generating 3D printing model data. When detecting the regional saliency of the point cloud, the algorithm establishes a one-way perceived sharpness function and a local visibility function based on human visual perception characteristics, and combines them with geometric feature evaluation functions to raise the sensitivity of the mixed feature evaluation model. To realize dynamic optimization of the weights of the feature evaluation functions, a real-time weight-update strategy guided by the feature evaluation results is established. The ablation experiments show that adding the one-way perceived sharpness and local visibility functions better retains the detail features of the point cloud. Compared with other simplification algorithms, the point cloud encapsulated after this simplification algorithm shows fewer holes and better-preserved local details; in terms of geometric error indexes and simplification time, the algorithm effectively improves the precision of point cloud simplification without noticeably increasing time complexity.
Drawings
FIG. 1 is a flow chart of the algorithm of the present invention.
Fig. 2 is a comparison of sensitive regions of the human eye based on different point cloud distributions: (a) a "book" point cloud distribution diagram, (b) "cone" point cloud distribution diagram, (c) "book" point cloud encapsulation effect, and (d) "cone" point cloud encapsulation effect.
FIG. 3 is a one-way perceived sharpness under different point cloud distributions: (a) visual sensitive area point cloud distribution, and (b) flat area point cloud distribution.
FIG. 4 is a schematic diagram of an angle between a current point normal vector and a neighborhood point normal vector.
Fig. 5 shows the effect of different feature evaluation functions on 3D point cloud reduction: (a1) - (a2) original point cloud distribution and encapsulation effect, (b1) - (b3) point cloud distribution, encapsulation effect and deviation distribution after geometric feature reduction, (c1) - (c3) point cloud distribution, encapsulation effect and deviation distribution after geometric feature and PSSD function reduction, and (d1) - (d3) point cloud distribution, encapsulation effect and deviation distribution after mixed feature evaluation model reduction.
Fig. 6 is a simplified result of the "elephant" point cloud: (a1) - (a2) distribution and packaging effect of original point clouds, (b1) - (b3) distribution, packaging effect and deviation distribution of the point clouds after being reduced by a DFPSA algorithm, (c1) - (c3) distribution, packaging effect and deviation distribution of the point clouds after being reduced by a bounding box method, (d1) - (d3) distribution, packaging effect and deviation distribution of the point clouds after being reduced by a normal vector method, (e1) - (e3) distribution, packaging effect and deviation distribution of the point clouds after being reduced by a curvature method, and (f1) - (f3) distribution, packaging effect and deviation distribution of the point clouds after being reduced by the algorithm.
FIG. 7 is a simplified result of the "gargoyle" point cloud: (a1) - (a2) distribution and packaging effect of original point clouds, (b1) - (b3) distribution, packaging effect and deviation distribution of the point clouds after being reduced by a DFPSA algorithm, (c1) - (c3) distribution, packaging effect and deviation distribution of the point clouds after being reduced by a bounding box method, (d1) - (d3) distribution, packaging effect and deviation distribution of the point clouds after being reduced by a normal vector method, (e1) - (e3) distribution, packaging effect and deviation distribution of the point clouds after being reduced by a curvature method, and (f1) - (f3) distribution, packaging effect and deviation distribution of the point clouds after being reduced by the algorithm.
FIG. 8 shows the geometric error and reduction time of "gargoyle" at different reduction rates: (a) maximum distance error, (b) average distance error, (c) relative volume error, and (d) reduced time.
Detailed Description
The present invention will be described in detail with reference to the following embodiments in order to make the objects, features and advantages thereof comprehensible.
The overall algorithm flow is shown in FIG. 1: step 1) performs a K-neighborhood search on the point cloud; step 2) computes the one-way perceived sharpness, local visibility, curvature, average distance and projection distance of each point, computes the weight of each feature with the dynamic weight optimization formula, and obtains the mixed feature value of each point as the weighted average of the different feature values and their corresponding weights; step 3) grades the points according to the mixed feature values and applies stepwise simplification rules to down-sample each grade of the point cloud; step 4) fuses the down-sampled data of all grades to obtain the simplified point cloud.
Visual saliency is an important characteristic of the human visual perception system; it describes the distribution of a person's attention or eye movements in a particular scene. Detecting visually salient regions is an important research field in computer vision and computer graphics. Most current visual saliency detection work targets 2D images or videos, and work on the visual saliency of 3D point clouds is scarce. To simplify 3D point clouds more effectively, the invention establishes two 3D point cloud saliency-detection functions based on human visual perception characteristics, as follows:
a. One-way perceived sharpness

Generally, human eyes are more sensitive to the sharp regions of a 3D point cloud than to its flat regions, and conventional geometric feature evaluation functions such as curvature and projection distance (PD) can identify the sharp regions. Some special "sharp" regions, however, must be analyzed together with visual perception characteristics. For the two point cloud distributions shown in FIGs. 2(a) and 2(b), human eyes are clearly more sensitive to region A than to region B, yet after the model is encapsulated, the curvature and projection distance at point p in region A are far smaller than at point q in region B, as shown in FIGs. 2(c) and 2(d). A K-neighborhood search shows that the position differences between point p and its neighbors concentrate in a single direction, whereas those between point q and its neighbors disperse over several directions. Based on this analysis, a single-direction perceived sharpness (PSSD) function based on human visual perception characteristics is proposed, defined as follows:
F_PSSD^V(p) = (1/K) Σ_{i=1..K} |p(o) - p_i(o)| / ||p(x, y, z) - p_i(x, y, z)||   (1)

where p is the current point, p_i are the K-neighborhood points of p, p(x, y, z) are the coordinates of the current point, p_i(x, y, z) are the coordinates of a neighborhood point, ||p(x, y, z) - p_i(x, y, z)|| is the Euclidean distance between point p and its neighbor, p(o) is the coordinate in the direction of maximum variance among x, y and z, and |p(o) - p_i(o)| is the coordinate difference between point p and its neighborhood in that direction; p(o) and p_i(o) are selected by

p(o) = p(argmax{δ²[p(x)], δ²[p(y)], δ²[p(z)]})   (2)

where δ²[p(x)], δ²[p(y)] and δ²[p(z)] are the variances of the point cloud coordinates in the x, y and z directions; if δ²[p(x)] > δ²[p(y)] > δ²[p(z)], the value returned by p(o) is the x-direction coordinate.
To analyze the correctness of equation (1), consider the two limit cases shown in FIG. 3. When the current point p and its K neighbors are distributed as in FIG. 3(a), the position differences between p and its neighbors concentrate in the x direction, so p(o) = p(x) and ||p(x, y, z) - p_i(x, y, z)|| = |p(x) - p_i(x)|; F_PSSD^V(p) then takes its maximum value 1. When the distribution of p and its K neighbors is as in FIG. 3(b), the position differences are spread equally over the x, y and z directions, so |p(o) - p_i(o)| = ||p(x, y, z) - p_i(x, y, z)||/√3 and F_PSSD^V(p) takes its minimum value 1/√3. These two limit cases show that the proposed PSSD function reflects the directionality of the local position differences of the 3D point cloud.
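A minimal numpy sketch of the PSSD computation of equation (1), under the reading reconstructed above. The patent text is ambiguous about whether the maximum-variance direction is taken over the whole cloud or per neighborhood; this sketch (function name pssd, an assumed name) uses each point's local K-neighborhood.

```python
import numpy as np

def pssd(points: np.ndarray, neighbors: np.ndarray) -> np.ndarray:
    """One-way perceived sharpness per point, eq. (1).

    points    : (N, 3) coordinates; neighbors : (N, K) indices from step 1.
    Values fall roughly in [1/sqrt(3), 1]; 1 means the position differences
    to the neighbors concentrate in a single coordinate direction.
    """
    out = np.empty(len(points))
    for i, p in enumerate(points):
        nb = points[neighbors[i]]              # (K, 3) neighbor coordinates
        o = np.argmax(np.var(nb, axis=0))      # direction of maximum variance
        num = np.abs(p[o] - nb[:, o])          # |p(o) - p_i(o)|
        den = np.linalg.norm(p - nb, axis=1)   # ||p(x,y,z) - p_i(x,y,z)||
        out[i] = np.mean(num / np.maximum(den, 1e-12))
    return out
```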
b. Local visibility
A change in a pixel value of a 2D image can be regarded as a varying signal superimposed on a uniform background; the amplitude of the signal must reach a certain intensity to be seen by the visual system. Based on the non-linear relationship between the contrast sensitivity threshold and the background brightness, Chai Yi et al. proposed the concept of image visibility (VI), which accounts for the characteristics of the visual system and measures the signal variation within an image block by the contrast of its pixels, where I(x, y) is the gray value of the pixel at position (x, y), M × N is the size of the image I(x, y), m_k is the mean intensity of I(x, y), and γ is a visual constant with a value range of 0.6 to 0.7. The larger the value of VI, the higher the visibility of the image.
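A sketch of the VI computation as verbally described. Since the formula itself is not reproduced in the text, the exact contrast form (the mean over the block of (|I(x, y) - m_k| / m_k)^γ) is an assumption, as is the default γ = 0.65 inside the stated 0.6-0.7 range.

```python
import numpy as np

def image_visibility(block: np.ndarray, gamma: float = 0.65) -> float:
    """Visibility of an M x N image block: mean pixel contrast against the
    block's average intensity m_k, modulated by the visual constant gamma."""
    m_k = float(block.mean())
    contrast = np.abs(block - m_k) / max(m_k, 1e-12)
    return float(np.mean(contrast ** gamma))
```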
The 2D image visibility function cannot be used directly for 3D point cloud feature evaluation. To establish a 3D point cloud visibility evaluation function based on the human visual system, the normal vector angle of the point cloud is introduced. The normal vector angle is the mean of the included angles between the normal vector of the current point p and the normal vectors of its K-neighborhood points; the normal vectors are computed by principal component analysis, and the normal vector angle n_θ(p) is obtained by

n_θ(p) = (1/K) Σ_{i=1..K} arccos( n_p · n_{p_i} / (||n_p|| · ||n_{p_i}||) )   (4)

where n_p is the normal vector of the current point p and n_{p_i} are the normal vectors of the K neighborhood of p. The size of the included angle between the normal vector of p and those of its K neighbors reflects how steeply the local region of the point cloud changes: as shown in FIG. 4, in a steeply changing region the angles θ_a1 and θ_a2 between the normal vector of point p_a and those of its neighborhood are significantly larger than the angles θ_b1 and θ_b2 between the normal vector of point p_b and those of its neighborhood in a flat region.
With the normal vector angle as input, the invention establishes the local visibility (LV) function F_LV^V(p) of the 3D point cloud over the current point p and its K-neighborhood points p_j, where n_θ(p_j) is the normal vector angle of point p_j and n̄_θ(p) is the mean of the normal vector angles of the current point p and its K-neighborhood points.
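The sketch below computes the normal vector angle of equation (4) and one plausible LV function. The n_θ part follows the text directly (unit normals from PCA are assumed to be given); the LV part transposes the VI-style contrast to normal vector angles, which is an assumption because the LV expression is not reproduced in the text.

```python
import numpy as np

def normal_vector_angle(normals: np.ndarray, neighbors: np.ndarray) -> np.ndarray:
    """n_theta(p): mean angle between each point's unit normal and the
    unit normals of its K neighbors, eq. (4)."""
    dots = np.sum(normals[:, None, :] * normals[neighbors], axis=2)
    return np.arccos(np.clip(dots, -1.0, 1.0)).mean(axis=1)

def local_visibility(n_theta: np.ndarray, neighbors: np.ndarray,
                     gamma: float = 0.65) -> np.ndarray:
    """LV(p): assumed VI-style contrast of the normal vector angles of
    p and its K neighbors against their mean."""
    idx = np.hstack([np.arange(len(n_theta))[:, None], neighbors])
    vals = n_theta[idx]                          # angles of p and its neighbors
    mean = np.maximum(vals.mean(axis=1, keepdims=True), 1e-12)
    return np.mean((np.abs(vals - mean) / mean) ** gamma, axis=1)
```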
Step 2) adopts a 3D point cloud mixed feature evaluation model that combines geometric features with human visual perception characteristics. The model evaluates visual saliency with the proposed PSSD and LV functions; for geometric saliency it adopts the two evaluation indexes of curvature and projection distance to express the details of the point cloud model, and it adopts the average distance to keep the global points uniformly distributed. The mixed feature evaluation model is expressed as

F(p) = w_1·F_PSSD^V(p) + w_2·F_LV^V(p) + w_3·F_C^G(p) + w_4·F_AD^G(p) + w_5·F_PD^G(p)   (6)

where F_PSSD^V(p) and F_LV^V(p) are the PSSD and LV functions based on human visual perception characteristics; F_C^G(p), F_AD^G(p) and F_PD^G(p) are, from the geometric feature evaluation, the curvature, the average distance from the current point p to its K-neighborhood points, and the projection distance from p to the fitting plane of its K-neighborhood points; and w_1~w_5 are the weights of the different feature evaluation functions. To reduce the scale differences between the feature values and preserve the sensitivity of each feature function, every feature value is normalized and the normalized feature peaks are filtered. For convenience of presentation, F_PSSD^V(p), F_LV^V(p), F_C^G(p), F_AD^G(p) and F_PD^G(p) are abbreviated as F_1(p)~F_5(p), so that equation (6) simplifies to F(p) = Σ_{n=1..5} w_n·F_n(p). The weights w_1~w_5 of the different feature evaluation functions are dynamically optimized by formula (7): the weights w_1~w_4 decrease monotonically with x(n), and x(n) increases monotonically with the feature value F_n(p), so that the larger F_n(p) is, the smaller the corresponding weight w_n; x(n) ranges over [0, +∞) and w_1~w_4 over (0, 0.25]. As F_1(p)~F_5(p) vary from point to point, formula (7) keeps updating the weight values, realizing dynamic optimization of the weights in the mixed feature evaluation model.
With the mixed feature evaluation model, the mixed feature value of every point in the cloud is computed, and the points are divided by mixed feature value into grade-I, grade-II and grade-III feature points. The proportion of each grade can be set according to the reduction rate, for example grade-I feature points accounting for 10% of the total cloud, grade-II for 60% and grade-III for 30%. Grade-I feature points have large mixed feature values and are crucial to expressing the contour of the point cloud; grade-III feature points have small mixed feature values and are largely redundant for expressing the geometric distribution. Each grade of the cloud is then simplified with a different down-sampling rule: all grade-I feature points are retained, grade-II feature points are simplified by hierarchical random sampling, and grade-III feature points are simplified by the cuboid-grid method, as sketched below.
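A sketch of the grading and stepwise down-sampling of steps 3) and 4), using the example proportions above (10% / 60% / 30%). The grade-II retention ratio, the grid cell size, and the use of plain random sampling in place of the unspecified hierarchical variant are assumptions.

```python
import numpy as np

def stepwise_simplify(points: np.ndarray, scores: np.ndarray,
                      keep_ii: float = 0.5, cell: float = 0.05) -> np.ndarray:
    """Grade points by mixed feature value and down-sample each grade."""
    order = np.argsort(-scores)                    # descending feature value
    n = len(points)
    g1 = order[: int(0.10 * n)]                    # grade I: keep all
    g2 = order[int(0.10 * n): int(0.70 * n)]       # grade II: random sampling
    g3 = order[int(0.70 * n):]                     # grade III: cuboid grid

    rng = np.random.default_rng(0)
    g2_kept = rng.choice(g2, size=int(keep_ii * len(g2)), replace=False)

    # Cuboid-grid reduction: keep one representative per occupied cell.
    cells = np.floor(points[g3] / cell).astype(np.int64)
    _, first = np.unique(cells, axis=0, return_index=True)
    g3_kept = g3[first]

    keep = np.concatenate([g1, g2_kept, g3_kept])  # step 4: fuse all grades
    return points[keep]
```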
The algorithm of the invention has been explained in detail above. To verify the effectiveness of the proposed 3D point cloud simplification algorithm, experiments were carried out from three aspects: validity verification of the mixed feature evaluation model, and qualitative and quantitative evaluation of the algorithm's performance. The specific experimental results and data analysis are detailed below.
Validity verification of mixed feature evaluation model
To verify the effectiveness of the proposed mixed feature evaluation model, ablation experiments were carried out on the different feature evaluation functions. The "bunny" point cloud is chosen as the test object; the distribution of the original cloud and its encapsulation effect are shown in FIGs. 5(a1) and 5(a2). First, the cloud is simplified with the geometric feature evaluation functions alone: w_1 and w_2 of the mixed feature evaluation model in equation (6) are set to zero, the feature evaluation uses the geometric indexes (curvature, average distance and projection distance), and the cloud is simplified by grade according to the feature values; the simplified 3D point cloud distribution, encapsulation effect and deviation distribution are shown in FIGs. 5(b1)-5(b3). Second, only w_2 in equation (6) is set to zero, so the cloud is evaluated with the geometric features plus the PSSD function; with the reduction rate unchanged, the result is shown in FIGs. 5(c1)-5(c3). Third, the full mixed feature evaluation model of equation (6) is used to simplify "bunny"; the result is shown in FIGs. 5(d1)-5(d3). Comparing the locally enlarged regions of the simplified clouds: the ear of "bunny" in FIG. 5(b2) has obvious holes, with correspondingly large deviations in FIG. 5(b3); in FIG. 5(c2) the holes in that region become smaller than in FIG. 5(b2), but FIG. 5(c3) still shows a large deviation; in FIG. 5(d2) the detail of the enlarged ear region is preserved most completely, and the deviation in FIG. 5(d3) drops markedly. The proposed mixed feature evaluation model therefore retains the detail information of the original cloud while keeping the data complete as a whole.
Visual effect comparison of different simplification algorithms
To qualitatively evaluate the performance of the proposed 3D point cloud simplification algorithm, the detail feature point simplification algorithm (DFPSA), the bounding box method, the normal vector method, the curvature method and the proposed algorithm are used to simplify the point cloud. The "elephant" point cloud serves as the test object; the distribution and encapsulation effect of the original cloud are shown in FIGs. 6(a1) and 6(a2). With the reduction rate controlled at 50%, the simplified point cloud distributions obtained by the different algorithms are shown in FIGs. 6(b1)-6(f1), the encapsulation effects in FIGs. 6(b2)-6(f2), and the deviation distributions in FIGs. 6(b3)-6(f3). Comparing FIGs. 6(a1)-6(f1) shows that all five algorithms achieve effective simplification; the bounding box method focuses on describing the global uniformity of the cloud, while the remaining algorithms focus on preserving its detail features. Comparing FIGs. 6(b2)-6(f2) and 6(b3)-6(f3), the overall encapsulation shows holes of different degrees at the tail and rear sole (dashed boxes) of the "elephant" in (b2) and at the rear soles (dashed boxes) in (d2) and (e2); (c2) shows no holes but, compared with (a2), loses detail information, which indicates that the proposed simplification algorithm obtains a better encapsulation effect. From the locally enlarged regions (solid boxes) of the encapsulation views, distinct holes appear at the nose of the "elephant" in (b2)-(e2), with large deviations in the corresponding regions of (b3)-(e3), while the nose information in (f2) is preserved most completely and the corresponding deviation in (f3) is the smallest, further showing that the proposed simplification algorithm performs better.
To ensure the repeatability of the experimental effect, the experiment is repeated with the "gargoyle" point cloud as the test object; the distribution of the original cloud and its encapsulation effect are shown in FIGs. 7(a1) and 7(a2). With the reduction rate controlled at 50%, the results obtained by the different algorithms are shown in FIGs. 7(b1)-7(f1), 7(b2)-7(f2) and 7(b3)-7(f3). Judged by the encapsulation of the simplified clouds, many holes appear in FIGs. 7(b2), 7(d2) and 7(e2); FIG. 7(c2) shows relatively few, but the local details in the dashed-box region are lost, while FIG. 7(f2) retains the local details with no encapsulation holes overall. Judged by the overall deviation distribution, FIG. 7(f3) shows the smallest deviation, indicating that the proposed simplification algorithm achieves better results on different point clouds.
Quantitative evaluation of the performance of different simplification algorithms
To objectively evaluate the performance of each simplification algorithm, the geometric errors of the simplified point clouds are analyzed; the adopted geometric error evaluation indexes are the maximum distance error, the average distance error and the relative volume error.
The maximum distance error is expressed as

E_max(S, S′) = max_{p∈S} d(p, S′)   (8)

where S is the original point cloud, S′ is the simplified point cloud, and d(p, S′) is the Euclidean distance from a point p in S to the nearest triangular patch on S′ after S′ has been meshed.
The average distance error is expressed as

E_avg(S, S′) = (1/N) Σ_{p∈S} d(p, S′)   (9)

where N is the number of points in the original cloud.
The relative volume error is expressed as

E_vol = |V_S - V_S′| / V_S   (10)

where V_S is the volume of the original point cloud and V_S′ is the volume of the simplified point cloud.
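A sketch of the three error metrics of equations (8)-(10). Computing d(p, S′) exactly requires meshing S′; the sketch approximates it by the point-to-nearest-point distance, an assumption that is close for dense clouds.

```python
import numpy as np
from scipy.spatial import cKDTree

def distance_errors(S: np.ndarray, S_prime: np.ndarray):
    """Maximum and average distance errors, eqs. (8)-(9)."""
    d, _ = cKDTree(S_prime).query(S)     # approx. d(p, S') for every p in S
    return d.max(), d.mean()

def relative_volume_error(V_S: float, V_S_prime: float) -> float:
    """Relative volume error, eq. (10): |V_S - V_S'| / V_S."""
    return abs(V_S - V_S_prime) / V_S
```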
With the reduction rates kept approximately equal, the geometric errors and reduction times of the different point cloud models under each simplification algorithm (Matlab; AMD Ryzen 7 5800H, 3.2 GHz, 8 cores, 16 GB memory) are listed in Table 1. In the table, italic bold marks the minimum of an evaluation index across the algorithms and plain bold the second minimum; the distribution of these minima reflects the performance of the different algorithms. The maximum distance error, average distance error and relative volume error of the proposed algorithm are clearly lower than those of the other algorithms: the proposed algorithm uses the average distance as a constraint function when maintaining the overall characteristics of the cloud, which avoids holes in the point cloud encapsulation, and combines geometric features with human visual perception characteristics as the evaluation function when preserving local details, which increases the sensitivity of the local saliency evaluation, finally striking a better compromise between overall uniformity and local saliency. As for reduction time, the bounding box algorithm removes redundant points using only a simple Euclidean-distance test, so its time complexity is lower than that of the proposed algorithm. Although the feature evaluation function of the proposed algorithm is more complex than those of the curvature and normal vector methods, the stepwise reduction of the data in the later stages lowers the complexity, so the overall running time is shorter. The DFPSA algorithm (Ji Chunyang, Li Ying, Fan Jiahao, Lan Shumei. A Novel Simplification Method for 3D Geometric Point Cloud Based on the Importance of Point. IEEE Access, 2019, 7: 129029-129042.) takes the longest, because it reduces weak feature points with an octree, and building the octree severely increases the time complexity.
TABLE 1 Performance comparison of various compaction algorithms for different point cloud models
To further evaluate the advantages of the proposed algorithm at different reduction rates, curves of geometric error and reduction time versus reduction rate are plotted for the "gargoyle" point cloud, as shown in FIG. 8. The overall distribution of the curves in FIGs. 8(a)-8(c) shows that the geometric error of every algorithm grows gradually as the reduction rate increases, but at any given reduction rate the errors of the proposed algorithm are the smallest. Moreover, the solid line representing the error of the proposed algorithm grows the most gently, so its performance is the most stable as the reduction rate increases. FIG. 8(d) shows that the reduction time of each algorithm remains essentially unchanged across reduction rates, with the proposed algorithm ranking second in time complexity. The proposed algorithm therefore effectively improves the precision of 3D point cloud simplification without noticeably increasing the time complexity.
Although the embodiments of the present invention have been described in detail, the present invention is not limited to the above embodiments, and various changes can be made without departing from the spirit of the present invention within the knowledge of those skilled in the art.
Claims (5)
1. A 3D point cloud simplification algorithm combined with human visual perception characteristics, characterized by comprising the following steps:
step 1) perform a K-neighborhood search on the point cloud;
step 2) compute the one-way perceived sharpness, local visibility, curvature, average distance and projection distance of each point, compute the weight of each feature with the dynamic weight optimization formula, and obtain the mixed feature value of each point as the weighted average of the different feature values and their corresponding weights;
step 3) grade the points according to the mixed feature values and set stepwise simplification rules to down-sample each grade of the point cloud;
step 4) fuse the down-sampled data of all grades to obtain the simplified point cloud.
2. The 3D point cloud simplification algorithm combined with human visual perception characteristics according to claim 1, characterized in that the one-way perceived sharpness function in step 2) is defined as

F_PSSD^V(p) = (1/K) Σ_{i=1..K} |p(o) - p_i(o)| / ||p(x, y, z) - p_i(x, y, z)||

where p is the current point, p_i are the K-neighborhood points of p, p(x, y, z) are the coordinates of the current point, p_i(x, y, z) are the coordinates of a neighborhood point, ||p(x, y, z) - p_i(x, y, z)|| is the Euclidean distance between point p and its neighbor, p(o) is the coordinate in the direction of maximum variance among x, y and z, and |p(o) - p_i(o)| is the coordinate difference between point p and its neighborhood in that direction; δ²[p(x)], δ²[p(y)] and δ²[p(z)] denote the variances of the point cloud coordinates in the x, y and z directions, and if δ²[p(x)] > δ²[p(y)] > δ²[p(z)], the value returned by p(o) is the x-direction coordinate.
3. The 3D point cloud simplification algorithm combined with human visual perception characteristics according to claim 1, characterized in that the local visibility function in step 2) takes the normal vector angle as input, where p_j ranges over the current point and its K-neighborhood points, n_θ(p_j) is the normal vector angle of point p_j, and n̄_θ(p) is the mean of the normal vector angles of the current point p and its K-neighborhood points; the normal vector angle n_θ(p) is obtained by

n_θ(p) = (1/K) Σ_{i=1..K} arccos( n_p · n_{p_i} / (||n_p|| · ||n_{p_i}||) )

where n_p is the normal vector of the current point p and n_{p_i} are the normal vectors of the K neighborhood of p.
4. The 3D point cloud simplification algorithm combined with human visual perception characteristics according to claim 1, characterized in that the mixed feature evaluation model of each point in step 2) is expressed as

F(p) = w_1·F_PSSD^V(p) + w_2·F_LV^V(p) + w_3·F_C^G(p) + w_4·F_AD^G(p) + w_5·F_PD^G(p)   (6)

where F_PSSD^V(p) and F_LV^V(p) are the one-way perceived sharpness (PSSD) function and the local visibility (LV) function based on human visual perception characteristics; F_C^G(p), F_AD^G(p) and F_PD^G(p) are, from the geometric feature evaluation, the curvature, the average distance from the current point p to its K-neighborhood points, and the projection distance from p to the fitting plane of its K-neighborhood points; and w_1~w_5 are the weights of the different feature evaluation functions;

to reduce the scale differences between the feature values and preserve the sensitivity of each feature function, every feature value is normalized and the normalized feature peaks are filtered; for convenience of expression, F_PSSD^V(p), F_LV^V(p), F_C^G(p), F_AD^G(p) and F_PD^G(p) are abbreviated as F_1(p)~F_5(p), so equation (6) simplifies to F(p) = Σ_{n=1..5} w_n·F_n(p); the weights w_1~w_5 of the different feature evaluation functions are dynamically optimized, where the weights w_1~w_4 decrease monotonically with x(n) and x(n) increases monotonically with the feature value F_n(p), so that the larger F_n(p) is, the smaller the corresponding weight w_n; x(n) ranges over [0, +∞) and w_1~w_4 over (0, 0.25].
5. The 3D point cloud simplification algorithm combined with human visual perception characteristics according to claim 4, characterized in that the stepwise simplification rule in step 3) is as follows: first, the mixed feature value of every point in the point cloud is computed with the mixed feature evaluation model, and the points are divided by mixed feature value into grade-I, grade-II and grade-III feature points; then each grade of the point cloud is simplified with a different down-sampling rule: all grade-I feature points are retained, grade-II feature points are simplified by hierarchical random sampling, and grade-III feature points are simplified by the cuboid-grid method.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210346993.2A CN114758069A (en) | 2022-04-01 | 2022-04-01 | 3D point cloud simplification algorithm combined with human visual perception characteristics |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210346993.2A CN114758069A (en) | 2022-04-01 | 2022-04-01 | 3D point cloud simplification algorithm combined with human visual perception characteristics |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114758069A true CN114758069A (en) | 2022-07-15 |
Family
ID=82328763
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210346993.2A Pending CN114758069A (en) | 2022-04-01 | 2022-04-01 | 3D point cloud simplification algorithm combined with human visual perception characteristics |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114758069A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024175012A1 (en) * | 2023-02-21 | 2024-08-29 | Douyin Vision Co., Ltd. | Method, apparatus, and medium for video processing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |