CN116883590A - Three-dimensional face point cloud optimization method, medium and system - Google Patents


Info

Publication number: CN116883590A
Authority: CN (China)
Prior art keywords: point cloud, point, calculating, cloud data, template
Legal status: Pending
Application number: CN202310755275.5A
Other languages: Chinese (zh)
Inventors: 周安斌, 晏武志, 焦兴鸽
Current assignee: Shandong Jindong Digital Creative Co., Ltd.
Original assignee: Shandong Jindong Digital Creative Co., Ltd.
Application filed by Shandong Jindong Digital Creative Co., Ltd.
Priority: CN202310755275.5A
Publication: CN116883590A (pending)

Classifications

    • G — Physics; G06 — Computing; calculating or counting; G06T — Image data processing or generation, in general
    • G06T 17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 3/00 — Geometric image transformation in the plane of the image; G06T 3/60 — Rotation of a whole image or part thereof
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement; G06T 2207/10028 — Range image; depth image; 3D point clouds (image acquisition modality)
    • G06T 2207/20081 — Training; learning (special algorithmic details)
    • Y02T 10/40 — Engine management systems (Y02T — climate change mitigation technologies related to transportation; Y02T 10/10 — internal combustion engine [ICE] based vehicles)

Abstract

The invention provides a three-dimensional face point cloud optimization method, medium and system, belonging to the technical field of point cloud optimization. The method comprises the following steps: acquiring initial point cloud data of a face and preprocessing it to obtain a preprocessed point cloud; computing a template point cloud from the preprocessed point cloud using an expression-based facial model; constructing a contour point cloud data set from the preprocessed point cloud and the corresponding template point cloud; selecting an initial angle with a Bayesian optimization algorithm and rotating the contour point cloud data set; calculating the centers of gravity of the rotated contour point cloud data set and the template point cloud, and translating so that the centers of gravity coincide; performing registration calculation between the preprocessed point cloud and the corresponding template point cloud, extracting rotation and translation parameters, and estimating the face pose; optimizing and reconstructing the preprocessed point cloud according to the obtained rotation parameters, translation parameters and face pose to obtain a basic optimized point cloud; and post-processing the basic optimized point cloud to output optimized three-dimensional face point cloud data.

Description

Three-dimensional face point cloud optimization method, medium and system
Technical Field
The invention belongs to the technical field of point cloud optimization, and particularly relates to a three-dimensional face point cloud optimization method, medium and system.
Background
With the rapid development of computer vision and graphics technology, three-dimensional face reconstruction and recognition have become active research areas. Three-dimensional face models are widely applied in face recognition, virtual reality, film special effects and other fields. Face point cloud data is the foundation of a three-dimensional face model, and its quality directly affects the final application. However, owing to limitations of the acquisition equipment, environmental factors and the like, the acquired initial point cloud data generally suffers from noise, incompleteness and low resolution, and must be optimized to obtain high-quality three-dimensional face point cloud data.
At present, optimization methods for three-dimensional face point cloud data mainly comprise preprocessing, template fitting and registration. Preprocessing applies filtering, noise reduction, smoothing and similar operations to the initial point cloud data to eliminate noise and irregularities. Template fitting computes, from the preprocessed point cloud data and an expression-based facial model, a template point cloud resembling the input data. Registration aligns the preprocessed point cloud data with the template point cloud in order to estimate the face pose and to optimize and reconstruct the point cloud data.
However, existing three-dimensional face point cloud optimization methods still have several problems. First, in the preprocessing stage, conventional filtering and noise reduction may leave the point cloud surface unsmooth, degrading subsequent processing. Second, in the template fitting stage, existing expression-based facial models may not fully adapt to the variety of expressions and poses, so the error between the template point cloud and the actual point cloud can be large. Finally, in the registration stage, the choice of rotation and translation parameters strongly affects the final optimization result, so conventional methods often require many iterative calculations and are time-consuming.
Disclosure of Invention
In view of the above, the invention provides a three-dimensional face point cloud optimization method, medium and system, which address the technical problem that existing expression-based facial models cannot fully adapt to various expressions and poses, leaving a large error between the template point cloud and the actual point cloud.
The invention is realized in the following way:
the first aspect of the invention provides a three-dimensional face point cloud optimization method, which comprises the following steps:
s10, acquiring initial point cloud data of a human face;
s20, preprocessing the initial point cloud data, including filtering and noise reduction, to obtain preprocessed point cloud;
s30, calculating the preprocessing point cloud by using a facial model based on the expression to obtain a template point cloud;
s40, constructing a contour point cloud data set according to the preprocessing point cloud and the corresponding template point cloud;
s50, selecting an initial angle based on a Bayesian optimization algorithm, and rotating the contour point cloud data set;
s60, calculating the gravity centers of the rotated outline point cloud data set and the template point cloud, and translating to enable the gravity centers to overlap;
s70, carrying out registration calculation on the preprocessed point cloud and the corresponding template point cloud by adopting an ICP algorithm, after iteration is carried out for preset times, calculating fitness, extracting rotation parameters and translation parameters according to the fitness, and estimating the pose of the face;
s80, optimizing and reconstructing the preprocessed point cloud according to the obtained rotation parameters, translation parameters and the facial pose to obtain a basic optimized point cloud;
and S90, performing post-processing on the basic optimization point cloud, and outputting optimized three-dimensional face point cloud data.
On the basis of the technical scheme, the three-dimensional face point cloud optimization method can be further improved as follows:
the step of calculating the preprocessing point cloud by using the facial model based on the expression to obtain the template point cloud specifically comprises the following steps:
acquiring a plurality of known three-dimensional face shape data as a 3DMM training set;
aligning the shape data of each three-dimensional face shape in the 3DMM training set;
calculating the average shape over the three-dimensional face shape data in the 3DMM training set;
calculating the shape difference of each three-dimensional face shape in the 3DMM training set;
connecting the obtained shape differences to establish a difference matrix;
performing principal component analysis on the difference matrix to obtain a matrix of principal components;
obtaining partial columns of the principal component matrix to obtain partial principal component matrix;
projecting the preprocessed point cloud onto part of the principal component matrix to obtain a shape parameter vector;
and generating a template point cloud according to the shape parameter vector.
The step of constructing a contour point cloud data set according to the preprocessing point cloud and the corresponding template point cloud specifically includes:
performing nearest neighbor search on each point in the preprocessing point cloud and the template point cloud to obtain a neighbor point set;
calculating the normal vector of each point according to the adjacent point set;
calculating an included angle between the normal vector of each point and the normal vector of the adjacent point;
and extracting the contour point set according to the included angle and the included angle threshold value.
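The contour-extraction steps above can be sketched as follows. This is a minimal NumPy illustration, not the patented implementation: normals are estimated by local PCA over the k nearest neighbours (brute-force search, adequate for small clouds), and a point is kept as a contour point when the angle between its normal and some neighbour's normal exceeds the threshold. The function names and the default k and angle threshold are illustrative assumptions.

```python
import numpy as np

def estimate_normals(points, k=8):
    """Estimate per-point normals from the k nearest neighbours via local PCA."""
    n = len(points)
    # Brute-force pairwise distances (use a k-d tree at scale).
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, 1:k + 1]          # k nearest, self excluded
    normals = np.empty((n, 3))
    for i in range(n):
        nbrs = points[idx[i]] - points[idx[i]].mean(axis=0)
        # Normal = direction of smallest variance of the local neighbourhood.
        _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
        normals[i] = vt[-1]
    return normals, idx

def extract_contour(points, angle_thresh_deg=30.0, k=8):
    """Keep points whose normal deviates strongly from a neighbour's normal."""
    normals, idx = estimate_normals(points, k)
    keep = []
    for i in range(len(points)):
        cos = np.abs(normals[idx[i]] @ normals[i])   # |cos| ignores sign flips
        max_angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)).max())
        if max_angle > angle_thresh_deg:
            keep.append(i)
    return np.array(keep, dtype=int)
```

On a synthetic "roof" surface z = |x|, the points along the ridge x = 0 are flagged as contour points while points on the flat faces are not.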
The step of selecting an initial angle based on a Bayesian optimization algorithm and rotating the contour point cloud data set specifically comprises the following steps:
constructing an objective function for measuring the coincidence degree of the rotated contour point cloud data set and the template point cloud;
initializing a Gaussian process model according to an objective function, selecting a constant mean function and a square index covariance function, and selecting a greedy sampling strategy as a sampling criterion;
iteratively updating the Gaussian process model until convergence conditions are met, and obtaining an optimal rotation angle;
and rotating the contour point cloud data set according to the optimal rotation angle to obtain a rotated contour point cloud data set.
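A toy version of this search, assuming rotation about a single (z) axis: the objective measures coincidence as the negative mean nearest-neighbour distance, and a small hand-rolled Gaussian process with a squared-exponential covariance and constant mean function proposes candidate angles. An upper-confidence-bound acquisition stands in for the greedy sampling criterion; all parameter values and function names are illustrative.

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix about the z axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def overlap_score(contour, template, theta):
    """Coincidence objective: negative mean nearest-neighbour distance
    between the rotated contour set and the template point cloud."""
    rotated = contour @ rot_z(theta).T
    d = np.linalg.norm(rotated[:, None, :] - template[None, :, :], axis=2)
    return -d.min(axis=1).mean()

def bayes_opt_angle(contour, template, n_iter=12, length_scale=0.5):
    """Toy Bayesian optimisation of the angle over [-pi, pi]: GP surrogate with
    squared-exponential covariance, constant mean, UCB acquisition on a grid."""
    kern = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length_scale ** 2)
    X = list(np.linspace(-np.pi, np.pi, 13))       # deterministic initial design
    y = [overlap_score(contour, template, t) for t in X]
    grid = np.linspace(-np.pi, np.pi, 361)
    for _ in range(n_iter):
        Xa, ya = np.array(X), np.array(y)
        K = kern(Xa, Xa) + 1e-8 * np.eye(len(Xa))  # jitter for stability
        Ks = kern(grid, Xa)
        mu = ya.mean() + Ks @ np.linalg.solve(K, ya - ya.mean())
        var = np.clip(1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1), 0.0, None)
        theta_next = grid[np.argmax(mu + 2.0 * np.sqrt(var))]  # UCB acquisition
        X.append(theta_next)
        y.append(overlap_score(contour, template, theta_next))
    return X[int(np.argmax(y))]                    # best angle actually evaluated
```

Compared with exhaustively evaluating the objective on a dense grid, the surrogate concentrates the expensive objective evaluations near promising angles.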
The step of calculating the gravity centers of the rotated outline point cloud data set and the template point cloud and translating to enable the gravity centers to overlap specifically comprises the following steps:
calculating the gravity centers of the rotated contour point cloud data set and the template point cloud;
calculating translation vectors between the centers of gravity;
and translating the rotated contour point cloud data set along the translation vector to obtain a contour point cloud data set with overlapped center of gravity.
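The three steps above amount to a single translation by the difference of the two centres of gravity; a minimal sketch (function name illustrative):

```python
import numpy as np

def align_centroids(moving, fixed):
    """Translate `moving` so its centre of gravity coincides with `fixed`'s.
    Returns the shifted cloud and the translation vector used."""
    t = fixed.mean(axis=0) - moving.mean(axis=0)   # vector between the centroids
    return moving + t, t
```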
The step of carrying out registration calculation on the preprocessed point cloud and the corresponding template point cloud by adopting an ICP algorithm, calculating fitness after iteration for preset times, extracting rotation parameters and translation parameters according to the fitness, and estimating the pose of the face specifically comprises the following steps:
step 1, initializing a rotation matrix and a translation vector;
step 2, for each point in the preprocessing point cloud, searching a point closest to the point in the template point cloud M to form a point pair;
step 3, calculating an optimal rotation matrix and a translation vector according to the point pairs, so that the square sum of the distances between the point pairs is minimum;
step 4, updating the preprocessing point cloud;
judging a convergence condition, and stopping iteration if the preset iteration times are reached or the change of the distance between the point pairs is smaller than a preset threshold value; otherwise, returning to the step 2, and continuing to calculate the point pairs.
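Steps 1 to 4 can be sketched as follows, assuming the per-iteration optimal rigid transform is found with the usual SVD (Kabsch) solution and nearest neighbours are found by brute force; in production a k-d tree would replace the full distance matrix, and a fitness score would be derived from the final mean point-pair distance.

```python
import numpy as np

def best_fit_transform(src, dst):
    """Least-squares rigid transform (Kabsch/SVD) mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(source, template, max_iter=50, tol=1e-7):
    """Iterative closest point: returns accumulated R, t and the aligned cloud."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(max_iter):
        # Step 2: closest template point for every source point (brute force).
        d = np.linalg.norm(src[:, None, :] - template[None, :, :], axis=2)
        pairs = template[d.argmin(axis=1)]
        # Step 3: optimal rigid transform for the current correspondences.
        R, t = best_fit_transform(src, pairs)
        # Step 4: update the point cloud and the accumulated pose.
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = d.min(axis=1).mean()
        if abs(prev_err - err) < tol:              # convergence test
            break
        prev_err = err
    return R_total, t_total, src
```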
The step of optimizing and reconstructing the preprocessed point cloud according to the obtained rotation parameters, translation parameters and face pose to obtain the basic optimized point cloud specifically includes: transforming each point in the preprocessed point cloud by the rotation matrix and the translation vector to obtain the optimized point cloud.
The step of post-processing the basic optimization point cloud specifically comprises the following steps: smoothing and detail restoration processing.
A second aspect of the present invention provides a computer readable storage medium having stored therein program instructions that when executed are configured to perform a three-dimensional face point cloud optimization method as described above.
A third aspect of the present invention provides a three-dimensional face point cloud optimization system, which includes the above-mentioned computer-readable storage medium.
Compared with the prior art, the three-dimensional face point cloud optimization method, medium and system provided by the invention have the beneficial effects that:
1. improving quality of point cloud data
The method preprocesses the original point cloud data, including filtering and noise reduction, which effectively eliminates noise and irregularities in the point cloud data and improves its quality. In addition, an expression-based facial model is used to compute a template point cloud matched to the original point cloud data, further improving the accuracy and reliability of the point cloud data.
2. Accurate extraction of contour point cloud data sets
According to the method, a contour point cloud data set is constructed according to the preprocessed point cloud and the corresponding template point cloud. By calculating the normal vector of each point in the point cloud and judging the contour point according to the included angle between the normal vectors, the contour point cloud data set can be accurately extracted, and effective input information is provided for subsequent rotation, translation and registration calculation.
3. Efficient searching for optimal rotation angle
According to the invention, the contour point cloud data set is rotated by adopting a Bayesian optimization algorithm, and the optimal rotation angle which enables the superposition degree of the rotated contour point cloud data set and the template point cloud to be maximum can be found in a short time by defining an objective function and adopting a Gaussian process model. Compared with the traditional traversal search method, the Bayesian optimization algorithm provided by the invention has higher search efficiency and lower calculation complexity.
4. Accurate estimation of face pose
According to the invention, the ICP algorithm is adopted to perform registration calculation on the preprocessed point cloud and the corresponding template point cloud, and after iteration is performed for preset times, the fitness is calculated, and the rotation parameters and the translation parameters are extracted according to the fitness, so that the pose of the face is accurately estimated. Compared with the traditional point cloud registration method, the ICP algorithm has higher registration accuracy and faster convergence speed.
5. Optimizing and reconstructing three-dimensional face point cloud data
According to the obtained rotation parameters, translation parameters and facial pose, the preprocessing point cloud is optimized and rebuilt to obtain the basic optimized point cloud. By post-processing the basic optimized point cloud, including operations such as smoothing and detail recovery, the method and the device can output the optimized three-dimensional face point cloud data. Compared with the original point cloud data, the optimized three-dimensional face point cloud data has higher quality, more accurate shape and richer detail information, and is beneficial to improving the accuracy and reliability of the application of the follow-up three-dimensional face modeling, recognition, analysis and the like.
In summary, the invention provides a three-dimensional face point cloud optimization method, which performs operations such as preprocessing, template construction, contour point cloud data set construction, rotation, translation, registration calculation, optimization reconstruction and the like on point cloud data through a plurality of advanced algorithms, and finally outputs optimized three-dimensional face point cloud data. Compared with the prior art, the method has the advantages and technical effects of higher point cloud data quality, more accurate contour point cloud data set extraction, more efficient optimal rotation angle searching, more accurate face pose estimation, more optimal reconstruction of three-dimensional face point cloud data and the like. The method is suitable for the fields of three-dimensional face modeling, recognition, analysis and the like, and has wide application prospect and great market potential.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments of the present invention will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a three-dimensional face point cloud optimization method provided by the invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, based on the embodiments of the invention, which are apparent to those of ordinary skill in the art without inventive faculty, are intended to be within the scope of the invention.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present invention, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
As shown in fig. 1, an embodiment of a three-dimensional face point cloud optimization method according to a first aspect of the present invention includes the following steps:
s10, acquiring initial point cloud data of a human face;
s20, preprocessing initial point cloud data, including filtering and noise reduction, to obtain preprocessed point cloud;
s30, calculating the preprocessing point cloud by using a facial model based on the expression to obtain a template point cloud;
s40, constructing a contour point cloud data set according to the preprocessed point cloud and the corresponding template point cloud;
s50, selecting an initial angle based on a Bayesian optimization algorithm, and rotating the contour point cloud data set;
s60, calculating the gravity centers of the rotated outline point cloud data set and the template point cloud, and translating to enable the gravity centers to overlap;
s70, carrying out registration calculation on the preprocessed point cloud and the corresponding template point cloud by adopting an ICP algorithm, after iterating for preset times, calculating fitness, extracting rotation parameters and translation parameters according to the fitness, and estimating the pose of the face;
s80, optimizing and reconstructing the preprocessed point cloud according to the obtained rotation parameters, translation parameters and the facial pose to obtain a basic optimized point cloud;
and S90, performing post-processing on the basic optimization point cloud, and outputting optimized three-dimensional face point cloud data.
In step S10, we need to acquire initial point cloud data of a face. Point cloud data is a collection of coordinates representing points in three-dimensional space that can be used to describe the shape of an object's surface. There are many methods for acquiring initial point cloud data of a face, for example: stereo cameras, laser scanners, structured light sensors, etc. Next, we will describe how to acquire initial point cloud data of a face from these devices.
The stereo camera acquires face point cloud data
A stereo camera is a camera with depth perception capability that calculates the depth of an object by capturing two or more images of the same scene at different perspectives. The depth information of the face can be extracted from the image acquired by the stereo camera and converted into point cloud data. The method comprises the following specific steps:
(1) First, we need to rectify the images acquired by the stereo camera, eliminating lens distortion and vertical parallax so that the images of the left and right cameras are row-aligned. Let the original left and right images be I_L and I_R respectively; rectification using the camera's intrinsic and extrinsic parameters yields the corrected images I'_L and I'_R.
(2) Next, we need to compute the disparity map D between the corrected images I'_L and I'_R. Each pixel value in the disparity map represents the horizontal offset of the corresponding spatial point between the left and right images. A common disparity computation method is SGBM (Semi-Global Block Matching); the raw disparity can be refined with a WLS (Weighted Least Squares) filter.
(3) According to the disparity map D and the internal and external parameters of the camera, the coordinates (X, Y, Z) of each pixel point in the three-dimensional space can be calculated, so that the point cloud data P can be obtained.
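A sketch of step (3), assuming a pinhole model with a rectified pair of equal focal length f and baseline B, so that Z = f·B/d and X, Y follow by back-projection through the principal point (cx, cy); the function name and parameters are illustrative:

```python
import numpy as np

def disparity_to_point_cloud(disparity, f, baseline, cx, cy):
    """Reproject a disparity map D to 3-D points via the pinhole model:
    Z = f * B / d,  X = (u - cx) * Z / f,  Y = (v - cy) * Z / f."""
    h, w = disparity.shape
    v, u = np.mgrid[0:h, 0:w]
    valid = disparity > 0                          # zero disparity = no depth
    z = np.where(valid, f * baseline / np.where(valid, disparity, 1.0), 0.0)
    x = (u - cx) * z / f
    y = (v - cy) * z / f
    return np.stack([x[valid], y[valid], z[valid]], axis=1)
```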
Laser scanner for acquiring face point cloud data
A laser scanner is a device that measures the distance of the surface of an object by emitting a laser beam and receiving a reflected laser beam. We can scan a face using a laser scanner and generate point cloud data from the measurements. The method comprises the following specific steps:
(1) First, we need to set parameters of the laser scanner, such as scan range, resolution, etc., to ensure that enough face point cloud data can be obtained.
(2) Next, we need to align the laser scanner to the face to start scanning. The laser scanner emits a laser beam and receives the reflected laser beam, and calculates the object surface distance from the travel time of the laser beam.
(3) From the measurement results of the laser scanner, we can calculate the coordinates (X, Y, Z) of each measurement point in the three-dimensional space, thereby obtaining the point cloud data P.
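The two computations in steps (2) and (3) — round-trip travel time to range, and scan angles plus range to Cartesian coordinates — can be sketched as follows (a spherical-coordinate scanner is assumed; real devices differ in their angle conventions):

```python
import numpy as np

C = 299_792_458.0                                  # speed of light, m/s

def range_from_tof(t_seconds):
    """Distance from the laser's round-trip travel time: r = c * t / 2."""
    return C * t_seconds / 2.0

def spherical_to_cartesian(r, azimuth, elevation):
    """Convert range/angle measurements (metres, radians) to XYZ coordinates."""
    x = r * np.cos(elevation) * np.cos(azimuth)
    y = r * np.cos(elevation) * np.sin(azimuth)
    z = r * np.sin(elevation)
    return np.stack([x, y, z], axis=-1)
```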
Structured light sensor for acquiring face point cloud data
A structured light sensor is a device that calculates the three-dimensional shape of an object by projecting a specific pattern of light (e.g., stripes, grids, etc.) onto the object surface and capturing the deformation of the pattern of light reflected back from the object surface. We can scan a face using a structured light sensor and generate point cloud data from the measurements. The method comprises the following specific steps:
(1) First, we need to set parameters of the structured light sensor, such as projection mode, resolution, etc., to ensure that enough face point cloud data can be obtained.
(2) Next, we need to align the structured light sensor to the face to start scanning. The structured light sensor projects a specific light pattern onto the face surface and captures the deformation of the light pattern reflected back by the face surface.
(3) From the deformation of the captured light pattern, we can calculate the coordinates (X, Y, Z) of each measurement point in three-dimensional space, thereby obtaining the point cloud data P.
In summary, in step S10, we can acquire initial point cloud data of a face through a stereo camera, a laser scanner, or a structured light sensor, etc. These devices can provide sufficient three-dimensional information to implement the subsequent face point cloud optimization method.
In step S20, the acquired initial point cloud data P needs to be preprocessed to eliminate noise and irregularities. This step includes filtering and noise reduction, resulting in a preprocessed point cloud P'. To achieve this, we employ the following method.
First, we downsample the point cloud data P. Downsampling is the process of reducing the size of data by reducing the number of points in the point cloud data. This may reduce the complexity of subsequent computations while preserving the basic shape of the point cloud. In this embodiment we use voxel grid filtering (Voxel Grid Filter) for downsampling. Voxel grid filtering divides the point cloud space into a grid of cubes, each cube being referred to as a voxel. The points within each voxel are then replaced with the center point of that voxel, thereby reducing the number of points. Voxel grid filtering can be expressed as:
P_v = VoxelGridFilter(P, l);
where P_v is the downsampled point cloud and l is the edge length of a voxel.
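A minimal sketch of VoxelGridFilter as described above, keeping one representative point per occupied voxel (here the voxel centre, following the text; PCL-style implementations often use the centroid of the voxel's points instead):

```python
import numpy as np

def voxel_grid_filter(points, l):
    """P_v = VoxelGridFilter(P, l): one point per occupied voxel of edge l."""
    origin = points.min(axis=0)
    idx = np.floor((points - origin) / l).astype(int)   # voxel index per point
    unique = np.unique(idx, axis=0)                     # occupied voxels
    return origin + (unique + 0.5) * l                  # voxel centres
```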
Next, we denoise the downsampled point cloud P_v. The purpose of denoising is to eliminate noise introduced by sensor errors, environmental factors and the like. In this embodiment we use radius filtering (Radius Outlier Filter) for denoising. Radius filtering computes the distance between each point in the cloud and its neighbouring points; if the number of neighbours of a point within the radius is below a preset threshold, the point is treated as noise and removed. Radius filtering can be expressed as:
P_r = RadiusOutlierFilter(P_v, r, N);
where P_r is the denoised point cloud, r is the radius threshold and N is the neighbour-count threshold. The radius threshold is generally on the order of the shortest distance between points in the cloud, and the neighbour-count threshold is typically 6; both can be adjusted to the actual data.
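RadiusOutlierFilter can be sketched in a few lines (brute-force distances for clarity; a k-d tree would be used at scale):

```python
import numpy as np

def radius_outlier_filter(points, r, n_min):
    """P_r = RadiusOutlierFilter(P_v, r, N): drop points that have fewer than
    n_min neighbours inside radius r."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    neighbours = (d < r).sum(axis=1) - 1           # exclude the point itself
    return points[neighbours >= n_min]
```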
After downsampling and denoising we obtain the preprocessed point cloud P' = P_r. However, these operations can leave the point cloud surface unsmooth, so we further smooth the preprocessed point cloud P'. In this embodiment we use the moving least squares method (Moving Least Squares, MLS) for smoothing. The basic idea of MLS is to fit a local surface to each point in the cloud and its neighbours, then replace the original point with its projection onto the fitted surface. The MLS step can be expressed as:
P' = MLSSmoothing(P_r, k, p);
where k is the number of neighbouring points and p is the polynomial fitting order.
So far, we have completed preprocessing of the point cloud data P, resulting in the preprocessed point cloud P'. The whole flow of step S20 can be expressed as:
Downsampling: P_v = VoxelGridFilter(P, l);
Denoising: P_r = RadiusOutlierFilter(P_v, r, N);
Smoothing: P' = MLSSmoothing(P_r, k, p).
In practical application, parameters such as the side length l of voxel grid filtering, the radius threshold r and the adjacent point number threshold N of radius filtering, the adjacent point number k and the polynomial fitting order p of MLS smoothing treatment can be adjusted according to the characteristics of the point cloud data and the subsequent processing requirements, so that the optimal preprocessing effect is achieved.
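A simplified stand-in for MLSSmoothing with polynomial order p = 1: each point's k nearest neighbours define a least-squares plane via local PCA, and the point is projected onto that plane. A full MLS implementation would fit a higher-order polynomial in the local frame; this sketch only illustrates the idea.

```python
import numpy as np

def mls_smooth(points, k=6):
    """Simplified MLS smoothing (order p = 1): project each point onto the
    least-squares plane of its k-nearest-neighbour neighbourhood."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]             # neighbourhood incl. the point
    out = np.empty_like(points)
    for i, p in enumerate(points):
        nbrs = points[idx[i]]
        centre = nbrs.mean(axis=0)
        _, _, vt = np.linalg.svd(nbrs - centre, full_matrices=False)
        normal = vt[-1]                            # smallest-variance direction
        out[i] = p - np.dot(p - centre, normal) * normal
    return out
```

On a noisy plane, the spread of the smoothed points about the plane is reduced.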
In the above technical solution, the step of calculating the preprocessed point cloud by using the facial model based on the expression to obtain the template point cloud specifically includes:
acquiring a plurality of known three-dimensional face shape data as a 3DMM training set;
aligning the shape data of each three-dimensional face shape in the 3DMM training set;
calculating the average shape over the three-dimensional face shape data in the 3DMM training set;
calculating the shape difference of each three-dimensional face shape in the 3DMM training set;
connecting the obtained shape differences to establish a difference matrix;
performing principal component analysis on the difference matrix to obtain a matrix of principal components;
obtaining partial columns of the principal component matrix to obtain partial principal component matrix;
projecting the preprocessed point cloud onto part of the principal component matrix to obtain a shape parameter vector;
and generating a template point cloud according to the shape parameter vector.
In step S30, we compute the template point cloud M from the preprocessed point cloud P' using the expression-based facial model. To achieve this, we construct the face template with a three-dimensional morphable model (3D Morphable Model, 3DMM). A 3DMM generates new face shapes and textures by learning from a large amount of known three-dimensional face shape and texture data. In this embodiment, we use a 3DMM based on principal component analysis (Principal Component Analysis, PCA).
First, we need to construct a 3DMM training set of face shapes. Suppose we have N known three-dimensional face shapes as the training set, denoted S_i = (X_i, Y_i, Z_i), i = 1, 2, …, N. We need to align these shape data so that they share the same reference coordinate system. Alignment can use Procrustes analysis, whose basic idea is to minimize the distance between two shapes through rotation, translation and scaling operations. The aligned shape data are denoted S̃_i.
Next, we need to calculate the average shape of all the shapes in the 3DMM training set, expressed as:
S̄ = (1/N) Σ_{i=1}^{N} S̃_i;
We then need to calculate the difference between each shape and the average shape in the 3DMM training set, expressed as:
ΔS_i = S̃_i − S̄;
These difference vectors are connected to form a matrix A, denoted:
A = [ΔS_1, ΔS_2, …, ΔS_N];
Next, we need to perform principal component analysis (PCA) on matrix A to extract the dominant modes of variation of the face shape. The basic idea of PCA is to project the original data into a new coordinate system by a linear transformation such that the variance of the projected data on the new coordinate axes is maximized. PCA can be realized by singular value decomposition (Singular Value Decomposition, SVD). Performing SVD on the matrix A gives: A = UΣV^T, where U is the left singular vector matrix, Σ is the singular value matrix, and V is the right singular vector matrix.
We can obtain a matrix U_k containing the first k principal components by truncating the first k columns of U. These principal components represent the principal modes of variation of the face shape. Then, we can project the preprocessed point cloud P' onto these principal components to obtain a shape parameter vector α, expressed as: α = U_k^T (P' − S̄).
From the shape parameter vector α, we can generate the template point cloud M by linear combination, expressed as: M = S̄ + U_k α.
So far, we have completed the process of computing the template point cloud M from the preprocessed point cloud P' using the expression-based face model. The entire flow of step S30 can be expressed as:
acquiring a plurality of known three-dimensional face shape data as a 3DMM training set;
aligning the shape data of each three-dimensional face shape in the 3DMM training set;
calculating the average shape of the three-dimensional face shape data in the 3DMM training set: S̄ = (1/N) Σ_{i=1}^{N} S_i;
calculating the shape difference of each three-dimensional face shape in the 3DMM training set: ΔS_i = S_i − S̄;
concatenating the resulting shape differences to create a difference matrix A, where A = [ΔS_1, ΔS_2, ..., ΔS_N];
performing principal component analysis on the difference matrix to obtain a matrix of principal components: A = UΣV^T;
taking part of the columns of the principal component matrix to obtain a partial principal component matrix: the first k columns of U are truncated to obtain the partial principal component matrix U_k;
projecting the preprocessed point cloud onto the partial principal component matrix to obtain a shape parameter vector: α = U_k^T (P' − S̄);
generating a template point cloud M from the shape parameter vector, where M = S̄ + U_k α.
In practical applications, the number k of principal components retained in the principal component analysis can be adjusted according to the characteristics of the point cloud data and the subsequent processing requirements so as to achieve the best template generation effect. Meanwhile, in order to improve the expressive capacity of the model, factors such as texture information and illumination information may be introduced into the 3DMM model to generate a more realistic and accurate template point cloud.
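As an informal illustration of the steps above (the function names, the toy random data and the choice of k below are our own, not part of the patent; a real 3DMM would be trained on aligned face scans rather than random vectors), the PCA-based template generation can be sketched as:

```python
import numpy as np

def build_pca_model(shapes, k):
    """Build a PCA shape basis from aligned training shapes.

    shapes: (N, 3m) array of N aligned face shapes, each flattened to 3m coordinates.
    Returns the average shape and the partial principal component matrix U_k.
    """
    mean = shapes.mean(axis=0)                        # average shape
    A = (shapes - mean).T                             # difference matrix, one ΔS_i per column
    U, s, Vt = np.linalg.svd(A, full_matrices=False)  # A = U Σ V^T
    return mean, U[:, :k]                             # truncate to the first k columns

def fit_template(point_cloud, mean, U_k):
    """Project a flattened point cloud onto the basis, then rebuild a template
    by linear combination: M = mean + U_k @ alpha."""
    alpha = U_k.T @ (point_cloud - mean)              # shape parameter vector
    return mean + U_k @ alpha

# toy example: 5 "training shapes" of 4 points each (flattened to 12 coordinates)
rng = np.random.default_rng(0)
train = rng.normal(size=(5, 12))
mean, U_k = build_pca_model(train, k=3)
template = fit_template(rng.normal(size=12), mean, U_k)
```

Projecting onto U_k and reconstructing constrains the input cloud to the learned shape subspace, which is what makes the result usable as a template.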
In the above technical solution, the step of constructing the contour point cloud data set according to the preprocessed point cloud and the corresponding template point cloud specifically includes:
performing nearest neighbor search on each point in the preprocessing point cloud and the template point cloud to obtain a neighbor point set;
calculating the normal vector of each point according to the adjacent point set;
calculating an included angle between the normal vector of each point and the normal vector of the adjacent point;
and extracting the contour point set according to the included angle and the included angle threshold value.
In step S40, we need to construct a contour point cloud dataset from the preprocessed point cloud P' and the corresponding template point cloud M. To achieve this, we employ the following method.
First, we need to extract the contour points of the preprocessed point cloud P' and the template point cloud M. Contour points are points in the point cloud with large normal vector differences, typically located at the edges of the object surface. In this embodiment, we use a normal-vector-based contour extraction method. Specifically, for each point in the point cloud, we calculate its normal vector and those of its neighboring points, and then determine whether the point is a contour point from the angles between these normal vectors. This method can be implemented by the following steps.
First, we need to calculate the normal vector of each point in the preprocessed point cloud P' and the template point cloud M. To achieve this goal, we use a nearest neighbor search (K-Nearest Neighbor Search, KNN Search) to find the neighbors of each point. Specifically, for each point p_i in the point cloud, we search for the k nearest points around it to form a neighbor point set N_i = {n_i1, n_i2, ..., n_ik}. Then, based on the neighbor point set N_i, we calculate the normal vector n_i of the point p_i. In this embodiment, we calculate the normal vector using the principal component analysis (PCA) method. The basic idea of PCA is to project the original data into a new coordinate system by a linear transformation so that the variance of the projected data on the new coordinate axes is maximized. For point cloud data, the normal vector can be understood as the direction with the smallest variance. Specifically, we first compute the covariance matrix C_i of the neighbor point set N_i, expressed as: C_i = (1/k) Σ_{j=1}^{k} (n_ij − n̄_i)(n_ij − n̄_i)^T, where n̄_i is the centroid of the neighbor point set N_i. Then we perform eigenvalue decomposition on the covariance matrix C_i to obtain the eigenvalues λ_1, λ_2, λ_3 and the corresponding eigenvectors e_1, e_2, e_3. We take the eigenvector with the smallest eigenvalue as the normal vector n_i of the point p_i.
Next, we need to extract the contour points according to the normal vectors. To achieve this goal, we use a judgment method based on the angle between normal vectors. Specifically, we calculate the angle between the normal vector of each point and the normal vectors of its neighboring points. If the angle is greater than a preset angle threshold θ, the point is considered a contour point. The angle is calculated as: θ_ij = arccos( (n_i · n_ij) / (‖n_i‖ ‖n_ij‖) ), where θ_ij is the angle between the normal vector n_i of the point p_i and the normal vector n_ij of its neighboring point n_ij. By comparing the angle θ_ij with the angle threshold θ, we can extract the contour point sets C_P′ and C_M from the preprocessed point cloud P' and the template point cloud M.
So far we have completed the process of constructing the contour point cloud dataset from the pre-processed point cloud P' and the corresponding template point cloud M. The entire flow of step S40 can be expressed as:
performing a nearest neighbor search for each point in the preprocessed point cloud P' and the template point cloud M to obtain the neighbor point sets N_i;
calculating the normal vector n_i of each point from the neighbor point set N_i;
calculating the angle θ_ij between the normal vector of each point and the normal vectors of its neighboring points;
extracting the contour point sets C_P′ and C_M according to the angles and the angle threshold θ; typically, the angle threshold is 5°.
In practical applications, parameters such as the number k of neighbors in the nearest neighbor search, the eigenvalue decomposition method used for normal estimation, and the angle threshold θ for contour extraction can be adjusted according to the characteristics of the point cloud data and the subsequent processing requirements, so as to achieve the best contour point cloud data set construction effect. Meanwhile, in order to improve the accuracy of contour extraction, other features (such as curvature, density, etc.) may be introduced into the contour extraction process to realize a more accurate contour point cloud data set construction.
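A minimal sketch of this contour extraction, assuming a brute-force neighbor search and our own function names and toy data (neither is prescribed by the patent):

```python
import numpy as np

def estimate_normals(points, k=8):
    """Estimate each point's normal as the smallest-variance PCA direction
    of its k nearest neighbours (brute-force KNN search)."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    nbr_idx = np.argsort(d, axis=1)[:, 1:k + 1]      # skip the point itself
    normals = np.empty_like(points)
    for i, idx in enumerate(nbr_idx):
        C = np.cov(points[idx].T)                    # covariance of the neighbour set
        w, v = np.linalg.eigh(C)                     # eigenvalues in ascending order
        normals[i] = v[:, 0]                         # smallest-eigenvalue eigenvector
    return normals, nbr_idx

def contour_points(points, angle_deg=5.0, k=8):
    """Flag points whose normal deviates from a neighbour's normal by more
    than the angle threshold."""
    normals, nbr_idx = estimate_normals(points, k)
    thresh = np.radians(angle_deg)
    mask = np.zeros(len(points), dtype=bool)
    for i, idx in enumerate(nbr_idx):
        # |cos| because the sign of a PCA eigenvector is arbitrary
        cos = np.abs(normals[idx] @ normals[i])
        mask[i] = np.any(np.arccos(np.clip(cos, -1.0, 1.0)) > thresh)
    return mask

# a flat plane has uniform normals, so no point should be flagged as a contour point
xs, ys = np.meshgrid(np.arange(5.0), np.arange(5.0))
plane = np.stack([xs.ravel(), ys.ravel(), np.zeros(25)], axis=1)
mask = contour_points(plane, angle_deg=5.0, k=6)
```

On a real face scan, the flagged points concentrate along creases and silhouette edges, which is exactly the contour set the registration step needs.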
In the above technical solution, the step of selecting the initial angle based on the bayesian optimization algorithm and rotating the contour point cloud data set specifically includes:
constructing an objective function for measuring the coincidence degree of the rotated contour point cloud data set and the template point cloud;
initializing a Gaussian process model according to an objective function, selecting a constant mean function and a square index covariance function, and selecting a greedy sampling strategy as a sampling criterion;
iteratively updating the Gaussian process model until convergence conditions are met, and obtaining an optimal rotation angle;
and rotating the contour point cloud data set according to the optimal rotation angle to obtain a rotated contour point cloud data set.
In this embodiment, we need to rotate the contour point cloud data set. To simplify the problem, we assume that rotation only occurs in the X-Y plane, i.e. only the rotation angle about the Z axis needs to be considered. We denote the rotation angle by θ; the goal is to find an optimal rotation angle θ* that maximizes the degree of coincidence between the rotated contour point cloud data set and the template point cloud.
To achieve this, we first need to define an objective function f(θ) that measures the degree of coincidence between the rotated contour point cloud data set and the template point cloud. In this embodiment, we use the distance between the point clouds as the objective function. Specifically, we compute the Euclidean distance between the rotated contour point cloud data set C_P′(θ) and the template point cloud C_M, expressed as: f(θ) = (1/|C_P′(θ)|) Σ_{p ∈ C_P′(θ)} min_{q ∈ C_M} ‖p − q‖₂;
where |C_P′(θ)| and |C_M| denote the numbers of points in the rotated contour point cloud data set and the template point cloud respectively, and ‖·‖₂ denotes the Euclidean norm.
Next, we need to find the optimal solution of the objective function f(θ) using a Bayesian optimization algorithm. The main steps of the Bayesian optimization algorithm are as follows:
A Gaussian process model is initialized. A Gaussian process is a random process whose joint distribution is Gaussian for any finite subset. The Gaussian process can be described by a mean function μ(θ) and a covariance function k(θ, θ′), expressed as: f(θ) ~ GP(μ(θ), k(θ, θ′)).
In this embodiment, we can choose a constant mean function and a squared-exponential covariance function, expressed as: μ(θ) = c, k(θ, θ′) = σ² exp(−(θ − θ′)² / (2l²));
where c is a constant, σ² is the variance, and l is the length scale. These parameters can be estimated from training data.
A sampling strategy is selected. In a Bayesian optimization algorithm, we need to select a new sample point in each iteration to update the Gaussian process model. To achieve this, we need to define a sampling criterion (Acquisition Function) for measuring the goodness of each candidate point. In this embodiment, we employ a greedy sampling strategy (Greedy Sampling Strategy), expressed as: θ_{t+1} = argmin_{θ ∈ Θ} ( μ(θ) − κσ(θ) );
where Θ denotes the search space, σ(θ) denotes the standard deviation of the Gaussian process model at the point θ, and κ is a trade-off factor controlling exploration and exploitation. A larger κ value makes the algorithm more inclined to explore unknown regions, while a smaller κ value makes the algorithm more inclined to exploit known information.
Updating the Gaussian process model. For the newly sampled point θ_{t+1}, we can calculate the objective function value f(θ_{t+1}) and then add the point to the training data set to update the Gaussian process model.
And judging the convergence condition. Stopping iteration if the preset iteration times are reached or the change of the objective function value is smaller than a preset threshold value; otherwise, returning to the step 2, and continuing to select a new sample point.
Through the above steps, we can obtain the optimal solution θ* of the objective function f(θ). Then, according to the optimal rotation angle θ*, we rotate the contour point cloud data set C_P′ to obtain the rotated contour point cloud data set C_P′(θ*).
In practical application, parameters in the Bayesian optimization algorithm, such as parameters of a mean function and a covariance function, a weighing factor kappa of a sampling strategy and the like, can be adjusted according to the characteristics of the point cloud data and the subsequent processing requirements so as to achieve the optimal rotation effect. Meanwhile, in order to improve the rotation precision, other optimization algorithms (such as genetic algorithm, particle swarm optimization and the like) can be considered to be introduced into the rotation process so as to realize more accurate rotation of the contour point cloud data set.
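The loop above can be sketched as follows. This is only an illustration under our own simplifications, none of which are fixed by the patent: a discretized angle grid, fixed kernel hyper-parameters, a chamfer-style point cloud distance, and maximization of the negated distance with a UCB acquisition (equivalent to minimizing the distance with μ(θ) − κσ(θ)):

```python
import numpy as np

def chamfer(src, dst):
    """Mean nearest-neighbour distance from src to dst (smaller = better overlap)."""
    d = np.linalg.norm(src[:, None] - dst[None, :], axis=2)
    return d.min(axis=1).mean()

def rotate_z(pts, theta):
    c, s = np.cos(theta), np.sin(theta)
    return pts @ np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]).T

def bayes_opt_rotation(cloud, template, iters=15, kappa=2.0, sigma2=1.0, ell=0.5):
    """1-D Bayesian optimisation of the Z rotation angle: Gaussian-process
    surrogate with zero mean and squared-exponential covariance, plus a
    UCB-style greedy acquisition over a discretized angle grid."""
    kern = lambda a, b: sigma2 * np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))
    grid = np.linspace(-np.pi, np.pi, 181)
    X = list(np.linspace(-np.pi, np.pi, 4))                  # initial design
    y = [-chamfer(rotate_z(cloud, t), template) for t in X]  # maximise -distance
    for _ in range(iters):
        Xa, ya = np.array(X), np.array(y)
        K = kern(Xa, Xa) + 1e-6 * np.eye(len(Xa))            # jitter for stability
        Ks = kern(grid, Xa)
        mu = Ks @ np.linalg.solve(K, ya)                     # GP posterior mean
        var = sigma2 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
        ucb = mu + kappa * np.sqrt(np.maximum(var, 0.0))     # acquisition function
        t_next = grid[np.argmax(ucb)]                        # greedy sampling
        X.append(float(t_next))
        y.append(-chamfer(rotate_z(cloud, t_next), template))
    return X[int(np.argmax(y))]                              # best sampled angle

# toy check: a cloud rotated by 0.6 rad should be rotated back toward the template
template = np.array([[1.0, 0.0, 0.0], [2.0, 0.0, 0.0], [3.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
cloud = rotate_z(template, 0.6)
theta_star = bayes_opt_rotation(cloud, template)
```

Because the surrogate is cheap to evaluate, each iteration spends its single expensive distance computation on the most promising candidate angle rather than sweeping the whole grid.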
In the above technical solution, the step of calculating the center of gravity of the rotated contour point cloud data set and the template point cloud, and translating to overlap the center of gravity specifically includes:
calculating the gravity centers of the rotated contour point cloud data set and the template point cloud;
calculating translation vectors between the centers of gravity;
and translating the rotated contour point cloud data set along the translation vector to obtain a contour point cloud data set with overlapped center of gravity.
In step S60, we first need to calculate the centers of gravity of the rotated contour point cloud data set C_P′(θ*) and the template point cloud C_M. The center of gravity is the mean of all points in the point cloud data and can be calculated by the following formulas: G_P′(θ*) = (1/|C_P′(θ*)|) Σ_{p ∈ C_P′(θ*)} p and G_M = (1/|C_M|) Σ_{q ∈ C_M} q;
where G_P′(θ*) and G_M denote the centers of gravity of the rotated contour point cloud data set and the template point cloud respectively, and |C_P′(θ*)| and |C_M| denote their numbers of points.
Next, we need to translate the rotated contour point cloud data set C_P′(θ*) so that its center of gravity coincides with that of the template point cloud C_M. To achieve this, we need to calculate the translation vector t between the centers of gravity, expressed as: t = G_M − G_P′(θ*);
Then, we translate the rotated contour point cloud data set C_P′(θ*) along the translation vector t to obtain the contour point cloud data set with coincident center of gravity, C̃_P′(θ*), expressed as: C̃_P′(θ*) = { p + t | p ∈ C_P′(θ*) }.
So far, we have completed the process of calculating the centers of gravity of the rotated contour point cloud data set and the template point cloud and making them coincide. The entire flow of step S60 can be expressed as: calculating the centers of gravity G_P′(θ*) and G_M of the rotated contour point cloud data set C_P′(θ*) and the template point cloud C_M;
calculating the translation vector t between the centers of gravity;
translating the rotated contour point cloud data set C_P′(θ*) along the translation vector t to obtain the contour point cloud data set with coincident center of gravity.
In practical application, the method for calculating the gravity center and the translation vector can be adjusted according to the characteristics of the point cloud data and the subsequent processing requirements so as to achieve the optimal gravity center overlapping effect. Meanwhile, in order to improve the accuracy of barycenter overlapping, other optimization algorithms (such as genetic algorithm, particle swarm optimization and the like) can be considered to be introduced into the barycenter overlapping process, so that more accurate barycenter overlapping of the contour point cloud data set and the template point cloud is realized.
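In numpy terms, with illustrative data of our own choosing, the centroid computation and translation reduce to a few lines:

```python
import numpy as np

def align_centroids(cloud, template):
    """Translate `cloud` so its centre of gravity coincides with `template`'s.
    The translation vector is t = G_M - G_P' (difference of the two centroids)."""
    t = template.mean(axis=0) - cloud.mean(axis=0)
    return cloud + t, t

src = np.array([[1.0, 0.0, 0.0], [3.0, 2.0, 0.0], [2.0, 4.0, 6.0]])  # centroid (2, 2, 2)
dst = np.array([[0.0, 0.0, 0.0], [2.0, 2.0, 2.0]])                   # centroid (1, 1, 1)
moved, t = align_centroids(src, dst)
```

Centroid alignment removes the gross translation offset so that the subsequent fine registration only has to resolve rotation and small residual motion.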
In the above technical solution, the step of performing registration calculation on the preprocessed point cloud and the corresponding template point cloud by adopting an ICP algorithm, and after iterating for a preset number of times, calculating a fitness, extracting a rotation parameter and a translation parameter according to the fitness, and estimating a face pose specifically includes:
step 1, initializing a rotation matrix and a translation vector;
step 2, for each point in the preprocessing point cloud, searching a point closest to the point in the template point cloud M to form a point pair;
step 3, calculating an optimal rotation matrix and a translation vector according to the point pairs, so that the square sum of the distances between the point pairs is minimum;
step 4, updating the preprocessing point cloud;
judging a convergence condition, and stopping iteration if the preset iteration times are reached or the change of the distance between the point pairs is smaller than a preset threshold value; otherwise, returning to the step 2, and continuing to calculate the point pairs.
In step S70, we need to use an iterative closest point (Iterative Closest Point, ICP) algorithm to register the preprocessed point cloud P' and the corresponding template point cloud M. The main objective of the ICP algorithm is to find an optimal rotation matrix Γ and translation vector ψ such that the point-to-point distance between the preprocessed point cloud P' and the template point cloud M is minimized. The main steps of the ICP algorithm are as follows:
initializing the rotation matrix Γ and the translation vector Ψ;
for each point p_i in the preprocessed point cloud P′, searching for the nearest point m_j in the template point cloud M to form a point pair (p_i, m_j);
based on the point pairs (p_i, m_j), calculating the optimal rotation matrix Γ and translation vector Ψ so that the sum of squared distances between the point pairs is minimized;
updating the preprocessed point cloud P′, expressed as: p′_i = Γp_i + Ψ;
And judging the convergence condition. Stopping iteration if the preset iteration times are reached or the change of the distance between the point pairs is smaller than a preset threshold value; otherwise, returning to the step 2, and continuing to calculate the point pairs.
In practical application, parameters in the ICP algorithm, such as iteration times, convergence threshold values and the like, can be adjusted according to the characteristics of the point cloud data and the subsequent processing requirements so as to achieve the optimal registration effect. Meanwhile, in order to improve the registration accuracy, other optimization algorithms (such as genetic algorithm, particle swarm optimization and the like) can be considered to be introduced into the registration process so as to realize more accurate point cloud data registration.
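A compact sketch of the point-to-point ICP loop described above. The Kabsch (SVD) solution for the per-iteration rigid transform and the brute-force pairing are standard choices, but the function names and the toy data are ours, not the patent's:

```python
import numpy as np

def icp(src, dst, iters=20, tol=1e-6):
    """Point-to-point ICP: nearest-neighbour pairing plus an SVD (Kabsch)
    update of the rotation and translation, iterated until convergence."""
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None] - dst[None, :], axis=2)
        pairs = dst[d.argmin(axis=1)]                # closest template point per point
        err = d.min(axis=1).mean()
        if abs(prev_err - err) < tol:                # convergence test
            break
        prev_err = err
        # optimal rigid transform for the current pairing (Kabsch algorithm)
        mu_s, mu_d = cur.mean(axis=0), pairs.mean(axis=0)
        H = (cur - mu_s).T @ (pairs - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
        R = Vt.T @ D @ U.T
        t = mu_d - R @ mu_s
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t  # compose transforms
    return R_total, t_total, cur

# toy check: recover a small known rotation about Z plus a translation
dst = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0], [2.0, 1.0, 0.0]])
th = 0.2
Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th), np.cos(th), 0.0],
               [0.0, 0.0, 1.0]])
src = dst @ Rz.T + np.array([0.1, -0.05, 0.02])
R, t, aligned = icp(src, dst)
```

Because the initial misalignment here is small, the first nearest-neighbour pairing is already correct and one Kabsch step recovers the transform; the earlier rotation and centroid steps exist precisely to put real data into this regime.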
In the above technical solution, according to the obtained rotation parameter, translation parameter and face pose, the steps of optimizing and reconstructing the preprocessed point cloud to obtain the basic optimized point cloud are as follows: and transforming each point in the preprocessed point cloud through the rotation matrix and the translation vector to obtain the optimized point cloud.
In step S80, we need to optimally reconstruct the preprocessed point cloud P′ according to the obtained rotation matrix Γ, translation vector Ψ and the face pose to obtain the basic optimized point cloud P″. Specifically, we transform each point P′_i in the preprocessed point cloud P′ by the rotation matrix Γ and the translation vector Ψ to obtain the optimized point cloud P″, expressed as: P″_i = ΓP′_i + Ψ, i = 1, 2, ..., |P′|.
In the above technical solution, the step of post-processing the basic optimization point cloud specifically includes: smoothing and detail restoration processing.
In step S90, we need to post-process the basic optimized point cloud p″ including smoothing and detail restoration, and output the optimized three-dimensional face point cloud data. To achieve this goal, we can use a method similar to that in step S20 to smooth the basic optimization point cloud P ", such as the moving least squares method (Moving Least Squares, MLS), and so on. Meanwhile, the basic optimization point cloud P' can be subjected to detail recovery according to the detail information in the template point cloud M, so that more real and accurate three-dimensional face point cloud data can be obtained. The specific detail restoration method can be selected according to actual application requirements, such as detail restoration based on local features, detail restoration based on deep learning and the like.
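As a stand-in for the MLS smoothing mentioned above (a genuine MLS fit is more involved; the neighbour-averaging below, its parameters and the toy data are our own simplification), the post-processing smoothing can be sketched as:

```python
import numpy as np

def knn_smooth(points, k=4, lam=0.5, iters=2):
    """Lightweight smoothing: move each point a fraction `lam` toward the
    centroid of its k nearest neighbours, repeated `iters` times."""
    pts = points.copy()
    for _ in range(iters):
        d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
        idx = np.argsort(d, axis=1)[:, 1:k + 1]      # k nearest, excluding self
        centroids = pts[idx].mean(axis=1)
        pts = (1.0 - lam) * pts + lam * centroids
    return pts

# toy check: z-noise on a flat grid should shrink after smoothing
rng = np.random.default_rng(1)
xs, ys = np.meshgrid(np.arange(6.0), np.arange(6.0))
noisy = np.stack([xs.ravel(), ys.ravel(), 0.1 * rng.normal(size=36)], axis=1)
smoothed = knn_smooth(noisy)
```

The trade-off is the same as for MLS: stronger smoothing (larger `lam`, more iterations) suppresses noise but also erodes fine facial detail, which is why a separate detail-restoration pass follows.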
A second aspect of the present invention provides a computer readable storage medium having stored therein program instructions that when executed are configured to perform a three-dimensional face point cloud optimization method as described above.
A third aspect of the present invention provides a three-dimensional face point cloud optimization system, which includes the above-mentioned computer-readable storage medium.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention. Therefore, the protection scope of the invention is subject to the protection scope of the claims.

Claims (10)

1. The three-dimensional face point cloud optimization method is characterized by comprising the following steps of:
s10, acquiring initial point cloud data of a human face;
s20, preprocessing the initial point cloud data, including filtering and noise reduction, to obtain preprocessed point cloud;
s30, calculating the preprocessing point cloud by using a facial model based on the expression to obtain a template point cloud;
s40, constructing a contour point cloud data set according to the preprocessing point cloud and the corresponding template point cloud;
s50, selecting an initial angle based on a Bayesian optimization algorithm, and rotating the contour point cloud data set;
s60, calculating the gravity centers of the rotated outline point cloud data set and the template point cloud, and translating to enable the gravity centers to overlap;
s70, carrying out registration calculation on the preprocessed point cloud and the corresponding template point cloud by adopting an ICP algorithm, after iteration is carried out for preset times, calculating fitness, extracting rotation parameters and translation parameters according to the fitness, and estimating the pose of the face;
s80, optimizing and reconstructing the preprocessed point cloud according to the obtained rotation parameters, translation parameters and the facial pose to obtain a basic optimized point cloud;
and S90, performing post-processing on the basic optimization point cloud, and outputting optimized three-dimensional face point cloud data.
2. The method for optimizing three-dimensional face point cloud according to claim 1, wherein the step of calculating the preprocessed point cloud by using the face model based on the expression to obtain the template point cloud specifically comprises the following steps:
acquiring known multiple three-dimensional face shape data as a 3dmm training set;
aligning the shape data of each three-dimensional face shape data in the 3dmm training set;
calculating the average shape of each three-dimensional face shape data in the 3dmm training set;
calculating the shape difference of the shape data of each three-dimensional face in the 3dmm training set;
connecting the obtained shape differences to establish a difference matrix;
performing principal component analysis on the difference matrix to obtain a matrix of principal components;
obtaining partial columns of the principal component matrix to obtain partial principal component matrix;
projecting the preprocessed point cloud onto part of the principal component matrix to obtain a shape parameter vector;
and generating a template point cloud according to the shape parameter vector.
3. The method for optimizing three-dimensional face point cloud according to claim 1, wherein the step of constructing a contour point cloud data set according to the preprocessed point cloud and the corresponding template point cloud specifically comprises:
performing nearest neighbor search on each point in the preprocessing point cloud and the template point cloud to obtain a neighbor point set;
calculating the normal vector of each point according to the adjacent point set;
calculating an included angle between the normal vector of each point and the normal vector of the adjacent point;
and extracting the contour point set according to the included angle and the included angle threshold value.
4. The method for optimizing three-dimensional face point cloud according to claim 1, wherein the step of selecting an initial angle based on a bayesian optimization algorithm and rotating the contour point cloud data set specifically comprises:
constructing an objective function for measuring the coincidence degree of the rotated contour point cloud data set and the template point cloud;
initializing a Gaussian process model according to an objective function, selecting a constant mean function and a square index covariance function, and selecting a greedy sampling strategy as a sampling criterion;
iteratively updating the Gaussian process model until convergence conditions are met, and obtaining an optimal rotation angle;
and rotating the contour point cloud data set according to the optimal rotation angle to obtain a rotated contour point cloud data set.
5. The method for optimizing three-dimensional face point cloud as claimed in claim 1, wherein the step of calculating the center of gravity of the rotated contour point cloud data set and the template point cloud and translating to overlap the center of gravity comprises:
calculating the gravity centers of the rotated contour point cloud data set and the template point cloud;
calculating translation vectors between the centers of gravity;
and translating the rotated contour point cloud data set along the translation vector to obtain a contour point cloud data set with overlapped center of gravity.
6. The method for optimizing three-dimensional face point cloud according to claim 1, wherein the step of performing registration calculation on the preprocessed point cloud and the corresponding template point cloud by using an ICP algorithm, and after iterating for a preset number of times, calculating fitness, extracting rotation parameters and translation parameters according to the fitness, and estimating face pose specifically comprises:
step 1, initializing a rotation matrix and a translation vector;
step 2, for each point in the preprocessing point cloud, searching a point closest to the point in the template point cloud M to form a point pair;
step 3, calculating an optimal rotation matrix and a translation vector according to the point pairs, so that the square sum of the distances between the point pairs is minimum;
step 4, updating the preprocessing point cloud;
judging a convergence condition, and stopping iteration if the preset iteration times are reached or the change of the distance between the point pairs is smaller than a preset threshold value; otherwise, returning to the step 2, and continuing to calculate the point pairs.
7. The method for optimizing the three-dimensional human face point cloud according to claim 1, wherein the step of optimizing and reconstructing the preprocessed point cloud according to the obtained rotation parameter, translation parameter and human face pose to obtain the basic optimized point cloud comprises the following steps: and transforming each point in the preprocessed point cloud through the rotation matrix and the translation vector to obtain the optimized point cloud.
8. The method for optimizing the three-dimensional face point cloud according to claim 1, wherein the step of post-processing the basic optimized point cloud is specifically: smoothing and detail restoration processing.
9. A computer readable storage medium, wherein program instructions are stored in the computer readable storage medium, the program instructions being operable to perform a three-dimensional face point cloud optimization method according to any one of claims 1-8.
10. A three-dimensional face point cloud optimization system comprising the computer-readable storage medium of claim 9.
CN202310755275.5A 2023-06-25 2023-06-25 Three-dimensional face point cloud optimization method, medium and system Pending CN116883590A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310755275.5A CN116883590A (en) 2023-06-25 2023-06-25 Three-dimensional face point cloud optimization method, medium and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310755275.5A CN116883590A (en) 2023-06-25 2023-06-25 Three-dimensional face point cloud optimization method, medium and system

Publications (1)

Publication Number Publication Date
CN116883590A true CN116883590A (en) 2023-10-13

Family

ID=88261277

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310755275.5A Pending CN116883590A (en) 2023-06-25 2023-06-25 Three-dimensional face point cloud optimization method, medium and system

Country Status (1)

Country Link
CN (1) CN116883590A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117315211A (en) * 2023-11-29 2023-12-29 苏州元脑智能科技有限公司 Digital human synthesis and model training method, device, equipment and storage medium thereof
CN117315211B (en) * 2023-11-29 2024-02-23 苏州元脑智能科技有限公司 Digital human synthesis and model training method, device, equipment and storage medium thereof

Similar Documents

Publication Publication Date Title
CN106709947B (en) Three-dimensional human body rapid modeling system based on RGBD camera
CN109872397B (en) Three-dimensional reconstruction method of airplane parts based on multi-view stereo vision
US10484663B2 (en) Information processing apparatus and information processing method
Remondino et al. Dense image matching: Comparisons and analyses
CN110335234B (en) Three-dimensional change detection method based on antique LiDAR point cloud
Muratov et al. 3DCapture: 3D Reconstruction for a Smartphone
Ummenhofer et al. Point-based 3D reconstruction of thin objects
Kroemer et al. Point cloud completion using extrusions
Wang et al. Plane-based optimization of geometry and texture for RGB-D reconstruction of indoor scenes
CN116883590A (en) Three-dimensional face point cloud optimization method, medium and system
Kallwies et al. Triple-SGM: stereo processing using semi-global matching with cost fusion
CN116129037A (en) Visual touch sensor, three-dimensional reconstruction method, system, equipment and storage medium thereof
Achenbach et al. Accurate Face Reconstruction through Anisotropic Fitting and Eye Correction.
Brunton et al. Wavelet model-based stereo for fast, robust face reconstruction
CN114612412A (en) Processing method of three-dimensional point cloud data, application of processing method, electronic device and storage medium
CN113393524B (en) Target pose estimation method combining deep learning and contour point cloud reconstruction
Tong et al. 3D point cloud initial registration using surface curvature and SURF matching
Dai et al. A novel two-stage algorithm for accurate registration of 3-D point clouds
CN110969650B (en) Intensity image and texture sequence registration method based on central projection
CN112767459A (en) Unmanned aerial vehicle laser point cloud and sequence image registration method based on 2D-3D conversion
Maninchedda et al. Face reconstruction on mobile devices using a height map shape model and fast regularization
Jisen A study on target recognition algorithm based on 3D point cloud and feature fusion
CN113256693A (en) Multi-view registration method based on K-means and normal distribution transformation
CN111915632A (en) Poor texture target object truth value database construction method based on machine learning
CN117456114B (en) Multi-view-based three-dimensional image reconstruction method and system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination