CN111581412B - Method, device, equipment and storage medium for constructing face shape library - Google Patents


Info

Publication number
CN111581412B
CN111581412B (application number CN202010524242.6A)
Authority
CN
China
Prior art keywords
face
data
face data
data set
disturbance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010524242.6A
Other languages
Chinese (zh)
Other versions
CN111581412A (en)
Inventor
王盛
林祥凯
暴林超
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202010524242.6A
Publication of CN111581412A
Application granted
Publication of CN111581412B
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 — Information retrieval; database structures therefor; file system structures therefor
    • G06F16/50 — Information retrieval of still image data
    • G06F16/51 — Indexing; data structures therefor; storage structures
    • G06F16/58 — Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 — Retrieval using metadata automatically derived from the content
    • G06F16/5854 — Retrieval using metadata automatically derived from the content, using shape and object relationship
    • G06F18/00 — Pattern recognition
    • G06F18/20 — Analysing
    • G06F18/21 — Design or setup of recognition systems or techniques; extraction of features in feature space; blind source separation
    • G06F18/213 — Feature extraction, e.g. by transforming the feature space; summarisation; mappings, e.g. subspace methods
    • G06F18/2135 — Feature extraction based on approximation criteria, e.g. principal component analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Library & Information Science (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a method, a device, equipment and a storage medium for constructing a face shape library, and relates to the field of image processing. The method comprises the following steps: acquiring a face data set; carrying out data disturbance processing on face data in the face data set to obtain at least two groups of disturbance data; obtaining the face shape base of the face data set by carrying out principal component analysis on the face data set; and, in response to the fitting error of target disturbance data in the disturbance data being greater than an error threshold, adding the target disturbance data to the face data set and iteratively updating the face data set to obtain the face shape library, wherein the fitting error is the error value between the fitted target disturbance data and the target disturbance data, and the fitted target disturbance data is the face data obtained by fitting the target disturbance data according to the face shape base of the face data set. The method can obtain a face shape library with strong expressive capability using only limited face data.

Description

Method, device, equipment and storage medium for constructing face shape library
Technical Field
The embodiment of the application relates to the field of image processing, in particular to a method, a device, equipment and a storage medium for constructing a face shape library.
Background
The face shape library is used for providing standard face shape bases, and when the computer equipment collects real face data, a three-dimensional face model similar to the real face data can be constructed based on the standard face shape bases in the face shape library.
Taking the Basel Face Model (BFM) as an example, original face data are acquired by a high-precision three-dimensional scanner; after acquisition, the original face data are registered through non-rigid registration (non-rigid ICP, abbreviated nricp) to obtain a face mesh, under a specific template, that is similar to the original face data; the obtained face meshes are then processed by principal component analysis to construct the BFM shape library.
In the above technical solution, constructing the BFM shape library requires collecting a large amount of high-precision face data, which makes the construction process of the face shape library difficult.
Disclosure of Invention
The embodiment of the application provides a method, a device, equipment and a storage medium for constructing a face shape library, which can obtain the face shape library with strong expression capability by using limited face data. The technical scheme is as follows:
in one aspect, an embodiment of the present application provides a method for constructing a face shape library, where the method includes:
Acquiring a face data set, wherein the face data set stores at least one group of face data;
carrying out data disturbance processing on the face data in the face data set to obtain at least two groups of disturbance data, wherein the data disturbance processing comprises at least one of rotation processing, translation processing and scaling processing;
obtaining the face shape base of the face data set by carrying out principal component analysis on the face data set;
and, in response to the fitting error of target disturbance data in the disturbance data being greater than an error threshold, adding the target disturbance data to the face data set and iteratively updating the face data set to obtain the face shape library, wherein the fitting error is the error value between the fitted target disturbance data and the target disturbance data, and the fitted target disturbance data is the face data obtained by fitting the target disturbance data according to the face shape base of the face data set.
In another aspect, an embodiment of the present application provides a device for constructing a face shape library, where the device includes:
the acquisition module is used for acquiring a face data set, and the face data set stores at least one group of face data;
the disturbance module is used for carrying out data disturbance processing on the face data in the face data set to obtain at least two groups of disturbance data, wherein the data disturbance processing comprises at least one of rotation processing, translation processing and scaling processing;
the computing module is used for obtaining the face shape base of the face data set by carrying out principal component analysis on the face data set;
the iteration module is used for, in response to the fitting error of target disturbance data in the disturbance data being greater than an error threshold, adding the target disturbance data to the face data set and iteratively updating the face data set to obtain the face shape library, wherein the fitting error is the error value between the fitted target disturbance data and the target disturbance data, and the fitted target disturbance data is the face data obtained by fitting the target disturbance data according to the face shape base of the face data set.
In another aspect, an embodiment of the present application provides a computer device, where the computer device includes a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the method for constructing a face shape library according to the above aspect.
In another aspect, there is provided a computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by a processor to implement a method of building a face shape library as described in the above aspect.
In another aspect, there is provided a computer program product which, when run on a computer, causes the computer to perform the method of constructing a face shape library as described in the above aspects.
The technical scheme provided by the embodiment of the application has the beneficial effects that at least:
Disturbance data are obtained by carrying out data disturbance processing on a small amount of face data in the face data set; the disturbance data are fitted according to the face shape base of the face data set, and the fitting error between the fitting result and the disturbance data is calculated. Disturbance data with a large fitting error, i.e., data that the face shape base expresses only weakly, are added to the face data set to form the face shape library, so that the face shape base of the resulting face shape library has better expressive capability. The method uses only a small amount of high-precision face data and expands it through data disturbance, so the user does not need to collect a large amount of high-precision face data, which simplifies the construction process of the face shape library and improves construction efficiency. Principal component analysis and error analysis are used to screen part of the expanded high-precision face data into the face data set, accurately extending exactly the face data that the current face shape base expresses weakly, and efficiently improving the expressive capability of the face shape library.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic illustration of an implementation environment provided by an exemplary embodiment of the present application;
FIG. 2 is a flowchart illustrating a method for constructing a face shape library according to an exemplary embodiment of the present application;
FIG. 3 is a schematic diagram of a data preparation process provided by an exemplary embodiment of the present application;
fig. 4 is a flowchart illustrating a method for constructing a face shape library according to another exemplary embodiment of the present application;
fig. 5 is a flowchart illustrating a method for constructing a face shape library according to another exemplary embodiment of the present application;
FIG. 6 is a schematic diagram illustrating a mirror augmentation process provided by an exemplary embodiment of the present application;
FIG. 7 is a schematic diagram of an alternative expansion process provided by an exemplary embodiment of the present application;
FIG. 8 is a block diagram of a construction apparatus for a face shape library according to an exemplary embodiment of the present application;
Fig. 9 is a schematic diagram showing a structure of a computer device according to an exemplary embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
Artificial intelligence (Artificial Intelligence, AI) is the theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning and decision-making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, involving both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technology mainly includes computer vision, speech processing, natural language processing, and machine learning/deep learning.
Computer Vision (CV) is the science of how to make machines "see": replacing human eyes with cameras and computers to recognize, track and measure targets, and further performing graphics processing so that the result becomes an image more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies related theories and technologies in an attempt to build artificial intelligence systems that can acquire information from images or multidimensional data. Computer vision techniques typically include image processing, image recognition, image semantic understanding, image retrieval, optical character recognition (Optical Character Recognition, OCR), video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D techniques, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric techniques such as face recognition and fingerprint recognition.
The method for constructing the face shape library provided by the embodiment of the application can be applied to the following scenes:
1. face shape construction of virtual character
In this application scenario, the high-precision face shape library constructed by the method provided in the embodiment of the application can be deployed in a terminal or server that provides a virtual-character face construction function. Face acquisition data (usually low-precision data) collected from the user's own face by the camera component of the terminal are fitted using the high-precision face shape library, and a high-precision face shape is output. The virtual character then restores and displays this high-precision face shape, so that the face of the virtual character is consistent with the user's actual face, realizing the construction of the virtual character's face.
2. Virtual character construction in gaming applications
In this application scenario, the high-precision face shape library constructed by the method provided in the embodiment of the application can be deployed in the background server of a game application. When a virtual character is constructed, the user uses the terminal to collect face data of his or her own face, and the collected face data are uploaded to the background server. The background server generates a high-precision face shape from the face data and the high-precision face shape library and feeds it back to the game application, which reconstructs the face of the virtual character accordingly, finally constructing, in the game application, a virtual character whose face is the same as the user's.
Of course, the above description is only given by taking two application scenarios as examples, and the method provided by the embodiment of the present application may be applied to other scenarios (such as video call using virtual characters, virtual person construction in virtual reality technology, etc.) where high-precision face data needs to be constructed, and the embodiment of the present application does not limit specific application scenarios.
The method for constructing the face shape library provided by the embodiment of the application can be applied to computer equipment with stronger data processing capability. In a possible implementation manner, the method for constructing the face shape library provided by the embodiment of the application can be applied to a personal computer, a workstation or a server, namely, the face shape library can be constructed through the personal computer, the workstation or the server.
The constructed face shape library can be realized to be a part of an application program and is installed in a terminal, so that the terminal has the function of generating high-precision face data according to low-precision data; alternatively, the face shape library is provided in a background server of the application program, so that the terminal installed with the application program realizes a related function (such as construction of a face shape) based on high-precision face data by means of the background server.
Referring to FIG. 1, a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application is shown. The implementation environment includes a terminal 210 and a server 220, where data communication between the terminal 210 and the server 220 is performed through a communication network. Optionally, the communication network may be a wired or wireless network, and may be at least one of a local area network, a metropolitan area network, and a wide area network.
The terminal 210 has installed in it an application program with face shape construction requirements, which may be a virtual reality application, a game application, a dynamic expression application, or an artificial intelligence (AI) application with a face construction function; this is not limited in the embodiment of the present application.
Optionally, the terminal 210 may be a mobile terminal such as a tablet computer or a laptop portable notebook computer, or a terminal such as a desktop computer or a projection computer, which is not limited in the embodiment of the present application.
The server 220 may be implemented as a server or a server cluster formed by a group of servers, which may be a physical server or a cloud server. In one possible implementation, server 220 is a background server for applications in terminal 210.
As shown in fig. 1, in this embodiment, during the face shape library construction stage, the server 220 first acquires the input original face data 31. The original face data 31 undergo data disturbance processing to obtain disturbance data 33; the face shape base of the original face data 31 is then used to fit the disturbance data 33 to obtain the face shape coefficients of the disturbance data, and the fitted disturbance data can be obtained from the face shape coefficients and the face shape base. Disturbance data whose error value relative to the fitted disturbance data is large are selected as target disturbance data, and the target disturbance data together with the original face data 31 form the face shape library 35. The above steps are repeated to iteratively update the face shape library 35 and expand the face data in it, obtaining the final face shape library. When the server 220 receives low-precision face acquisition data sent by the terminal 210 (collected when the user photographs his or her front face with the camera component of the terminal 210), it fits the low-precision face acquisition data with the face shape library 35 to obtain high-precision face generation data 36 and feeds these back to the terminal 210; the application program in the terminal 210 then constructs and displays the face of the virtual character according to the high-precision face generation data 36.
In other possible embodiments, the face shape library may be set in an application program, and the terminal outputs the high-precision face generation data locally according to the input low-precision face acquisition data, without using the server 220, which is not limited in this embodiment.
For convenience of description, the following embodiments are described as examples of a method for constructing a face shape library executed by a computer device.
Referring to fig. 2, a flowchart of a method for constructing a face shape library according to an exemplary embodiment of the present application is shown. This embodiment will be described by taking the method for a computer device as an example, and the method includes the following steps.
Step 201, a face data set is obtained, wherein the face data set stores at least one group of face data.
In some embodiments, the face data in the face data set include face data of a plurality of objects, i.e., face data covering a plurality of face shapes of a plurality of objects. Illustratively, the face data in the face data set are high-precision face data. In one possible implementation, the face data are high-precision face mesh data, and the 3D faces corresponding to each group of face data have the same number of 3D vertices and the same semantics. For example, the face mesh data are three-dimensional deformable face model (3D Morphable Model, 3DMM) data.
High-precision face data have higher data precision than low-precision face data, so the facial details constructed from high-precision face data are more accurate than those constructed from low-precision face data. Faces constructed from low-precision face data can hardly show the detailed features of a real face, such as single or double eyelids, forehead lines, and crow's feet.
Illustratively, the high-precision face data is face data having a precision higher than a threshold value, and the low-precision face data is face data having a precision lower than the threshold value. Illustratively, the high-precision face data is face data obtained by scanning and processing a real face by using a high-precision scanning device (such as a high-speed 3D camera), and the low-precision face data is face data obtained by scanning and processing a real face by using a consumer-level device (such as a Kinect RGBD device, a smart phone, etc.).
In some embodiments, the face data is face data obtained by scanning and processing a real face using a high-precision scanning device (such as a high-speed 3D camera). Or, face data in existing face shape libraries, such as the BFM shape library, are employed.
For example, the computer device may perform data processing on the collected face data. Directly collected face data are point cloud data; interference noise in the point cloud data is removed, and this removal is performed manually. Then, the point cloud data are registered onto a specific face mesh through a non-rigid registration (non-rigid ICP, nricp) algorithm to obtain face (mesh) data with a consistent number of vertices and a uniform topological structure. Finally, the face data are aligned to the same coordinate system to obtain the face data in the face data set.
For face data in an existing face shape library, because face data in different face shape libraries adopt different mesh topologies, the face mesh of the face data set needs to be wrapped onto the face data in the existing face shape library, so as to obtain face data similar in shape to the face data in the existing library. Illustratively, the wrapping may be accomplished using mesh-wrapping software, such as the Wrap software. For example, as shown in fig. 3, face data 301 in an existing face shape library are wrapped with the face mesh of the face data set to obtain face data 302 in the face data set. The wrapped face data are likewise aligned to the same coordinate system to obtain the face data in the face data set.
Step 202, performing data disturbance processing on face data in the face data set to obtain at least two groups of disturbance data, wherein the data disturbance processing comprises at least one of rotation processing, translation processing and scaling processing.
The data disturbance processing includes changing at least one value in the face data. Illustratively, the face data include a plurality of vertices, each with vertex coordinates, and the data disturbance processing changes the vertex coordinates of at least one vertex. For example, if the nose-tip coordinates in the first group of face data are (0, 0, 1), the data disturbance processing changes them to (1, 0, 1), yielding the first group of disturbance data. To prevent the disturbance data obtained after processing from deviating from the shape of a normal face, the data disturbance processing may change the values in the face data only within a small range. Illustratively, the data disturbance processing changes values in the face data through at least one of rotation, translation, and scaling.
Illustratively, performing data disturbance processing once on a group of face data yields one group of disturbance data, and performing it multiple times yields multiple different groups of disturbance data. That is, one face can yield a plurality of different faces through data disturbance processing. For each group of face data in the face data set, at least one group of disturbance data may be obtained through at least one disturbance. The computer device may perform data disturbance processing on every group of face data in the face data set, or may select part of the face data for processing.
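As a concrete sketch of the rotation, translation and scaling disturbance described above, the disturbance of an [N, 3] vertex array might look like the following. The magnitude bounds, the choice of rotation axis, and the function names here are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def perturb_face(vertices, rng, max_angle_deg=5.0, max_shift=0.02, scale_range=0.05):
    """Apply a small random rotation, translation and scaling to an [N, 3]
    array of face-mesh vertex coordinates. The bounds are kept small so the
    disturbed data still resemble a plausible face (illustrative values)."""
    # Small random rotation about the z-axis.
    angle = np.deg2rad(rng.uniform(-max_angle_deg, max_angle_deg))
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    # Small random translation and a near-unity scale factor.
    shift = rng.uniform(-max_shift, max_shift, size=3)
    scale = 1.0 + rng.uniform(-scale_range, scale_range)
    return scale * vertices @ rot.T + shift

rng = np.random.default_rng(0)
face = rng.standard_normal((5, 3))            # toy mesh with 5 vertices
# Several disturbance passes on one face yield several distinct "faces".
perturbed = [perturb_face(face, rng) for _ in range(3)]
```

Each call draws fresh random parameters, so one group of face data produces multiple different groups of disturbance data, as the text describes.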
Step 203, obtaining the face shape base of the face data set by performing principal component analysis on the face data set.
In one possible implementation, the computer device processes the face data in the face data set using principal component analysis (Principal Component Analysis, PCA), applying the idea of dimensionality reduction to lower the data dimension, reduce redundant or interfering data, and obtain the face shape base of the face data set.
Regarding the principal component analysis procedure, in one possible implementation the computer device first calculates the mean of the face data in the face data set, then calculates the feature covariance matrix corresponding to the face data, and obtains the eigenvectors and eigenvalues of the covariance matrix. It then selects, in descending order of eigenvalue, the several eigenvectors with the largest eigenvalues to form the principal component eigenvector matrix, and generates the principal component variances (a vector) from the eigenvalues corresponding to the eigenvectors in that matrix.
In an illustrative example, the computer device performs PCA on the face data matrix (face_all) of the face data set to obtain the mean mu, the principal component eigenvector matrix (principal component coefficients) pc, and the principal component variance ev_f. The three-dimensional coordinates of each group of face data are flattened into a single column: N vertices become a [3N × 1] vector, so for M faces the face data matrix face_all is a [3N × M] matrix. mu is a [3N × 1] vector, pc is a [3N × t] matrix, and ev_f is a [t × 1] vector, where t is the number of selected eigenvectors and N is the number of vertices of the face mesh.
The computer device constructs a face form basis from the mean, the principal component feature vector matrix, and the principal component variance.
In one possible implementation, the computer device also needs to process the principal component variance so that the PCA-derived data can be used for face fitting (i.e., can function as the face shape base).
Optionally, when the principal component variance is a column vector, the computer device performs a matrix transformation on it to obtain a principal component diagonal matrix, and determines the mean, the principal component eigenvector matrix, and the principal component diagonal matrix as the face shape base, where the principal component diagonal matrix is formed from the principal component standard deviations.
Continuing the example above, the computer device takes the element-wise square root of the [t × 1] column vector ev_f (i.e., obtains the principal component standard deviations from the principal component variances) and places them on the diagonal to obtain the [t × t] principal component diagonal matrix ev. Correspondingly, the face shape base consists of the [3N × 1] mu, the [3N × t] pc, and the [t × t] ev.
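The computation above can be sketched with NumPy. This is a minimal illustration of the dimensions described in the text; the function and variable names are assumptions, and the SVD of the centered matrix is used in place of an explicit covariance eigendecomposition (the two yield the same principal directions).

```python
import numpy as np

def build_shape_basis(face_all, t):
    """Compute a face shape basis (mu, pc, ev) from a [3N, M] matrix whose
    columns are flattened face meshes, via PCA. A sketch following the
    dimensions in the text, not the patent's exact procedure."""
    mu = face_all.mean(axis=1, keepdims=True)        # [3N, 1] mean face
    centered = face_all - mu                         # remove the mean
    # SVD of the centered data: columns of u are the principal directions,
    # and the variances are s**2 / (M - 1).
    u, s, _ = np.linalg.svd(centered, full_matrices=False)
    pc = u[:, :t]                                    # [3N, t] eigenvector matrix
    ev_f = (s[:t] ** 2) / (face_all.shape[1] - 1)    # [t] principal variances
    ev = np.diag(np.sqrt(ev_f))                      # [t, t] diag of std devs
    return mu, pc, ev

# Toy data: M = 10 faces, each with N = 4 vertices (3N = 12 values).
rng = np.random.default_rng(1)
face_all = rng.standard_normal((12, 10))
mu, pc, ev = build_shape_basis(face_all, t=3)
```

The columns of pc are orthonormal, and ev holds the principal component standard deviations on its diagonal, matching the [3N × 1], [3N × t] and [t × t] shapes stated above.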
For example, for a 3DMM face shape library, face data may be calculated using face shape bases and face shape coefficients. The formula is as follows, wherein id is the face shape factor.
face=mu+pc*ev*id
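The PCA step and the reconstruction formula face = mu + pc\*ev\*id can be sketched as follows. This is a minimal illustration assuming the matrix layout described above ([3N×M], one flattened face per column); the function names and the SVD-based computation are illustrative, not taken from the patent.

```python
import numpy as np

def build_face_shape_basis(face_all, t):
    """Sketch of the PCA step. face_all is a [3N x M] matrix whose columns
    are flattened face meshes. Returns the mean mu [3N x 1], the top-t
    principal component feature vectors pc [3N x t], and the [t x t]
    diagonal matrix ev of principal component standard deviations."""
    mu = face_all.mean(axis=1, keepdims=True)        # [3N x 1] mean face
    centered = face_all - mu                         # center each column
    # SVD of the centered data yields principal directions and variances
    u, s, _ = np.linalg.svd(centered, full_matrices=False)
    pc = u[:, :t]                                    # [3N x t] feature vectors
    var = (s[:t] ** 2) / (face_all.shape[1] - 1)     # principal component variances
    ev = np.diag(np.sqrt(var))                       # square root -> std-dev diagonal
    return mu, pc, ev

def reconstruct_face(mu, pc, ev, face_id):
    """face = mu + pc * ev * id, the 3DMM reconstruction formula above."""
    return mu + pc @ ev @ face_id
```

Setting the face shape coefficient to all zeros reproduces the mean face mu, which matches the role of mu in the formula.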
Illustratively, the initial face shape basis mu0, pc0 and ev0 of the face data set are calculated from the face data set.
In step 204, in response to the fitting error of target disturbance data in the disturbance data being greater than an error threshold, the target disturbance data is added to the face data set, and the face data set is iteratively updated to obtain the face shape library. The fitting error is the error value between the fitted target disturbance data and the target disturbance data, where the fitted target disturbance data is the face data obtained by fitting the target disturbance data according to the face shape base of the face data set.

Illustratively, after obtaining the face shape base of the face data set, the computer device uses the face shape base to fit the disturbance data to obtain fitted disturbance data, and the disturbance data with a poor fitting effect is selected and added to the face data set to form the face shape library. A poor fitting effect indicates that the face shape base has weak expression capability for that disturbance data. Adding this part of the disturbance data to the face data set yields a face shape library whose face shape base has better expression capability for that data; therefore, iteratively adding disturbance data to the face shape library iteratively updates the face shape base and improves its expression capability.

In order to improve the fitting ability of the face shape base for various faces, in one possible implementation, the computer device uses the disturbance data to perform data expansion on the face data set and reconstructs the face shape base from the expanded face shape library. Through continuous data expansion and iterative updating of the face shape base, the fitting effect of the face shape base on various faces is continuously improved. When the iteration end condition is met, the computer device determines the result of the last iterative update as the face shape library.
In the subsequent use process, the computer equipment can output high-precision face generation data according to the input low-precision face acquisition data by utilizing the face shape library.
Illustratively, a method of determining target disturbance data from disturbance data is presented.
First, the computer device fits the disturbance data based on the face-type basis to obtain fitted disturbance data.
In the embodiment of the application, face data can be fitted by combining the face shape base with a face shape coefficient. Illustratively, the face shape coefficient of the disturbance data is obtained by fitting the disturbance data with the face shape base, and the fitted disturbance data can then be obtained from the face shape coefficient of the disturbance data and the face shape base.
In one illustrative example, the fitting function is as follows:

α* = argmin_α Σ_{x=1}^{3N} ( face_rot(x) − (mu0 + pc0*ev0*α)(x) )² + λ·||α||²

wherein α is the face shape coefficient to be adjusted and α* is the target face shape coefficient; face_rot is a group of disturbance data, expanded into a [3N×1] matrix; face_rot(x) is the x-th number in the disturbance data; mu0 is the mean in the face shape base of the face data set; pc0 is the principal component feature vector matrix in the face shape base of the face data set; ev0 is the principal component diagonal matrix in the face shape base of the face data set; (mu0 + pc0*ev0*α)(x) is the x-th number in the fitted disturbance data obtained by fitting the face shape base to the disturbance data; λ is a regularization coefficient; and N is the number of vertices of the face data.
The computer device minimizes the function value of the fitting function by adjusting the face shape coefficient. The closer the face shape coefficient is to the face shape corresponding to the disturbance data being fitted, the closer the resulting fitted disturbance data is to the disturbance data, and accordingly the smaller the function value of the fitting function. Therefore, in this embodiment, the computer device determines the face shape coefficient at the minimized function value as the target face shape coefficient.

The computer device further performs data fitting according to the target face shape coefficient and the face shape base to obtain the fitted disturbance data, which can be expressed as: mu + pc*ev*α.
In combination with the above example, when mu is a [3N×1] vector, pc is a [3N×t ] matrix, ev is a [ t×t ] matrix, α is a [ t×1] vector, and fitting disturbance data obtained by fitting is a [3N×1] vector.
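The minimization described above can be sketched as a regularized least-squares problem. This is a hypothetical implementation: the closed-form normal-equation solution below is one standard way to perform such a minimization, and the regularizer weight and all names are illustrative, not the patent's exact procedure.

```python
import numpy as np

def fit_shape_coefficient(face_rot, mu0, pc0, ev0, lam=0.1):
    """Solve for the face shape coefficient alpha minimizing
    ||face_rot - (mu0 + pc0*ev0*alpha)||^2 + lam*||alpha||^2.
    lam is a hypothetical regularization weight."""
    A = pc0 @ ev0                      # [3N x t] combined basis
    b = face_rot - mu0                 # residual relative to the mean face
    t = A.shape[1]
    # Ridge-regularized normal equations: (A^T A + lam*I) alpha = A^T b
    return np.linalg.solve(A.T @ A + lam * np.eye(t), A.T @ b)

def fitted_disturbance(mu0, pc0, ev0, alpha):
    """Fitted disturbance data: mu + pc * ev * alpha."""
    return mu0 + pc0 @ ev0 @ alpha
```

With a vanishingly small λ, fitting data that was itself generated from the basis recovers the generating coefficient, which is the behavior the text describes for a well-expressed face.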
The computer device then calculates fitting errors that fit the disturbance data and the disturbance data.
In order to measure the fitting effect of the current face shape base on the disturbance data, the computer device computes the difference between the disturbance data and the fitted disturbance data, thereby determining the fitting error between the two.
Illustratively, the fitting error is calculated as follows:

error = ||face_rot − face_fit||₂ = sqrt( Σ_{x=1}^{3N} ( face_rot(x) − face_fit(x) )² )

where face_fit = mu + pc*ev*α is the fitted disturbance data. In one possible implementation, when the disturbance data and the fitted disturbance data are both [3N×1] vectors, the computer device calculates the Euclidean distance between the vectors and determines the Euclidean distance as the fitting error. Of course, the computer device may also determine the Mahalanobis distance or the cosine distance as the fitting error, which is not limited in this embodiment.
Finally, the computer device determines disturbance data with fitting errors greater than an error threshold as target disturbance data.
Illustratively, the computer device selects the disturbance data with a poor fitting effect (a large fitting error) from the disturbance data and determines it as the target disturbance data.

Illustratively, after one round of data disturbance processing, disturbance data fitting, and fitting error judgment, the computer device selects part of the disturbance data and adds it to the face data set to obtain a new face shape library. Thereafter, the computer device can continue to perform data disturbance processing, disturbance data fitting, and fitting error judgment based on this face shape library and continue to update it, so that the face shape library is iteratively updated repeatedly.

The computer device may also obtain mirrored target disturbance data after mirror processing, add both the target disturbance data and the mirrored target disturbance data to the face data set, and iteratively update the face data set to obtain the final face shape library.
Illustratively, step 204 further includes steps 2041 and 2042, as shown in FIG. 4.
In step 2041, in response to the iteration end condition not being satisfied, the face data set added with the target disturbance data is redetermined as the face data set.
By way of example, the iteration end condition may be a number of iterations; for example, the iteration ends after the face shape library has been updated 10 times, that is, when the 11th face shape library is obtained. In other words, in response to the number of iterations being less than the iteration number threshold, the face data set with the target disturbance data added is re-determined as the face data set.

The iteration end condition may also be that no target disturbance data remains in the disturbance data, that is, the iteration ends when the fitting errors of all the disturbance data are less than the error threshold. Conversely, in response to target disturbance data with a fitting error greater than the error threshold being present in the disturbance data, the face data set with the target disturbance data added is re-determined as the face data set.

That is, data disturbance processing is performed on the face data in the j-th face shape library to obtain at least two groups of j-th disturbance data, where j is an integer greater than i; the j-th face shape base of the j-th face shape library is obtained through principal component analysis of the j-th face shape library; and the iteration stops in response to the j-th fitting errors of the j-th disturbance data all being less than the error threshold.
In step 2042, the step of performing data disturbance processing on the face data in the face data set to obtain at least two groups of disturbance data is re-executed.

When the iteration end condition is not met, the computer device re-determines the updated face shape library as the face data set, performs data disturbance processing on the face data set again, recalculates the face shape base of the face data set, then selects new target disturbance data from the new disturbance data, and updates the face data set again, repeating this process until the iteration end condition is met.
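The iterative update of steps 2041 and 2042 can be sketched at a high level as follows. The `perturb`, `build_basis`, and `fit_error` callables are stand-ins for the operations described above, and the overall structure is an illustrative reading of the procedure, not the patent's exact implementation.

```python
def build_face_shape_library(face_set, perturb, build_basis, fit_error,
                             error_threshold, max_iterations):
    """High-level sketch of the iterative update loop.
    face_set: current face data set (a list of face data items).
    perturb(face_set) -> list of disturbance data items.
    build_basis(face_set) -> face shape base for the current set.
    fit_error(basis, d) -> fitting error of disturbance item d."""
    for _ in range(max_iterations):
        basis = build_basis(face_set)          # PCA over current set
        disturbances = perturb(face_set)       # data disturbance processing
        # Target disturbance data: items the current basis expresses poorly
        targets = [d for d in disturbances
                   if fit_error(basis, d) > error_threshold]
        if not targets:                        # iteration end condition met
            break
        face_set = face_set + targets          # add targets and iterate
    return face_set, build_basis(face_set)
```

The loop stops either when no disturbance item exceeds the error threshold or when the iteration-count threshold is reached, matching the two end conditions above.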
In summary, in the method provided by this embodiment, disturbance data is obtained by performing data disturbance processing on a small amount of face data in the face data set, the disturbance data is fitted according to the face shape base of the face data set, and the fitting result and fitting error of the disturbance data are calculated; disturbance data with a large fitting error, i.e., data for which the face shape base has weak expression capability, is added to the face data set to form the face shape library, so that the face shape base of the obtained face shape library has better expression capability. The method uses only a small amount of high-precision face data and expands it through data disturbance, so the user does not need to collect a large amount of high-precision face data, which simplifies the construction process of the face shape library and improves construction efficiency. Moreover, part of the high-precision face data is screened from the expanded data using principal component analysis and error analysis and added to the face data set, so the face data for which the face shape base has weaker expression capability is accurately expanded, efficiently improving the expression capability of the face shape library.
According to the method provided by the embodiment, when the iteration ending condition is not met, the steps of data disturbance processing, face shape base calculation and fitting error judgment are repeated on the face data set, the face data set is iteratively updated, the face data in the face data set is expanded, and the expression capacity of the face shape base is improved.
In the embodiment of the present application, as shown in fig. 5, constructing a high-precision face shape library may be divided into a data preparation stage 401, a data expansion stage 402, and an iterative face shape base update stage 403. The computer device collects face data in the data preparation stage 401 and performs data processing (removing noise, registering to a face mesh, and aligning to a unified coordinate system) on the face data to obtain an original face data set. Then, in the data expansion stage 402, data expansion is performed on the original face data set to generate the face data set. In the iterative face shape base update stage 403, the face data set is iteratively updated to obtain the face shape library, and the face shape base of the face shape library is then output.

Because the embodiment of the application uses only a small amount of high-precision face data (compared with related-art methods that construct a face shape library from high-precision face data), the computer device performs data expansion on the high-precision face data after the data preparation stage, thereby improving the quality of the generated face shape base.

Exemplary ways of performing data expansion include at least one of mirror expansion and replacement expansion. Illustratively, the data expansion includes at least mirror expansion; that is, the data expansion may be mirror expansion alone, or mirror expansion combined with replacement expansion.
In one possible implementation, after the face data is obtained, the computer device performs mirror expansion on the face data to obtain expanded mirrored face data, where the mirror expansion includes flipping the face left-right along the face center line. Illustratively, the computer device obtains an original face data set comprising at least one group of face data; performs mirror processing on the face data in the original face data set to obtain a first mirrored face data set; and determines the original face data set and the first mirrored face data set as the face data set. Illustratively, the original face data set is a face data set collected by the computer device, or a face data set obtained by using face data in an existing face database.
Taking the data expansion process of high-precision face data as an example, in some embodiments the computer device first aligns the coordinate systems of the high-precision face data, and then flips the aligned high-precision face data left-right along the face center line to realize mirror expansion. Mirror expansion doubles the data volume of the high-precision face data.
Illustratively, as shown in fig. 6, after the face data 501 is mirrored, mirrored face data 502 is obtained.
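The mirror expansion above can be sketched as follows. This is a minimal illustration assuming the faces have been aligned so that the face center line lies in the x = 0 plane, which makes the left-right flip a simple negation of each vertex's x coordinate; the function name and this alignment convention are assumptions, not from the patent.

```python
def mirror_face(vertices):
    """Mirror a face mesh left-right along the face center line.
    Assumes (hypothetically) that alignment placed the symmetry plane
    at x = 0, so mirroring negates each vertex's x coordinate.
    vertices: list of (x, y, z) tuples."""
    return [(-x, y, z) for (x, y, z) in vertices]
```

Note that a full mirror of a mesh would also need to account for triangle winding order and left/right vertex semantics; this sketch only shows the coordinate flip that doubles the data volume.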
In one possible implementation, after the face data is obtained, the computer device performs replacement expansion on the face data to obtain expanded replacement face data, where the replacement expansion includes: selecting two groups of face data and exchanging their facial features to obtain multiple groups of replaced face data. For example, the computer device may exchange at least one of the nose, mouth, eyes, eyebrows, and ears between the two groups of face data to obtain the replacement face data.
Illustratively, a computer device obtains an original face dataset comprising at least two sets of face data; carrying out replacement processing on the face data in the original face data set to obtain a first replacement face data set, wherein the replacement processing comprises the steps of replacing a first area in second face data with a first area in the first face data to obtain replacement face data; the original face data set and the first replacement face data set are determined to be face data sets.
Illustratively, the replacement processing is used to exchange any surface area at the same location on two groups of face data. Illustratively, the face data share the same vertex numbering and semantics; for example, number 1 represents the tip of the nose and number 2 represents the center point of the mouth. The replacement processing may exchange a set of identically numbered vertices between the two groups of face data, for example, the area from vertex 1 to vertex 10 where the nose is located.

Illustratively, the first area is an area corresponding to at least one facial feature in the face data; for example, the first area is the eyes, or the eyes and the mouth. For example, the computer device may arbitrarily select two groups of face data from the original face data set for replacement.
Illustratively, as shown in fig. 7, eyes of the first original face data 601 and the second original face data 602 are replaced, so as to obtain first replaced face data 603 and second replaced face data 604.
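Since the faces share vertex numbering, the replacement expansion amounts to exchanging the vertices of a chosen index set (e.g. the nose region) between two registered meshes. The sketch below is illustrative; the function and parameter names are assumptions.

```python
def swap_region(face_a, face_b, region_indices):
    """Replacement expansion sketch: exchange the vertices at the given
    indices (e.g. the nose vertices) between two registered face meshes
    that share the same vertex numbering and semantics.
    Returns the two new replaced faces; the inputs are left unchanged."""
    new_a, new_b = list(face_a), list(face_b)
    for i in region_indices:
        new_a[i], new_b[i] = face_b[i], face_a[i]
    return new_a, new_b
```

Each call thus yields two new faces from two originals, which is how replacement expansion grows the data set.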
In one possible implementation, the computer device may expand the acquired face data multiple times. For example, mirror expansion is performed once, then replacement expansion once, and then mirror expansion once more.
Illustratively, the computer device obtains an original face data set comprising at least one group of face data; performs mirror processing on the face data in the original face data set to obtain a second mirrored face data set; determines the original face data set and the second mirrored face data set as a first extended face data set; performs replacement processing on the face data in the first extended face data set to obtain a second replaced face data set, where the replacement processing comprises replacing a first area in the second face data with a first area in the first face data to obtain replaced face data; determines the first extended face data set and the second replaced face data set as a second extended face data set; performs mirror processing on the face data in the second extended face data set to obtain a third mirrored face data set; and determines the second extended face data set and the third mirrored face data set as the face data set.
In this embodiment, in the data preparation stage, data expansion is performed by means such as mirror expansion and replacement expansion, which increases the data volume of the face data, improves the expression capability of the subsequently constructed face shape base, and improves the quality of the finally generated high-precision face shape library.

For example, in the iterative face shape base update stage, the disturbance data is obtained by performing data disturbance processing on the face data in the face data set. This embodiment provides three data disturbance processing modes: rotation processing, translation processing, and scaling processing.
First, a rotation process is performed.
The computer device rotates at least one point in the face data around a reference point by any angle in any direction to obtain at least two groups of disturbance data, where the reference point is any point in the face data.
The computer device may use any vertex in the face data as a reference point, rotate another vertex in the face data around the reference point, and the rotation direction and angle may be arbitrary, so as to change the spatial position of the point, and obtain a set of disturbance data after rotating at least one vertex in the face data. For example, if the reference point is a first vertex, the second vertex in the face data is rotated from the first position to the second position around the first vertex, and the distance from the second vertex to the first vertex is unchanged before and after rotation.
For example, the reference point may be the vertex corresponding to the tip of the nose. Illustratively, to avoid the face distortion of the obtained disturbance data, the rotation angle may be controlled within ±5°.
Illustratively, the rotation processing can change the degree of undulation of vertices in the face data.
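The rotation perturbation above can be sketched as rotating a vertex about a reference point (e.g. the nose tip) while preserving its distance to that point. This is an illustrative sketch: the choice of the z axis as the rotation axis is an assumption, and the text suggests keeping the angle within ±5° to avoid distorting the face.

```python
import math

def rotate_vertex_about(point, ref, angle_deg, axis="z"):
    """Rotate one vertex around a reference point by angle_deg degrees
    about the given axis (only 'z' shown here, as an illustrative default).
    The distance from the vertex to the reference point is preserved."""
    a = math.radians(angle_deg)
    x, y, z = (point[0] - ref[0], point[1] - ref[1], point[2] - ref[2])
    if axis == "z":
        x, y = (x * math.cos(a) - y * math.sin(a),
                x * math.sin(a) + y * math.cos(a))
    return (x + ref[0], y + ref[1], z + ref[2])
```

Applying this to one or more vertices of a face yields one group of disturbance data per choice of vertex, axis, and angle.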
Then, the translation process.
The computer device moves at least one point in the face data to a first direction by a first distance to obtain at least two groups of disturbance data.
The computer device may translate at least one vertex in the face data by a distance in any direction to obtain the disturbance data. The first direction may be, for example, the direction from the back of the head toward the tip of the nose. Illustratively, to avoid face distortion in the disturbance data, the maximum translation distance may be determined from the face data in the face data set. For example, the distance from the midpoint of the back of the head to the tip of the nose is calculated for each group of face data in the face data set, and the difference between the maximum and minimum of these distances is determined as the maximum translation distance. The maximum translation distance may also be a fixed value.

For example, the computer device moves the vertex corresponding to the tip of the nose in the face data by 1 cm in the first direction, increasing the height of the nose and making the nose in the resulting disturbance data more prominent.
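The translation perturbation can be sketched as moving a selected set of vertices (e.g. the nose-tip region) a given distance along a unit direction vector. The function and parameter names below are illustrative.

```python
def translate_vertices(vertices, indices, direction, distance):
    """Translation perturbation sketch: move the vertices at the given
    indices by `distance` along the unit vector `direction`.
    vertices: list of (x, y, z) tuples; the input list is not modified."""
    dx, dy, dz = (c * distance for c in direction)
    out = list(vertices)
    for i in indices:
        x, y, z = out[i]
        out[i] = (x + dx, y + dy, z + dz)
    return out
```

Moving, say, the nose-tip vertex along the back-of-head-to-nose direction produces the "taller nose" disturbance described above.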
Finally, the scaling process.
The computer device takes the center point of the line connecting the two ears in the face data as the origin of coordinates and uniformly scales the values of the face data to obtain at least two groups of disturbance data.

For example, the computer device may scale the face data up or down to enlarge or shrink the entire head, obtaining disturbance data of a larger or smaller face.

For example, in order for the scaling to preserve the original shape of the face, the face data must first be aligned: the coordinates of each vertex are re-determined with the midpoint of the line connecting the two ears as the origin of the coordinate system, and the new coordinates are then uniformly scaled. For example, multiplying the new coordinates by 1.2 yields enlarged disturbance data. Alternatively, the mean of all the vertices in the face data may be used as the origin of the coordinate system before re-determining the coordinates of each vertex and scaling.
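The scaling perturbation can be sketched as a uniform scale about a chosen origin (e.g. the midpoint of the line connecting the two ears, or the vertex mean). The names below are illustrative.

```python
def scale_face(vertices, origin, factor):
    """Scaling perturbation sketch: uniformly scale a face mesh about an
    origin point, preserving the face's shape. factor > 1 enlarges the
    head (a 'large face'); factor < 1 shrinks it ('small face')."""
    ox, oy, oz = origin
    return [((x - ox) * factor + ox,
             (y - oy) * factor + oy,
             (z - oz) * factor + oz)
            for (x, y, z) in vertices]
```

For instance, `factor=1.2` corresponds to the "multiply the new coordinates by 1.2" example above when the origin is the coordinate-system origin.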
In summary, according to the method provided by the embodiment, the disturbance data is obtained by performing at least one of rotation, translation and scaling on a small amount of face data in the face data set, and then, according to the fitting result of the disturbance data, part of target disturbance data is selected from the disturbance data and added into the face data set, so as to obtain the face shape library. The method only utilizes a small amount of high-precision face data, expands the high-precision face data through data disturbance, does not need a user to collect a large amount of high-precision face data, simplifies the construction process of the face shape library, and improves the construction efficiency.
Fig. 8 is a block diagram of a construction apparatus for a face shape library according to an exemplary embodiment of the present application, the apparatus including:
an obtaining module 801, configured to obtain a face data set, where the face data set stores at least one group of face data;
a perturbation module 802, configured to perform data perturbation processing on the face data in the face data set to obtain at least two groups of perturbation data, where the data perturbation processing includes at least one of rotation processing, translation processing, and scaling processing;
a calculation module 803, configured to obtain a face shape base of the face data set by performing principal component analysis on the face data set;
and an iteration module 804, configured to, in response to a fitting error of target disturbance data in the disturbance data being greater than an error threshold, add the target disturbance data to the face data set, and iteratively update the face data set to obtain a face shape library, where the fitting error is an error value of fitting target disturbance data and the target disturbance data, and the fitting target disturbance data is the face data obtained by fitting the target disturbance data according to the face shape base of the face data set.
In an optional embodiment, the iteration module 804 is further configured to, in response to not meeting an iteration end condition, redefine the face data set added to the target disturbance data as the face data set;
The iteration module 804 is further configured to repeat the above steps from the step of performing data perturbation processing on the face data in the face data set to obtain at least two sets of perturbation data.
In an optional embodiment, the iteration module 804 is further configured to, in response to the target disturbance data having the fitting error greater than an error threshold being present in the disturbance data, re-determine the face data set added to the target disturbance data as the face data set;
or,
the iteration module 804 is further configured to re-determine the face data set added to the target disturbance data as the face data set in response to the iteration number being less than a number threshold.
In an alternative embodiment, the data perturbation process comprises a rotation process;
the perturbation module 802 is further configured to rotate at least one point in the face data by an arbitrary angle around a reference point in an arbitrary direction to obtain at least two perturbation data, where the reference point is an arbitrary point in the face data.
In an alternative embodiment, the data perturbation process includes a translation process;
the perturbation module 802 is further configured to move at least one point in the face data to a first direction by a first distance to obtain at least two perturbation data.
In an alternative embodiment, the data perturbation process includes a scaling process;
the perturbation module 802 is further configured to take the center point of the line connecting the two ears in the face data as the origin of coordinates and uniformly scale the values of the face data to obtain at least two groups of perturbation data.
In an alternative embodiment, the apparatus further comprises:
the acquiring module 801 is further configured to acquire an original face data set, where the original face data set includes at least one group of face data;
the mirroring module 805 is configured to mirror the face data in the original face data set to obtain a first mirrored face data set;
a determining module 806 is configured to determine the original face data set and the first mirrored face data set as the face data set.
In an alternative embodiment, the apparatus further comprises:
the acquiring module 801 is further configured to acquire an original face data set, where the original face data set includes at least two sets of face data;
a replacing module 807, configured to perform a replacing process on the face data in the original face data set to obtain a first replaced face data set, where the replacing process includes replacing a first area in the first face data with a first area in the second face data to obtain replaced face data;
A determining module 806 is configured to determine the original face data set and the first substitute face data set as the face data set.
In an alternative embodiment, the apparatus further comprises:
the acquiring module 801 is further configured to acquire an original face data set, where the original face data set includes at least one group of face data;
the mirror image module 805 is configured to mirror the face data in the original face data set to obtain a second mirror image face data set;
a determining module 806, configured to determine the original face data set and the second mirrored face data set as a first extended face data set;
a replacing module 807, configured to perform a replacing process on the face data in the first extended face data set to obtain a second replaced face data set, where the replacing process includes replacing a first area in the first face data with a first area in the second face data to obtain replaced face data;
the determining module 806 is further configured to determine the first extended face data set and the second substitute face data set as a second extended face data set;
the mirroring module 805 is further configured to mirror the face data in the second extended face data set to obtain a third mirrored face data set;
The determining module 806 is further configured to determine the second extended face data set and the third mirrored face data set as the face data set.
It should be noted that: the construction device for the face shape library provided in the above embodiment is only exemplified by the division of the above functional modules, and in practical application, the above functional allocation may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the device for constructing the face shape library provided in the above embodiment and the method embodiment for constructing the face shape library belong to the same concept, and the specific implementation process thereof is detailed in the method embodiment and will not be described herein.
Referring to fig. 9, a schematic diagram of a computer device according to an exemplary embodiment of the present application is shown. The computer device 1000 includes a central processing unit (Central Processing Unit, CPU) 1001, a system memory 1004 including a random access memory 1002 and a read-only memory 1003, and a system bus 1005 connecting the system memory 1004 and the central processing unit 1001. The computer device 1000 also includes a basic input/output system (I/O system) 1006, which helps to transfer information between various components within the computer, and a mass storage device 1007 for storing an operating system 1013, application programs 1014, and other program modules 1015.
The basic input/output system 1006 includes a display 1008 for displaying information and an input device 1009, such as a mouse, keyboard, etc., for a user to input information. Wherein the display 1008 and the input device 1009 are connected to the central processing unit 1001 via an input output controller 1010 connected to a system bus 1005. The basic input/output system 1006 may also include an input/output controller 1010 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, the input output controller 1010 also provides output to a display screen, a printer, or other type of output device.
The mass storage device 1007 is connected to the central processing unit 1001 through a mass storage controller (not shown) connected to the system bus 1005. The mass storage device 1007 and its associated computer-readable media provide non-volatile storage for the computer device 1000. That is, the mass storage device 1007 may include a computer-readable medium (not shown) such as a hard disk or an optical drive.
The computer-readable medium may include computer storage media and communication media without loss of generality. Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include random access memory (RAM), read-only memory (ROM), flash memory or other solid-state memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (Digital Versatile Disc, DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices. Of course, those skilled in the art will recognize that computer storage media are not limited to the above. The system memory 1004 and the mass storage device 1007 described above may be collectively referred to as memory.
The memory stores one or more programs configured to be executed by the one or more central processing units 1001, the one or more programs containing instructions for implementing the methods described above, the central processing unit 1001 executing the one or more programs to implement the methods provided by the various method embodiments described above.
According to various embodiments of the application, the computer device 1000 may also operate by being connected to a remote computer on a network, such as the Internet. I.e., the computer device 1000 may be connected to the network 1012 through a network interface unit 1011 connected to the system bus 1005, or may be connected to other types of networks or remote computer systems (not shown) using the network interface unit 1011.
The memory also stores one or more programs, and the one or more programs include instructions for performing the steps, executed by the computer device, of the methods provided by the embodiments of the present application.
An embodiment of the application also provides a computer-readable storage medium, in which at least one instruction, at least one program, a code set, or an instruction set is stored, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement the method for constructing a face shape library according to any one of the above embodiments.
The application also provides a computer program product which, when run on a computer, causes the computer to execute the method for constructing a face shape library provided by the above method embodiments.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program instructing related hardware, and the program may be stored in a computer-readable storage medium, which may be the computer-readable storage medium included in the memory of the above embodiments, or a standalone computer-readable storage medium that is not incorporated into the terminal. The computer-readable storage medium stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the method for constructing a face shape library according to any one of the above method embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description of the preferred embodiments of the present application is not intended to limit the application, but is intended to cover all modifications, equivalents, alternatives, and improvements falling within the spirit and principles of the application.

Claims (14)

1. A method for constructing a face shape library, the method comprising:
acquiring a face data set, wherein the face data set stores at least one group of face data;
carrying out data disturbance processing on the face data in the face data set to obtain at least two groups of disturbance data, wherein the data disturbance processing comprises at least one of rotation processing, translation processing and scaling processing;
obtaining a face shape base of the face data set by performing principal component analysis on the face data set;
in response to a fitting error of target disturbance data in the disturbance data being greater than an error threshold, adding the target disturbance data into the face data set, and iteratively updating the face data set to obtain a face shape library, wherein the fitting error is an error value between fitted target disturbance data and the target disturbance data, and the fitted target disturbance data is face data obtained by fitting the target disturbance data according to the face shape base of the face data set;
wherein the fitting is performed using a fitting function comprising:

α = argmin over α of { Σ from x = 1 to 3N of [ face_rot(x) − (mu0 + pc0 × ev0 × α)(x) ]² + λ × ‖α‖² }

wherein α is the shape coefficient of the target face; face_rot is a group of disturbance data, unfolded into a [3N × 1] matrix; face_rot(x) is the x-th number in the disturbance data; mu0 is the mean value in the face shape base of the face data set; pc0 is the principal component eigenvector matrix in the face shape base of the face data set; ev0 is the principal component diagonal matrix in the face shape base of the face data set; (mu0 + pc0 × ev0 × α)(x) is the x-th number of the fitted disturbance data obtained by fitting the disturbance data with the face shape base; λ is a regularization coefficient; and N is the number of vertices of the face data.
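As an illustrative aside, a regularized least-squares fit of this form has a standard ridge-regression closed form. The sketch below is a minimal, non-authoritative NumPy implementation; the function names and test matrices are ours, not the patent's:

```python
import numpy as np

def fit_shape_coefficients(face_rot, mu0, pc0, ev0, lam):
    """Solve min_a ||face_rot - (mu0 + pc0 @ ev0 @ a)||^2 + lam * ||a||^2.

    face_rot : (3N,) flattened perturbed face vertices
    mu0      : (3N,) mean face of the data set
    pc0      : (3N, K) principal-component eigenvector matrix
    ev0      : (K, K) diagonal matrix of principal-component scales
    lam      : regularization coefficient lambda
    """
    A = pc0 @ ev0                 # basis scaled by eigenvalues, shape (3N, K)
    b = face_rot - mu0            # residual against the mean face
    K = A.shape[1]
    # Ridge-regression closed form: a = (A^T A + lam I)^-1 A^T b
    return np.linalg.solve(A.T @ A + lam * np.eye(K), A.T @ b)

def fitting_error(face_rot, mu0, pc0, ev0, alpha):
    """L2 error between the perturbed face and its reconstruction."""
    fitted = mu0 + pc0 @ ev0 @ alpha
    return np.linalg.norm(face_rot - fitted)
```

When the perturbed face lies in the span of the basis and λ is small, the recovered coefficients reproduce the face almost exactly; a large λ shrinks the coefficients toward zero.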
2. The method of claim 1, wherein iteratively updating the face dataset to obtain a face shape library comprises:
re-determining the face data set to which the target disturbance data has been added as the face data set, in response to the iteration end condition not being met;
and repeating the above steps, starting from the step of performing data disturbance processing on the face data in the face data set to obtain at least two groups of disturbance data.
3. The method of claim 2, wherein the re-determining the face data set that incorporates the target disturbance data as the face data set in response to the iteration end condition not being met comprises:
re-determining the face data set to which the target disturbance data has been added as the face data set, in response to there being, in the disturbance data, target disturbance data whose fitting error is greater than the error threshold;
or,
re-determining the face data set to which the target disturbance data has been added as the face data set, in response to the number of iterations being less than a count threshold.
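The iterative update described in claims 1 to 3 can be sketched as a plain loop; `perturb` and `fit_error` below are hypothetical stand-ins for the perturbation step and the PCA-based fitting step, and the stopping conditions mirror the two alternatives in claim 3:

```python
def build_shape_library(face_dataset, perturb, fit_error,
                        error_threshold, max_iters):
    """Grow the face data set until no perturbed face fits poorly.

    face_dataset    : list of face-data items
    perturb(face)   : returns a list of perturbed copies of `face`
    fit_error(p, d) : fitting error of perturbation `p` against the PCA
                      shape base computed from data set `d`
    """
    for _ in range(max_iters):                 # count-threshold condition
        hard_samples = [p
                        for face in face_dataset
                        for p in perturb(face)
                        if fit_error(p, face_dataset) > error_threshold]
        if not hard_samples:                   # no badly-fitting perturbation
            break                              # iteration-end condition met
        face_dataset = face_dataset + hard_samples
    return face_dataset
```

Each pass re-derives the shape base from the enlarged set, so perturbations that the current base cannot represent are the ones that get added.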
4. A method according to any one of claims 1 to 3, wherein the data perturbation process comprises a rotation process;
the step of performing data disturbance processing on the face data in the face data set to obtain at least two groups of disturbance data includes:
and rotating at least one point in the face data around a reference point to any angle in any direction to obtain at least two groups of disturbance data, wherein the reference point is any point in the face data.
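Rotating face vertices about a reference point, as in claim 4, is commonly done with a Rodrigues rotation matrix. The sketch below assumes NumPy and 3-D points stored as rows; the function name is illustrative:

```python
import numpy as np

def rotate_about_point(points, ref, axis, angle_rad):
    """Rotate 3-D row-vector points about `ref` around `axis` by `angle_rad`."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    # Skew-symmetric cross-product matrix of the unit axis
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    # Rodrigues formula: R = I + sin(t) K + (1 - cos(t)) K^2
    R = np.eye(3) + np.sin(angle_rad) * K + (1 - np.cos(angle_rad)) * (K @ K)
    return (np.asarray(points, float) - ref) @ R.T + ref
```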
5. A method according to any one of claims 1 to 3, wherein the data perturbation process comprises a translation process;
The step of performing data disturbance processing on the face data in the face data set to obtain at least two groups of disturbance data includes:
and moving at least one point in the face data to a first direction for a first distance to obtain at least two groups of disturbance data.
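The translation perturbation of claim 5 amounts to adding a fixed offset along a unit direction; a minimal sketch under the same row-vector convention (illustrative names):

```python
import numpy as np

def translate(points, direction, distance):
    """Move face vertices `distance` along the unit vector of `direction`."""
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)          # normalize so `distance` is metric
    return np.asarray(points, float) + distance * d
```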
6. A method according to any one of claims 1 to 3, wherein the data perturbation process comprises a scaling process;
the step of performing data disturbance processing on the face data in the face data set to obtain at least two groups of disturbance data includes:
and taking the midpoint of the line connecting the two ears in the face data as the origin of coordinates, and uniformly scaling the values of the face data to obtain at least two groups of disturbance data.
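The scaling perturbation of claim 6 can be sketched as uniform scaling about the midpoint of the two ear landmarks; the ear positions are assumed inputs here, since the patent does not specify how they are located:

```python
import numpy as np

def scale_about_ear_center(points, left_ear, right_ear, factor):
    """Uniformly scale face vertices about the midpoint of the two ears."""
    origin = (np.asarray(left_ear, float) + np.asarray(right_ear, float)) / 2
    # Shift to the ear-midpoint origin, scale, and shift back
    return origin + factor * (np.asarray(points, float) - origin)
```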
7. A method according to any one of claims 1 to 3, wherein said acquiring a face dataset comprises:
acquiring an original face data set, wherein the original face data set comprises at least one group of face data;
mirroring the face data in the original face data set to obtain a first mirrored face data set;
and determining the original face data set and the first mirror face data set as the face data set.
8. A method according to any one of claims 1 to 3, wherein said acquiring a face dataset comprises:
acquiring an original face data set, wherein the original face data set comprises at least two groups of face data;
performing replacement processing on the face data in the original face data set to obtain a first replacement face data set, wherein the replacement processing comprises replacing a first area in first face data with the first area in second face data to obtain replacement face data;
and determining the original face data set and the first replacement face data set as the face data set.
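Region replacement as in claim 8 can be sketched as copying the vertex rows of a chosen facial region (e.g. a nose region) from one registered mesh into another; the region index list is an assumed input:

```python
import numpy as np

def replace_region(face_a, face_b, region_indices):
    """Return a copy of face_a with the vertices of one facial region
    (given by row indices) taken from face_b."""
    out = np.asarray(face_a, float).copy()
    out[region_indices] = np.asarray(face_b, float)[region_indices]
    return out
```

Because both meshes share the same vertex topology after registration, a per-index copy is enough to recombine regions.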
9. A method according to any one of claims 1 to 3, wherein said acquiring a face dataset comprises:
acquiring an original face data set, wherein the original face data set comprises at least one group of face data;
mirroring the face data in the original face data set to obtain a second mirrored face data set;
determining the original face data set and the second mirror face data set as a first extended face data set;
performing replacement processing on the face data in the first extended face data set to obtain a second replacement face data set, wherein the replacement processing comprises replacing a first area in first face data with the first area in second face data to obtain replacement face data;
determining the first extended face data set and the second replacement face data set as a second extended face data set;
mirroring the face data in the second extended face data set to obtain a third mirrored face data set;
and determining the second extended face data set and the third mirror image face data set as the face data set.
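The mirror-replace-mirror expansion of claim 9 composes the operations above; a schematic sketch with placeholder callables (`mirror`, `replace`, and `regions` are assumed interfaces, not the patent's):

```python
def expand_dataset(faces, mirror, replace, regions):
    """Claim-9-style expansion: mirror the set, cross-replace regions
    between all ordered face pairs, then mirror the enlarged set again."""
    first = faces + [mirror(f) for f in faces]            # first extended set
    swapped = [replace(a, b, r)
               for i, a in enumerate(first)
               for j, b in enumerate(first) if i != j
               for r in regions]
    second = first + swapped                              # second extended set
    return second + [mirror(f) for f in second]           # final face data set
```

Each stage roughly doubles or more the set size, which is the point of the expansion: the later PCA sees far more shape variation than the original scans provide.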
10. A human face shape library construction apparatus, the apparatus comprising:
the acquisition module is used for acquiring a face data set, and the face data set stores at least one group of face data;
the disturbance module is used for carrying out data disturbance processing on the face data in the face data set to obtain at least two groups of disturbance data, wherein the data disturbance processing comprises at least one of rotation processing, translation processing and scaling processing;
the computing module is used for obtaining a face shape base of the face data set by performing principal component analysis on the face data set;
the iteration module is used for adding the target disturbance data into the face data set in response to a fitting error of target disturbance data in the disturbance data being greater than an error threshold, and iteratively updating the face data set to obtain a face shape library, wherein the fitting error is an error value between fitted target disturbance data and the target disturbance data, and the fitted target disturbance data is face data obtained by fitting the target disturbance data according to the face shape base of the face data set;
wherein the fitting is performed using a fitting function comprising:

α = argmin over α of { Σ from x = 1 to 3N of [ face_rot(x) − (mu0 + pc0 × ev0 × α)(x) ]² + λ × ‖α‖² }

wherein α is the shape coefficient of the target face; face_rot is a group of disturbance data, unfolded into a [3N × 1] matrix; face_rot(x) is the x-th number in the disturbance data; mu0 is the mean value in the face shape base of the face data set; pc0 is the principal component eigenvector matrix in the face shape base of the face data set; ev0 is the principal component diagonal matrix in the face shape base of the face data set; (mu0 + pc0 × ev0 × α)(x) is the x-th number of the fitted disturbance data obtained by fitting the disturbance data with the face shape base; λ is a regularization coefficient; and N is the number of vertices of the face data.
11. The apparatus of claim 10, wherein the iteration module is further configured to re-determine the face dataset that incorporates the target disturbance data as the face dataset in response to not satisfying an iteration end condition;
the iteration module is further configured to repeat the above steps from the step of performing data disturbance processing on the face data in the face data set to obtain at least two sets of disturbance data.
12. The apparatus of claim 11, wherein the iteration module is further configured to re-determine the face data set to which the target disturbance data has been added as the face data set, in response to there being, in the disturbance data, target disturbance data whose fitting error is greater than the error threshold;
or,
the iteration module is further configured to re-determine the face data set to which the target disturbance data has been added as the face data set, in response to the number of iterations being less than a count threshold.
13. A computer device, characterized in that it comprises a processor and a memory, in which at least one instruction, at least one program, a set of codes or a set of instructions is stored, which is loaded and executed by the processor to implement the method of constructing a face shape library according to any one of claims 1 to 9.
14. A computer readable storage medium, wherein at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the readable storage medium, and the at least one instruction, the at least one program, the set of codes, or the set of instructions are loaded and executed by a processor to implement the method of constructing a face shape library according to any one of claims 1 to 9.
CN202010524242.6A 2020-06-10 2020-06-10 Method, device, equipment and storage medium for constructing face shape library Active CN111581412B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010524242.6A CN111581412B (en) 2020-06-10 2020-06-10 Method, device, equipment and storage medium for constructing face shape library

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010524242.6A CN111581412B (en) 2020-06-10 2020-06-10 Method, device, equipment and storage medium for constructing face shape library

Publications (2)

Publication Number Publication Date
CN111581412A CN111581412A (en) 2020-08-25
CN111581412B true CN111581412B (en) 2023-11-10

Family

ID=72125744

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010524242.6A Active CN111581412B (en) 2020-06-10 2020-06-10 Method, device, equipment and storage medium for constructing face shape library

Country Status (1)

Country Link
CN (1) CN111581412B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006343791A (en) * 2005-06-07 2006-12-21 Hitachi Ltd Face image database preparation method
CN101261677A (en) * 2007-10-18 2008-09-10 Zhou Chunguang New method-feature extraction layer amalgamation for face and iris
KR20130121360A (en) * 2012-04-27 2013-11-06 General Electric Company Optimal gradient pursuit for image alignment
DE102012103738A1 (en) * 2012-04-27 2013-11-14 General Electric Co. Method for aligning face image of person e.g. for detection of face features, involves training appearance model component with training data to estimate score function and to minimize angle between gradient- and ideal way directions
CN105608710A (en) * 2015-12-14 2016-05-25 四川长虹电器股份有限公司 Non-rigid face detection and tracking positioning method
CN108108677A (en) * 2017-12-12 2018-06-01 重庆邮电大学 One kind is based on improved CNN facial expression recognizing methods
CN108229276A (en) * 2017-03-31 2018-06-29 北京市商汤科技开发有限公司 Neural metwork training and image processing method, device and electronic equipment
CN110569756A (en) * 2019-08-26 2019-12-13 长沙理工大学 face recognition model construction method, recognition method, device and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Three-dimensional face reconstruction and recognition based on a deformable model; Xiang Congying; China Masters' Theses Full-text Database (Information Science and Technology); 2019, No. 2; I138-2181 *

Also Published As

Publication number Publication date
CN111581412A (en) 2020-08-25

Similar Documents

Publication Publication Date Title
US10304244B2 (en) Motion capture and character synthesis
CN111369681B (en) Three-dimensional model reconstruction method, device, equipment and storage medium
CN112614213B (en) Facial expression determining method, expression parameter determining model, medium and equipment
CN111028330B (en) Three-dimensional expression base generation method, device, equipment and storage medium
CN111243093B (en) Three-dimensional face grid generation method, device, equipment and storage medium
EP3326156B1 (en) Consistent tessellation via topology-aware surface tracking
Ngo et al. Template-based monocular 3D shape recovery using laplacian meshes
CN110866864A (en) Face pose estimation/three-dimensional face reconstruction method and device and electronic equipment
CN112085835B (en) Three-dimensional cartoon face generation method and device, electronic equipment and storage medium
CN109685873B (en) Face reconstruction method, device, equipment and storage medium
CN111710035B (en) Face reconstruction method, device, computer equipment and storage medium
CN116563493A (en) Model training method based on three-dimensional reconstruction, three-dimensional reconstruction method and device
CN113593001A (en) Target object three-dimensional reconstruction method and device, computer equipment and storage medium
Wu et al. [Retracted] 3D Film Animation Image Acquisition and Feature Processing Based on the Latest Virtual Reconstruction Technology
Lalos et al. Signal processing on static and dynamic 3d meshes: Sparse representations and applications
CN116363320B (en) Training of reconstruction model and three-dimensional model reconstruction method, device, equipment and medium
Kazmi et al. Efficient sketch‐based creation of detailed character models through data‐driven mesh deformations
CN111581412B (en) Method, device, equipment and storage medium for constructing face shape library
CN114757822B (en) Binocular-based human body three-dimensional key point detection method and system
CN111651623B (en) Method, device, equipment and storage medium for constructing high-precision facial expression library
CN111581411B (en) Method, device, equipment and storage medium for constructing high-precision face shape library
Lee et al. Holistic 3D face and head reconstruction with geometric details from a single image
WO2023127005A1 (en) Data augmentation device, data augmentation method, and computer-readable recording medium
GAO et al. Face Reconstruction Algorithm based on Lightweight Convolutional Neural Networks and Channel-wise Attention
CN117197401A (en) Test method and device for point cloud construction, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40028365

Country of ref document: HK

SE01 Entry into force of request for substantive examination
GR01 Patent grant