CN111428579A - Face image acquisition method and system - Google Patents
- Publication number
- CN111428579A (application number CN202010141606.2A)
- Authority
- CN
- China
- Prior art keywords
- dimensional
- depth information
- processed
- face
- face structure
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/242—Aligning, centring, orientation detection or correction of the image by image rotation, e.g. by 90 degrees
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a method for acquiring a face image, which comprises the following steps: acquiring a picture to be processed, wherein the picture to be processed comprises a face image of a user to be processed; inputting the face image into a key point detection model to obtain a face image containing face key points and the key point coordinates corresponding to the face key points; inputting the picture to be processed into a depth prediction model to obtain depth information of the picture to be processed; reconstructing a three-dimensional face structure of the user to be processed according to the depth information and the key point coordinates; calculating a target face structure according to the three-dimensional face structure and a preset rotation angle; and projecting the target face structure to obtain a target image. The invention also discloses a system for acquiring the face image. The embodiment of the invention improves the accuracy of face image acquisition and picture rotation.
Description
Technical Field
The embodiment of the invention relates to the field of image processing, in particular to a method and a system for acquiring a face image.
Background Art
Most conventional 3D face reconstruction methods are based on image information, such as 3D face reconstruction based on modeling techniques that use one or more of image brightness, edge information, linear perspective, color, relative height, parallax, and the like. Model-based 3D face reconstruction is currently a popular approach; the 3D model is mainly represented by a triangular mesh or a point cloud. The currently popular models are the generic face model (CANDIDE-3), the three-dimensional morphable model (3DMM) and its variants, and 3D face reconstruction algorithms based on the generic face model and the three-dimensional morphable model include both traditional algorithms and deep learning algorithms.
Those skilled in the art will appreciate that the three-dimensional morphable model essentially uses principal component analysis to construct a statistical model, and principal component analysis is in essence a low-pass filter. Such methods are therefore still not ideal at restoring the detailed features of a face. More specifically, to present complex facial expressions, countless tiny wrinkles and subtle changes of color and texture cannot be neglected; because the three-dimensional morphable model adopts a low-pass filtering method, these tiny details cannot be accurately captured and restored, and its ability to present facial expressions is relatively weak. Moreover, if the three-dimensional morphable model is used to rotate the established three-dimensional face, the obtained effect is not ideal and the accuracy is insufficient.
Disclosure of Invention
In view of this, an object of the embodiments of the present invention is to provide a method and a system for acquiring a face image, which improve the accuracy of face image acquisition and image rotation.
In order to achieve the above object, an embodiment of the present invention provides a method for acquiring a face image, including:
acquiring a picture to be processed, wherein the picture to be processed comprises a face image of a user to be processed;
inputting the face image into a key point detection model to obtain a face image containing face key points and key point coordinates corresponding to the face key points;
inputting the picture to be processed into a depth prediction model to obtain depth information of the picture to be processed;
reconstructing a three-dimensional face structure of the user to be processed according to the depth information and the key point coordinates;
calculating to obtain a target face structure according to the three-dimensional face structure and a preset rotation angle;
and projecting the target face structure to obtain a target image.
Further, training the depth prediction model comprises:
acquiring sample depth information and sample pictures of a plurality of sample users through a depth camera;
taking each sample picture and a plurality of sample depth information as the input of a deep learning network model, and outputting, for each sample picture, the target depth information with the maximum confidence;
judging whether the target depth information is sample depth information corresponding to each sample picture;
if yes, the depth prediction model is successfully trained; if not, recalculating the confidence through the loss function so that the target depth information with the maximum confidence becomes the sample depth information.
Further, reconstructing the three-dimensional face structure of the user to be processed according to the depth information and the key point coordinates includes:
inputting the depth information and the key point coordinates into a three-dimensional model to obtain a rough three-dimensional face structure of the user to be processed;
carrying out affine transformation on the key point coordinates and the depth information according to the three-dimensional model to obtain reconstructed point coordinates;
and inputting the reconstructed point coordinates into a three-dimensional model so as to correct the rough three-dimensional face structure to obtain the three-dimensional face structure of the user to be processed.
Further, inputting the depth information and the key point coordinates into a three-dimensional model to obtain a rough three-dimensional face structure of the user to be processed includes:
establishing a two-dimensional face model in the three-dimensional model according to the key point coordinates;
and inputting the depth information into the three-dimensional model so as to obtain the rough three-dimensional face structure according to the two-dimensional face model and the depth information.
Further, performing affine transformation on the key point coordinates and the depth information according to the three-dimensional model to obtain reconstructed point coordinates includes:
determining the vertex coordinates of the key point coordinates;
performing affine transformation on the key point coordinates based on the vertex coordinates to obtain reconstructed point coordinates corresponding to the key point coordinates;
and carrying out affine transformation on the depth information to obtain a reconstruction point coordinate corresponding to the depth information.
Further, according to the three-dimensional face structure and the preset rotation angle, calculating to obtain a target face structure comprises:
determining an Euler angle of the three-dimensional face structure according to the preset rotation angle;
calculating the Euler angle according to the rotation matrix to obtain a rotation gravity center value of the three-dimensional face structure;
and rotating the three-dimensional face structure by the preset rotation angle according to the gravity center value to obtain a target face structure.
Further, projecting the target face structure to obtain a target image includes:
acquiring two-dimensional RGB information of the picture to be processed;
filling the two-dimensional RGB information into the target face structure to obtain a three-dimensional face image;
and performing two-dimensional projection on the three-dimensional face image, and performing interpolation correction to obtain the target image.
In order to achieve the above object, an embodiment of the present invention may further provide a system for acquiring a face image, including:
the device comprises a first acquisition module, a second acquisition module and a processing module, wherein the first acquisition module is used for acquiring a picture to be processed, and the picture to be processed comprises a face image of a user to be processed;
the second acquisition module is used for inputting the face image into the key point detection model so as to obtain face key points and key point coordinates corresponding to the key points;
the third acquisition module is used for inputting the face image and the face key points into a depth prediction model so as to acquire depth information of the face key points;
the reconstruction module is used for reconstructing a three-dimensional face structure of the user to be processed according to the depth information and the key point coordinates;
the calculation module is used for calculating to obtain a target face structure according to the three-dimensional face structure and a preset rotation angle;
and the projection module is used for projecting the target face structure to obtain a target image.
To achieve the above object, an embodiment of the present invention may further provide a computer device, where the computer device includes a memory and a processor, where the memory stores a computer program that is executable on the processor, and the computer program, when executed by the processor, implements the steps of the method for acquiring a face image as described above.
To achieve the above object, the present invention may further provide a computer-readable storage medium, in which a computer program is stored, where the computer program is executable by at least one processor, so as to cause the at least one processor to execute the steps of the above-mentioned face image acquisition method.
According to the method and the system for acquiring the face image, provided by the embodiment of the invention, the depth information of the two-dimensional image is obtained through the depth prediction model, the three-dimensional face structure is reconstructed according to the depth information and the key point coordinates, and the three-dimensional face structure is rotated according to the preset rotation angle to obtain the target image, so that the accuracy of acquiring the face image and rotating the image is improved.
Drawings
Fig. 1 is a flowchart of a first embodiment of a method for acquiring a face image according to the present invention.
Fig. 2 is a flowchart of training the deep prediction network according to a first embodiment of the present invention.
Fig. 3 is a flowchart of step S106 according to an embodiment of the present invention.
Fig. 4 is a flowchart of step S106A according to an embodiment of the present invention.
Fig. 5 is a flowchart of step S106B according to an embodiment of the present invention.
Fig. 6 is a flowchart of step S108 according to an embodiment of the present invention.
Fig. 7 is a flowchart of step S110 according to an embodiment of the present invention.
Fig. 8 is a schematic view of a program module of a second embodiment of the system for acquiring a face image according to the present invention.
Fig. 9 is a schematic diagram of a hardware structure of a third embodiment of the computer apparatus according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example one
Referring to fig. 1, a flowchart illustrating steps of a method for acquiring a face image according to a first embodiment of the present invention is shown. It is to be understood that the flow charts in the embodiments of the present method are not intended to limit the order in which the steps are performed. The following description is made by taking a server as an execution subject. The details are as follows.
Step S100, obtaining a picture to be processed, wherein the picture to be processed comprises a face image of a user to be processed.
Specifically, a picture to be processed, shot by the user to be processed with a camera or mobile phone camera software, is obtained, and the picture to be processed comprises a frontal face image of the user to be processed.
Step S102, inputting the face image into a key point detection model to obtain a face image containing face key points and key point coordinates corresponding to the face key points.
Specifically, the key point detection model divides the face key points into internal key points and contour key points according to the face image. The internal key points comprise 51 points covering the eyebrows, eyes, nose and mouth, and the contour key points comprise 17 points (the embodiment of the invention is trained with a CNN algorithm; other numbers of key points can be obtained by training with other algorithms). A four-level cascade network is adopted for detecting the 51 internal key points: Level-1 is mainly used for obtaining the bounding box of the facial organs; the output of Level-2 is the predicted positions of the 51 key points, which serves as a coarse localization to initialize Level-3; Level-3 performs coarse-to-fine localization separately for the different organs; the input of Level-4 is the output of Level-3 after a certain rotation, and Level-4 finally outputs the positions of the 51 key points. The 17 contour key points are detected with only a two-level cascade network: the first level obtains the bounding box of the face contour as a coarse localization, and the second level directly outputs the final positions of the 17 key points.
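For illustration only, the sketch below performs a comparable key point extraction with the open-source dlib library instead of the cascade network described above (the cascade network itself is not reproduced here). dlib's standard 68-point model happens to split into 17 contour points and 51 internal points, matching the counts above; the image file name is an assumption.

```python
# Illustrative sketch only: dlib's 68-point landmark model (17 contour
# + 51 internal points) stands in for the cascade network of the text.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # standard dlib model file

image = cv2.imread("to_be_processed.jpg")          # picture to be processed (assumed file name)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

faces = detector(gray, 1)                          # bounding boxes of detected faces
shape = predictor(gray, faces[0])                  # 68 landmarks of the first face (assumes one was found)

coords = np.array([[shape.part(i).x, shape.part(i).y] for i in range(68)])
contour_pts = coords[:17]                          # 17 contour key points (jawline)
internal_pts = coords[17:]                         # 51 internal key points (brows, eyes, nose, mouth)
```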
And step S104, inputting the picture to be processed into a depth prediction model to obtain the depth information of the picture to be processed.
Specifically, the depth prediction model has a characteristic of outputting corresponding depth information according to an input to-be-processed picture, and is obtained by training in advance.
Illustratively, referring to fig. 2, training the depth prediction model includes:
in step S104A, sample depth information and sample pictures of a plurality of sample users are acquired by the depth camera.
Structured light technology is relatively complex: it projects a coded grating, line light source or the like onto the measured object, and demodulates the three-dimensional information of the measured object according to the distortion produced in the projected pattern.
Step S104B, taking each sample picture and a plurality of sample depth information as the input of a deep learning network model, and outputting, for each sample picture, the target depth information with the maximum confidence.
Specifically, the sample pictures are input into the deep learning network model, and the depth information of each sample picture is input into one cell of the first network layer of the deep learning network model, until each face key point is input into a cell in one-to-one correspondence; the decoder of the deep learning network model is initialized so that the calculated value of a previous sample picture does not influence the sample picture currently tested; the decoder is set to output a single target parameter, namely the depth information, for each input sample picture; before the target depth information is output, the confidence between each piece of depth information and the sample picture is calculated through a softmax function, and the depth information with the maximum confidence is taken as the target depth information.
Step S104C, determining whether the target depth information is sample depth information corresponding to each sample picture.
Specifically, whether the target depth information is the sample depth information corresponding to each sample picture is judged, in preparation for the subsequent convergence of the model.
Step S104D, if yes, the depth prediction model is successfully trained; if not, recalculating the confidence through the loss function so that the target depth information with the maximum confidence becomes the sample depth information.
If the target depth information is not the sample depth information corresponding to the sample picture, an L2 loss function is further used to maximize the confidence between each sample picture and its corresponding depth information, so that each sample picture obtains its corresponding depth information and the depth prediction network is obtained.
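As a hedged illustration of this training scheme (the network architecture, tensor sizes and synthetic data below are assumptions, not the embodiment's actual model), a PyTorch-style sketch could look as follows: each sample picture is scored against all candidate sample depth maps, a softmax turns the scores into confidences, and an L2 (MSE) loss is applied while the maximum-confidence depth is not the picture's own sample depth.

```python
# Assumed architecture and data sizes; for illustration only.
import torch
import torch.nn as nn

class DepthMatcher(nn.Module):
    def __init__(self):
        super().__init__()
        self.img_enc = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU())
        self.dep_enc = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU())

    def forward(self, pictures, depths):
        # pictures: (B, 1, 64, 64); depths: (N, 1, 64, 64) candidate sample depth maps
        f_img = self.img_enc(pictures)            # (B, 128)
        f_dep = self.dep_enc(depths)              # (N, 128)
        scores = f_img @ f_dep.t()                # (B, N) matching scores
        return torch.softmax(scores, dim=1)       # confidence of each candidate depth

model = DepthMatcher()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

pictures = torch.rand(8, 1, 64, 64)               # synthetic sample pictures
depths = torch.rand(8, 1, 64, 64)                 # their sample depth maps (same order)
target = torch.eye(8)                             # picture i corresponds to depth i

for step in range(100):
    conf = model(pictures, depths)                # (8, 8) confidences
    if (conf.argmax(dim=1) == target.argmax(dim=1)).all():
        break                                     # max-confidence depth is the sample depth
    loss = nn.functional.mse_loss(conf, target)   # L2 loss reshapes the confidences
    opt.zero_grad()
    loss.backward()
    opt.step()
```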
And step S106, reconstructing a three-dimensional face structure of the user to be processed according to the depth information and the key point coordinates.
Specifically, the key point coordinates are converted into a three-dimensional model according to the depth information, and the three-dimensional face structure is obtained through correction.
Exemplarily, referring to fig. 3, step S106 further includes:
and S106A, inputting the depth information and the key point coordinates into a three-dimensional model to obtain a rough three-dimensional face structure of the user to be processed.
Specifically, the rough three-dimensional face structure obtained from the depth information and the key point coordinates has unprocessed edges, and the picture obtained when it is rotated is therefore not accurate, so further processing is required.
Exemplarily, referring to fig. 4, the step S106A further includes:
and step S106A1, establishing a two-dimensional face model in the three-dimensional model according to the key point coordinates.
Specifically, the key point coordinates are input into the three-dimensional model and correspond to the x and y axes of the three-dimensional model, so that the two-dimensional face model is established.
Step S106A2, inputting the depth information into the three-dimensional model, so as to obtain the rough three-dimensional face structure according to the two-dimensional face model and the depth information.
Specifically, the depth information is used as the z axis of the three-dimensional model, and the two-dimensional face model is processed accordingly to obtain the rough three-dimensional face structure. That is, the two-dimensional key point coordinates and the depth information are converted into three-dimensional coordinates and displayed in the three-dimensional model.
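Assuming the depth prediction model returns a dense depth map aligned with the picture to be processed, this lifting step can be sketched as follows (function and variable names are illustrative):

```python
import numpy as np

def lift_to_3d(keypoints_2d, depth_map):
    """Build rough 3D face points: (x, y) from the key points, z from the depth map.

    keypoints_2d: (K, 2) integer pixel coordinates of the face key points.
    depth_map:    (H, W) depth predicted for the picture to be processed.
    """
    xs = keypoints_2d[:, 0]
    ys = keypoints_2d[:, 1]
    zs = depth_map[ys, xs]                 # sample depth at each key point
    return np.stack([xs, ys, zs], axis=1)  # (K, 3) rough three-dimensional face structure

# toy example (assumed sizes)
kps = np.array([[10, 12], [30, 40], [25, 55]])
depth = np.random.rand(64, 64).astype(np.float32)
points_3d = lift_to_3d(kps, depth)
```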
And step S106B, carrying out affine transformation on the key point coordinates and the depth information according to the three-dimensional model to obtain reconstructed point coordinates.
Specifically, the key point coordinates and the depth information are subjected to a linear transformation in the three-dimensional model, so that the rough three-dimensional face structure becomes more stereoscopic: the key point coordinates and the depth information of the original face image are mapped onto the three-dimensional model through affine transformation to obtain the reconstructed point coordinates.
Exemplarily, referring to fig. 5, the step S106B further includes:
and step S106B1, determining the vertex coordinates of the key point coordinates.
Specifically, the vertex coordinates are the maximum values of the key point coordinates. When the vertex coordinates are used to apply the affine transformation to the face key points, the face image is transformed based on the vertex to obtain the reconstructed point coordinates; through affine transformation, the key point coordinates can be transformed at multiple angles.
And step S106B2, performing affine transformation on the key point coordinates based on the vertex coordinates to obtain reconstructed point coordinates corresponding to the key point coordinates.
Specifically, affine transformation is performed on each key point coordinate based on the vertex coordinates, and the key point coordinates are mapped into the three-dimensional model, so that reconstructed point coordinates corresponding to the key point coordinates are obtained. Affine transformation is prior art and will not be described herein.
Step S106B3, performing affine transformation on the depth information to obtain the reconstruction point coordinates corresponding to the depth information.
Specifically, the depth information is mapped to a vector on the Z axis, and affine transformation is performed on the vector to obtain a reconstruction point coordinate corresponding to the depth information in the three-dimensional model.
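For illustration, a minimal sketch of applying one affine transformation to the lifted points is given below; the linear part A and translation t are arbitrary assumed values, not the transformation the embodiment would actually estimate from the three-dimensional model.

```python
import numpy as np

def affine_transform(points, A, t):
    """Apply the affine map x' = A @ x + t to each 3D point."""
    return points @ A.T + t

# assumed example: a mild scaling as the linear part plus a translation
A = np.diag([1.05, 1.05, 0.95])
t = np.array([0.0, 0.0, 2.0])

rough_points = np.random.rand(68, 3)                 # rough 3D face structure (assumed)
reconstructed_points = affine_transform(rough_points, A, t)
```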
Step S106C, the reconstruction point coordinates are input into a three-dimensional model, so that the rough three-dimensional face structure is corrected to obtain the three-dimensional face structure of the user to be processed.
Specifically, the rough three-dimensional face structure is corrected by the reconstructed point coordinates, and the three-dimensional face structure of the user to be processed in the three-dimensional model is obtained. Affine transformation is to map a face image to a three-dimensional face structure of a three-dimensional model for correction.
And step S108, calculating to obtain a target face structure according to the three-dimensional face structure and a preset rotation angle.
Specifically, the preset rotation angle is set according to the user requirement, for example 10 degrees or 20 degrees, and is preferably within 30 degrees; within this range, the accuracy of the face image obtained after rotating the three-dimensional face structure is high. The preset rotation angle is then applied through the rotation matrix calculation.
Exemplarily, referring to fig. 6, step S108 further includes:
step S108A, determining the Euler angle of the three-dimensional face structure according to the preset rotation angle.
Specifically, the yaw, pitch and roll angles (Euler angles) are determined according to the preset rotation angle; their values respectively represent the rotation angles of the three-dimensional face structure around the three axes (x, y and z) of the coordinate system. For example, when the preset rotation is 30 degrees about the x axis only, the corresponding Euler angle is (30, 0, 0).
Step S108B, calculating the Euler angles according to the rotation matrix to obtain the rotation barycenter value of the three-dimensional face structure.
Specifically, the gravity center value for rotating the three-dimensional face structure is calculated according to the following formula:
R = Rz(α) · Ry(β) · Rx(γ),
where R represents the gravity center (rotation) value, θ is the preset rotation angle from which the Euler angles α, β and γ are determined, and Rz(α), Ry(β) and Rx(γ) are the rotation matrices about the z, y and x axes, corresponding to the single-axis rotations Rz(θ), Ry(θ) and Rx(θ).
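A sketch of steps S108A to S108C, under the assumption that the gravity center value is realized as the centroid of the three-dimensional face points, is:

```python
import numpy as np

def euler_rotation(alpha, beta, gamma):
    """R = Rz(alpha) @ Ry(beta) @ Rx(gamma); angles in radians."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rz = np.array([[ca, -sa, 0.0], [sa, ca, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cb, 0.0, sb], [0.0, 1.0, 0.0], [-sb, 0.0, cb]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cg, -sg], [0.0, sg, cg]])
    return Rz @ Ry @ Rx

def rotate_about_gravity_center(points, R):
    """Rotate 3D points around their centroid (taken here as the gravity center value)."""
    center = points.mean(axis=0)
    return (points - center) @ R.T + center

# illustrative: Euler angle (30, 0, 0), i.e. a 30-degree rotation about the x axis
points_3d = np.random.rand(68, 3)                 # assumed 3D face structure
R = euler_rotation(0.0, 0.0, np.radians(30.0))    # alpha about z, beta about y, gamma about x
target_structure = rotate_about_gravity_center(points_3d, R)
```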
Step S108C, rotating the three-dimensional face structure by the preset rotation angle according to the gravity center value to obtain a target face structure.
Specifically, the target face structure is obtained by rotating the three-dimensional face structure by the preset rotation angle around the point corresponding to its gravity center value.
And step S110, projecting the target face structure to obtain a target image.
Specifically, the rotated target face structure is subjected to two-dimensional projection to obtain a two-dimensional picture, namely the target image; the colors of the picture to be processed are unchanged in the target image obtained after rotation.
Exemplarily, referring to fig. 7, the step S110 further includes:
step S110A, acquiring the two-dimensional RGB information of the to-be-processed picture.
Specifically, the two-dimensional RGB information of the picture to be processed, i.e. its pixel values, is obtained.
Step S110B, filling the two-dimensional RGB information into the target face structure, to obtain a three-dimensional face image.
Specifically, the two-dimensional RGB information is filled into the target face structure by filling the pixels corresponding to the structure; during filling, the rotated coordinates of the corresponding key points are found, and the pixels are filled accordingly to obtain the three-dimensional face image.
And step S110C, performing two-dimensional projection on the three-dimensional face image, and performing interpolation correction to obtain the target image.
Specifically, when the projection conversion is performed, the pixel coordinates of the two-dimensional RGB information are transformed to the corresponding points on the target image; since the transformed coordinates are generally fractional, correction is performed through interpolation, so that the outline and the light and dark areas of the image are displayed more clearly. The interpolation method can be nearest neighbor interpolation, bilinear interpolation or cubic polynomial interpolation.
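A sketch of steps S110A to S110C under simplifying assumptions (orthographic projection onto the x-y plane, colors sampled by a hand-rolled bilinear interpolation; array names and sizes are illustrative):

```python
import numpy as np

def bilinear_sample(image, x, y):
    """Sample image at fractional (x, y) with bilinear interpolation."""
    h, w = image.shape[:2]
    x0, y0 = max(int(np.floor(x)), 0), max(int(np.floor(y)), 0)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * image[y0, x0] + fx * image[y0, x1]
    bot = (1 - fx) * image[y1, x0] + fx * image[y1, x1]
    return (1 - fy) * top + fy * bot

def project_to_image(points_3d, colors, size):
    """Orthographically project colored 3D points onto a (size x size) target image."""
    target = np.zeros((size, size, 3), dtype=np.float32)
    for (x, y, _), color in zip(points_3d, colors):
        xi, yi = int(round(x)), int(round(y))
        if 0 <= xi < size and 0 <= yi < size:
            target[yi, xi] = color
    return target

# assumed toy data: rotated face points carrying RGB from the source picture
source = np.random.rand(64, 64, 3).astype(np.float32)   # picture to be processed
pts = np.random.rand(100, 3) * 63                        # rotated target face structure
colors = np.array([bilinear_sample(source, x, y) for x, y, _ in pts])
image_out = project_to_image(pts, colors, 64)            # target image
```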
Example two
Referring to fig. 8, a schematic diagram of program modules of a second embodiment of the system for acquiring a face image according to the present invention is shown. In this embodiment, the system 20 for acquiring a face image may include or be divided into one or more program modules, and the one or more program modules are stored in a storage medium and executed by one or more processors to implement the present invention and implement the method for acquiring a face image. The program module referred to in the embodiments of the present invention refers to a series of computer program instruction segments capable of performing specific functions, and is more suitable for describing the execution process of the face image acquisition system 20 in a storage medium than the program itself. The following description will specifically describe the functions of the program modules of the present embodiment:
the first obtaining module 200 is configured to obtain a to-be-processed picture, where the to-be-processed picture includes a face image of a to-be-processed user.
Specifically, a picture to be processed, shot by the user to be processed with a camera or mobile phone camera software, is obtained, and the picture to be processed comprises a frontal face image of the user to be processed.
The second obtaining module 202 is configured to input the face image into a key point detection model, so as to obtain a face key point and a key point coordinate corresponding to the key point.
Specifically, the key point detection model divides the face key points into internal key points and contour key points according to the face image. The internal key points comprise 51 points covering the eyebrows, eyes, nose and mouth, and the contour key points comprise 17 points (the embodiment of the invention is trained with a CNN algorithm; other numbers of key points can be obtained by training with other algorithms). A four-level cascade network is adopted for detecting the 51 internal key points: Level-1 is mainly used for obtaining the bounding box of the facial organs; the output of Level-2 is the predicted positions of the 51 key points, which serves as a coarse localization to initialize Level-3; Level-3 performs coarse-to-fine localization separately for the different organs; the input of Level-4 is the output of Level-3 after a certain rotation, and Level-4 finally outputs the positions of the 51 key points. The 17 contour key points are detected with only a two-level cascade network: the first level obtains the bounding box of the face contour as a coarse localization, and the second level directly outputs the final positions of the 17 key points.
A third obtaining module 204, configured to input the face image and the face key points into a depth prediction model, so as to obtain depth information of the face key points.
Specifically, the depth prediction model has a characteristic of outputting corresponding depth information according to an input to-be-processed picture, and is obtained by training in advance.
Illustratively, the third obtaining module 204 is further configured to train the depth prediction network:
sample depth information and sample pictures are obtained for a plurality of sample users by a depth camera.
Structured light technology is relatively complex: it projects a coded grating, line light source or the like onto the measured object, and demodulates the three-dimensional information of the measured object according to the distortion produced in the projected pattern.
And taking each sample picture and a plurality of sample depth information as the input of the deep learning network model, and outputting, for each sample picture, the target depth information with the maximum confidence.
Specifically, the sample pictures are input into the deep learning network model, and the depth information of each sample picture is input into one cell of the first network layer of the deep learning network model, until each face key point is input into a cell in one-to-one correspondence; the decoder of the deep learning network model is initialized so that the calculated value of a previous sample picture does not influence the sample picture currently tested; the decoder is set to output a single target parameter, namely the depth information, for each input sample picture; before the target depth information is output, the confidence between each piece of depth information and the sample picture is calculated through a softmax function, and the depth information with the maximum confidence is taken as the target depth information.
And judging whether the target depth information is sample depth information corresponding to each sample picture.
Specifically, whether the target depth information is the sample depth information corresponding to each sample picture is judged, in preparation for the subsequent convergence of the model.
If yes, the depth prediction model is successfully trained; if not, recalculating the confidence through the loss function so that the target depth information with the maximum confidence becomes the sample depth information.
If the target depth information is not the sample depth information corresponding to the sample picture, an L2 loss function is further used to maximize the confidence between each sample picture and its corresponding depth information, so that each sample picture obtains its corresponding depth information and the depth prediction network is obtained.
A reconstructing module 206, configured to reconstruct a three-dimensional face structure of the user to be processed according to the depth information and the key point coordinates.
Specifically, the key point coordinates are converted into a three-dimensional model according to the depth information, and the three-dimensional face structure is obtained through correction.
Illustratively, the reconstruction module 206 is further configured to:
and inputting the depth information and the key point coordinates into a three-dimensional model to obtain a rough three-dimensional face structure of the user to be processed.
Specifically, the rough three-dimensional face structure obtained from the depth information and the key point coordinates has unprocessed edges, and the picture obtained when it is rotated is therefore not accurate, so further processing is required.
And carrying out affine transformation on the key point coordinates and the depth information according to the three-dimensional model to obtain reconstructed point coordinates.
Specifically, the key point coordinates and the depth information are subjected to a linear transformation in the three-dimensional model, so that the rough three-dimensional face structure becomes more stereoscopic: the key point coordinates and the depth information of the original face image are mapped onto the three-dimensional model through affine transformation to obtain the reconstructed point coordinates.
And inputting the reconstructed point coordinates into a three-dimensional model so as to correct the rough three-dimensional face structure to obtain the three-dimensional face structure of the user to be processed.
Specifically, the rough three-dimensional face structure is corrected by the reconstructed point coordinates, and the three-dimensional face structure of the user to be processed in the three-dimensional model is obtained. Affine transformation is to map a face image to a three-dimensional face structure of a three-dimensional model for correction.
And the calculating module 208 is configured to calculate to obtain a target face structure according to the three-dimensional face structure and the preset rotation angle.
Specifically, the preset rotation angle is set according to the user requirement, for example 10 degrees or 20 degrees, and is preferably within 30 degrees; within this range, the accuracy of the face image obtained after rotating the three-dimensional face structure is high. The preset rotation angle is then applied through the rotation matrix calculation.
Illustratively, the calculation module 208 is further configured to:
and determining the Euler angle of the three-dimensional face structure according to the preset rotation angle.
Specifically, the yaw, pitch and roll angles (Euler angles) are determined according to the preset rotation angle; their values respectively represent the rotation angles of the three-dimensional face structure around the three axes (x, y and z) of the coordinate system. For example, when the preset rotation is 30 degrees about the x axis only, the corresponding Euler angle is (30, 0, 0).
And calculating the Euler angle according to the rotation matrix to obtain a rotation gravity center value of the three-dimensional face structure.
Specifically, the gravity center value for rotating the three-dimensional face structure is calculated according to the following formula:
R = Rz(α) · Ry(β) · Rx(γ),
where R represents the gravity center (rotation) value, θ is the preset rotation angle from which the Euler angles α, β and γ are determined, and Rz(α), Ry(β) and Rx(γ) are the rotation matrices about the z, y and x axes, corresponding to the single-axis rotations Rz(θ), Ry(θ) and Rx(θ).
And rotating the three-dimensional face structure by the preset rotation angle according to the gravity center value to obtain a target face structure.
Specifically, the target face structure is obtained by rotating the three-dimensional face structure by the preset rotation angle around the point corresponding to its gravity center value.
And the projection module 210 is configured to project the target face structure to obtain a target image.
Specifically, the rotated target face structure is subjected to two-dimensional projection to obtain a two-dimensional picture, namely the target image; the colors of the picture to be processed are unchanged in the target image obtained after rotation.
Illustratively, the projection module 210 is further configured to:
and acquiring the two-dimensional RGB information of the picture to be processed.
Specifically, the two-dimensional RGB information of the picture to be processed, i.e. its pixel values, is obtained.
And filling the two-dimensional RGB information into the target face structure to obtain a three-dimensional face image.
Specifically, the two-dimensional RGB information is filled into the target face structure by filling the pixels corresponding to the structure; during filling, the rotated coordinates of the corresponding key points are found, and the pixels are filled accordingly to obtain the three-dimensional face image.
And performing two-dimensional projection on the three-dimensional face image, and performing interpolation correction to obtain the target image.
Specifically, when the projection conversion is performed, the pixel coordinates of the two-dimensional RGB information are transformed to the corresponding points on the target image; since the transformed coordinates are generally fractional, correction is performed through interpolation, so that the outline and the light and dark areas of the image are displayed more clearly. The interpolation method can be nearest neighbor interpolation, bilinear interpolation or cubic polynomial interpolation.
Example three
Fig. 9 is a schematic diagram of a hardware architecture of a computer device according to a third embodiment of the present invention. In the present embodiment, the computer device 2 is a device capable of automatically performing numerical calculation and/or information processing in accordance with a preset or stored instruction. The computer device 2 may be a rack server, a blade server, a tower server or a rack server (including an independent server or a server cluster composed of a plurality of servers), and the like. As shown in fig. 9, the computer device 2 includes, but is not limited to, at least a memory 21, a processor 22, a network interface 23, and a face image acquisition system 20, which are communicatively connected to each other via a system bus. Wherein:
in this embodiment, the memory 21 includes at least one type of computer-readable storage medium including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the storage 21 may be an internal storage unit of the computer device 2, such as a hard disk or a memory of the computer device 2. In other embodiments, the memory 21 may also be an external storage device of the computer device 2, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like provided on the computer device 2. Of course, the memory 21 may also comprise both internal and external memory units of the computer device 2. In this embodiment, the memory 21 is generally used for storing an operating system installed in the computer device 2 and various application software, such as the program code of the face image acquisition system 20 in the second embodiment. Further, the memory 21 may also be used to temporarily store various types of data that have been output or are to be output.
Processor 22 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data Processing chip in some embodiments. The processor 22 is typically used to control the overall operation of the computer device 2. In this embodiment, the processor 22 is configured to execute the program code stored in the memory 21 or process data, for example, execute the facial image acquisition system 20, so as to implement the facial image acquisition method according to the first embodiment.
The network interface 23 may comprise a wireless network interface or a wired network interface, and the network interface 23 is generally used for establishing communication connection between the server 2 and other electronic devices. For example, the network interface 23 is used to connect the server 2 to an external terminal via a network, establish a data transmission channel and a communication connection between the server 2 and the external terminal, and the like. The network may be a wireless or wired network such as an Intranet (Intranet), the Internet (Internet), a Global System of Mobile communication (GSM), Wideband Code Division Multiple Access (WCDMA), a 4G network, a 5G network, Bluetooth (Bluetooth), Wi-Fi, and the like. It is noted that fig. 9 only shows the computer device 2 with components 20-23, but it is to be understood that not all shown components are required to be implemented, and that more or less components may be implemented instead.
In this embodiment, the system 20 for acquiring a face image stored in the memory 21 can be further divided into one or more program modules, and the one or more program modules are stored in the memory 21 and executed by one or more processors (in this embodiment, the processor 22) to complete the present invention.
For example, fig. 8 shows a schematic diagram of program modules of a second embodiment of the system 20 for acquiring a face image, in which the system 20 for acquiring a face image can be divided into a first acquiring module 200, a second acquiring module 202, a third acquiring module 204, a reconstructing module 206, a calculating module 208, and a projecting module 210. The program module referred to in the present invention refers to a series of computer program instruction segments capable of performing specific functions, and is more suitable than a program for describing the execution process of the face image acquisition system 20 in the computer device 2. The specific functions of the program modules 200 and 210 have been described in detail in the second embodiment, and are not described herein again.
Example four
The present embodiment also provides a computer-readable storage medium, such as a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, a server, an App application mall, etc., on which a computer program is stored, which when executed by a processor implements corresponding functions. The computer-readable storage medium of the present embodiment is used for storing the system 20 for acquiring a face image, and when being executed by a processor, the system implements the method for acquiring a face image according to the first embodiment.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (10)
1. A method for acquiring a face image is characterized by comprising the following steps:
acquiring a picture to be processed, wherein the picture to be processed comprises a face image of a user to be processed;
inputting the face image into a key point detection model to obtain a face image containing face key points and key point coordinates corresponding to the face key points;
inputting the picture to be processed into a depth prediction model to obtain depth information of the picture to be processed;
reconstructing a three-dimensional face structure of the user to be processed according to the depth information and the key point coordinates;
calculating to obtain a target face structure according to the three-dimensional face structure and a preset rotation angle;
and projecting the target face structure to obtain a target image.
2. The method of claim 1, wherein training the depth prediction model comprises:
acquiring sample depth information and sample pictures of a plurality of sample users through a depth camera;
taking each sample picture and a plurality of sample depth information as the input of a deep learning network model, and outputting, for each sample picture, the target depth information with the maximum confidence;
judging whether the target depth information is sample depth information corresponding to each sample picture;
if yes, the depth prediction model is successfully trained; if not, recalculating the confidence through the loss function so that the target depth information with the maximum confidence becomes the sample depth information.
3. The method of claim 1, wherein reconstructing the three-dimensional face structure of the user to be processed from the depth information and the key point coordinates comprises:
inputting the depth information and the key point coordinates into a three-dimensional model to obtain a rough three-dimensional face structure of the user to be processed;
carrying out affine transformation on the key point coordinates and the depth information according to the three-dimensional model to obtain reconstructed point coordinates;
and inputting the reconstructed point coordinates into a three-dimensional model so as to correct the rough three-dimensional face structure to obtain the three-dimensional face structure of the user to be processed.
4. The method of claim 3, wherein inputting the depth information and the key point coordinates into a three-dimensional model to obtain a rough three-dimensional face structure of the user to be processed comprises:
establishing a two-dimensional face model in the three-dimensional model according to the key point coordinates;
and inputting the depth information into the three-dimensional model so as to obtain the rough three-dimensional face structure according to the two-dimensional face model and the depth information.
5. The method of claim 3, wherein performing affine transformation on the key point coordinates and the depth information according to the three-dimensional model to obtain reconstructed point coordinates comprises:
determining the vertex coordinates of the key point coordinates;
performing affine transformation on the key point coordinates based on the vertex coordinates to obtain reconstructed point coordinates corresponding to the key point coordinates;
and carrying out affine transformation on the depth information to obtain a reconstruction point coordinate corresponding to the depth information.
6. The method of claim 1, wherein calculating the target face structure according to the three-dimensional face structure and a preset rotation angle comprises:
determining an Euler angle of the three-dimensional face structure according to the preset rotation angle;
calculating the Euler angle according to the rotation matrix to obtain a rotation gravity center value of the three-dimensional face structure;
and rotating the three-dimensional face structure by the preset rotation angle according to the gravity center value to obtain a target face structure.
7. The method of claim 1, wherein projecting the target face structure to obtain a target image comprises:
acquiring two-dimensional RGB information of the picture to be processed;
filling the two-dimensional RGB information into the target face structure to obtain a three-dimensional face image;
and performing two-dimensional projection on the three-dimensional face image, and performing interpolation correction to obtain the target image.
8. A system for acquiring a face image, comprising:
the device comprises a first acquisition module, a second acquisition module and a processing module, wherein the first acquisition module is used for acquiring a picture to be processed, and the picture to be processed comprises a face image of a user to be processed;
the second acquisition module is used for inputting the face image into the key point detection model so as to obtain face key points and key point coordinates corresponding to the key points;
the third acquisition module is used for inputting the face image and the face key points into a depth prediction model so as to acquire depth information of the face key points;
the reconstruction module is used for reconstructing a three-dimensional face structure of the user to be processed according to the depth information and the key point coordinates;
the calculation module is used for calculating to obtain a target face structure according to the three-dimensional face structure and a preset rotation angle;
and the projection module is used for projecting the target face structure to obtain a target image.
9. A computer device, characterized in that it comprises a memory, a processor, said memory having stored thereon a computer program operable on said processor, said computer program, when executed by said processor, implementing the steps of the method for acquiring a face image according to any one of claims 1-7.
10. A computer-readable storage medium, in which a computer program is stored which is executable by at least one processor to cause the at least one processor to perform the steps of the method for acquiring a face image according to any one of claims 1 to 7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010141606.2A CN111428579A (en) | 2020-03-03 | 2020-03-03 | Face image acquisition method and system |
PCT/CN2020/135077 WO2021174939A1 (en) | 2020-03-03 | 2020-12-10 | Facial image acquisition method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010141606.2A CN111428579A (en) | 2020-03-03 | 2020-03-03 | Face image acquisition method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111428579A true CN111428579A (en) | 2020-07-17 |
Family
ID=71547535
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010141606.2A Pending CN111428579A (en) | 2020-03-03 | 2020-03-03 | Face image acquisition method and system |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111428579A (en) |
WO (1) | WO2021174939A1 (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111985384A (en) * | 2020-08-14 | 2020-11-24 | 深圳地平线机器人科技有限公司 | Method and device for acquiring 3D coordinates of face key points and 3D face model |
CN112163509A (en) * | 2020-09-25 | 2021-01-01 | 咪咕文化科技有限公司 | Image processing method, image processing device, network equipment and storage medium |
CN112233161A (en) * | 2020-10-15 | 2021-01-15 | 北京达佳互联信息技术有限公司 | Hand image depth determination method and device, electronic equipment and storage medium |
CN112487923A (en) * | 2020-11-25 | 2021-03-12 | 奥比中光科技集团股份有限公司 | Method and system for acquiring training data of human face head posture |
CN112541484A (en) * | 2020-12-28 | 2021-03-23 | 平安银行股份有限公司 | Face matting method, system, electronic device and storage medium |
CN112613357A (en) * | 2020-12-08 | 2021-04-06 | 深圳数联天下智能科技有限公司 | Face measurement method, face measurement device, electronic equipment and medium |
WO2021174939A1 (en) * | 2020-03-03 | 2021-09-10 | 平安科技(深圳)有限公司 | Facial image acquisition method and system |
CN113435342A (en) * | 2021-06-29 | 2021-09-24 | 平安科技(深圳)有限公司 | Living body detection method, living body detection device, living body detection equipment and storage medium |
CN113627394A (en) * | 2021-09-17 | 2021-11-09 | 平安银行股份有限公司 | Face extraction method and device, electronic equipment and readable storage medium |
CN113961734A (en) * | 2021-12-22 | 2022-01-21 | 松立控股集团股份有限公司 | User and vehicle image construction method based on parking data and APP operation log |
CN114067059A (en) * | 2021-11-29 | 2022-02-18 | 广州虎牙科技有限公司 | Three-dimensional face reconstruction method, electronic device and computer-readable storage medium |
CN114373056A (en) * | 2021-12-17 | 2022-04-19 | 云南联合视觉科技有限公司 | Three-dimensional reconstruction method and device, terminal equipment and storage medium |
CN116503524A (en) * | 2023-04-11 | 2023-07-28 | 广州赛灵力科技有限公司 | Virtual image generation method, system, device and storage medium |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113902768B (en) * | 2021-10-11 | 2024-08-13 | 浙江博采传媒有限公司 | Three-dimensional face model edge optimization method and system based on micro-rendering |
CN113920282B (en) * | 2021-11-15 | 2022-11-04 | 广州博冠信息科技有限公司 | Image processing method and device, computer readable storage medium, and electronic device |
CN114387388B (en) * | 2021-11-30 | 2024-09-06 | 中国电子科技集团公司第十五研究所 | Close-range three-dimensional face reconstruction device |
CN114373043A (en) * | 2021-12-16 | 2022-04-19 | 聚好看科技股份有限公司 | Head three-dimensional reconstruction method and equipment |
CN114266860B (en) * | 2021-12-22 | 2024-08-13 | 西交利物浦大学 | Three-dimensional face model building method and device, electronic equipment and storage medium |
CN114581627B (en) * | 2022-03-04 | 2024-04-16 | 合众新能源汽车股份有限公司 | ARHUD-based imaging method and system |
CN114758076A (en) * | 2022-04-22 | 2022-07-15 | 北京百度网讯科技有限公司 | Training method and device for deep learning model for building three-dimensional model |
CN115620094B (en) * | 2022-12-19 | 2023-03-21 | 南昌虚拟现实研究院股份有限公司 | Key point marking method and device, electronic equipment and storage medium |
CN116758124B (en) * | 2023-06-16 | 2024-09-24 | 北京代码空间科技有限公司 | 3D model correction method and terminal equipment |
CN117808989B (en) * | 2023-12-22 | 2024-10-18 | 成都国星宇航科技股份有限公司 | Method and equipment for generating building model by using building elevation image |
CN118411751B (en) * | 2024-07-03 | 2024-09-10 | 宁波星巡智能科技有限公司 | Heart rate measurement stability augmentation method, device and equipment based on facial image processing |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10643383B2 (en) * | 2017-11-27 | 2020-05-05 | Fotonation Limited | Systems and methods for 3D facial modeling |
CN108197587B (en) * | 2018-01-18 | 2021-08-03 | 中科视拓(北京)科技有限公司 | Method for performing multi-mode face recognition through face depth prediction |
CN108376421B * | 2018-02-28 | 2021-05-25 | 浙江神造科技有限公司 | Method for generating a three-dimensional face model based on shape-from-shading |
CN109978930B * | 2019-03-27 | 2020-11-10 | 杭州相芯科技有限公司 | Automatic generation method for stylized three-dimensional face models based on a single image |
CN111428579A (en) * | 2020-03-03 | 2020-07-17 | 平安科技(深圳)有限公司 | Face image acquisition method and system |
- 2020-03-03: CN application CN202010141606.2A, published as CN111428579A (status: Pending)
- 2020-12-10: WO application PCT/CN2020/135077, published as WO2021174939A1 (status: Application Filing)
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102054291A (en) * | 2009-11-04 | 2011-05-11 | 厦门市美亚柏科信息股份有限公司 | Method and device for reconstructing three-dimensional face based on single face image |
US20160328601A1 (en) * | 2014-04-25 | 2016-11-10 | Tencent Technology (Shenzhen) Company Limited | Three-dimensional facial recognition method and system |
CN109697688A * | 2017-10-20 | 2019-04-30 | 虹软科技股份有限公司 | Method and apparatus for image processing |
CN108549873A * | 2018-04-19 | 2018-09-18 | 北京华捷艾米科技有限公司 | Three-dimensional face recognition method and system |
WO2020037676A1 (en) * | 2018-08-24 | 2020-02-27 | 太平洋未来科技(深圳)有限公司 | Three-dimensional face image generation method and apparatus, and electronic device |
CN109508678A * | 2018-11-16 | 2019-03-22 | 广州市百果园信息技术有限公司 | Training method for a face detection model, and detection method and device for face key points |
CN109377556A (en) * | 2018-11-22 | 2019-02-22 | 厦门美图之家科技有限公司 | Facial image characteristic processing method and device |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021174939A1 (en) * | 2020-03-03 | 2021-09-10 | 平安科技(深圳)有限公司 | Facial image acquisition method and system |
CN111985384A (en) * | 2020-08-14 | 2020-11-24 | 深圳地平线机器人科技有限公司 | Method and device for acquiring 3D coordinates of face key points and 3D face model |
CN112163509A (en) * | 2020-09-25 | 2021-01-01 | 咪咕文化科技有限公司 | Image processing method, image processing device, network equipment and storage medium |
CN112163509B (en) * | 2020-09-25 | 2024-05-07 | 咪咕文化科技有限公司 | Image processing method, device, network equipment and storage medium |
CN112233161A (en) * | 2020-10-15 | 2021-01-15 | 北京达佳互联信息技术有限公司 | Hand image depth determination method and device, electronic equipment and storage medium |
CN112233161B (en) * | 2020-10-15 | 2024-05-17 | 北京达佳互联信息技术有限公司 | Hand image depth determination method and device, electronic equipment and storage medium |
CN112487923A * | 2020-11-25 | 2021-03-12 | 奥比中光科技集团股份有限公司 | Method and system for acquiring head pose training data of human faces |
CN112613357A (en) * | 2020-12-08 | 2021-04-06 | 深圳数联天下智能科技有限公司 | Face measurement method, face measurement device, electronic equipment and medium |
CN112613357B (en) * | 2020-12-08 | 2024-04-09 | 深圳数联天下智能科技有限公司 | Face measurement method, device, electronic equipment and medium |
CN112541484B (en) * | 2020-12-28 | 2024-03-19 | 平安银行股份有限公司 | Face matting method, system, electronic device and storage medium |
CN112541484A (en) * | 2020-12-28 | 2021-03-23 | 平安银行股份有限公司 | Face matting method, system, electronic device and storage medium |
CN113435342A * | 2021-06-29 | 2021-09-24 | 平安科技(深圳)有限公司 | Living body detection method, device, equipment and storage medium |
CN113627394A (en) * | 2021-09-17 | 2021-11-09 | 平安银行股份有限公司 | Face extraction method and device, electronic equipment and readable storage medium |
CN113627394B (en) * | 2021-09-17 | 2023-11-17 | 平安银行股份有限公司 | Face extraction method and device, electronic equipment and readable storage medium |
CN114067059A (en) * | 2021-11-29 | 2022-02-18 | 广州虎牙科技有限公司 | Three-dimensional face reconstruction method, electronic device and computer-readable storage medium |
CN114373056A (en) * | 2021-12-17 | 2022-04-19 | 云南联合视觉科技有限公司 | Three-dimensional reconstruction method and device, terminal equipment and storage medium |
CN113961734A * | 2021-12-22 | 2022-01-21 | 松立控股集团股份有限公司 | User and vehicle profile construction method based on parking data and APP operation logs |
CN116503524A (en) * | 2023-04-11 | 2023-07-28 | 广州赛灵力科技有限公司 | Virtual image generation method, system, device and storage medium |
CN116503524B (en) * | 2023-04-11 | 2024-04-12 | 广州赛灵力科技有限公司 | Virtual image generation method, system, device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
WO2021174939A1 (en) | 2021-09-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111428579A (en) | Face image acquisition method and system | |
CN109961507B (en) | Face image generation method, device, equipment and storage medium | |
US11514593B2 (en) | Method and device for image processing | |
CN108305312B (en) | Method and device for generating 3D virtual image | |
US9679192B2 (en) | 3-dimensional portrait reconstruction from a single photo | |
WO2018119889A1 (en) | Three-dimensional scene positioning method and device | |
CN110136243A (en) | Three-dimensional face reconstruction method, system, device and storage medium | |
CN113838176A (en) | Model training method, three-dimensional face image generation method and equipment | |
CN116310076A (en) | Three-dimensional reconstruction method, device, equipment and storage medium based on nerve radiation field | |
CN113689578B (en) | Human body data set generation method and device | |
CN112541484B (en) | Face matting method, system, electronic device and storage medium | |
JP2024525703A (en) | Three-dimensional dynamic tracking method, device, electronic device and storage medium | |
CN111382618B (en) | Illumination detection method, device, equipment and storage medium for face image | |
CN115239861A (en) | Face data enhancement method and device, computer equipment and storage medium | |
CN115984447B (en) | Image rendering method, device, equipment and medium | |
US20160150143A1 (en) | Systems and methods for estimating sky light probes for outdoor images | |
CN112766215A (en) | Face fusion method and device, electronic equipment and storage medium | |
CN111680573A (en) | Face recognition method and device, electronic equipment and storage medium | |
CN117274605B (en) | Method and device for extracting water area outline from photo shot by unmanned aerial vehicle | |
CN110751026B (en) | Video processing method and related device | |
US20240161362A1 (en) | Target-augmented material maps | |
CN114862866B (en) | Calibration plate detection method and device, computer equipment and storage medium | |
CN112967329B (en) | Image data optimization method and device, electronic equipment and storage medium | |
CN112419459B (en) | Method, apparatus, computer device and storage medium for baking model AO mapping | |
CN116681745A (en) | Three-dimensional reconstruction method, device, server and storage medium based on laser depth |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40032300; Country of ref document: HK |
SE01 | Entry into force of request for substantive examination | |