CN111797797A - Face image processing method based on grid deformation optimization, terminal and storage medium - Google Patents

Face image processing method based on grid deformation optimization, terminal and storage medium Download PDF

Info

Publication number
CN111797797A
CN111797797A CN202010668700.3A CN202010668700A CN111797797A CN 111797797 A CN111797797 A CN 111797797A CN 202010668700 A CN202010668700 A CN 202010668700A CN 111797797 A CN111797797 A CN 111797797A
Authority
CN
China
Prior art keywords
face image
optimization
mesh network
feature
lattice
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010668700.3A
Other languages
Chinese (zh)
Other versions
CN111797797B (en
Inventor
解为成
沈琳琳
田怡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN202010668700.3A priority Critical patent/CN111797797B/en
Publication of CN111797797A publication Critical patent/CN111797797A/en
Application granted granted Critical
Publication of CN111797797B publication Critical patent/CN111797797B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention discloses a face image processing method based on mesh deformation optimization, a terminal and a storage medium, wherein the method comprises the following steps: acquiring an attitude face image, and acquiring a first characteristic lattice of the attitude face image and a second characteristic lattice of a predicted frontal face image corresponding to the attitude face image, wherein the first characteristic lattice comprises each first characteristic point, and the second characteristic lattice comprises each second characteristic point; respectively constructing a first mesh network corresponding to the attitude face image and a second mesh network corresponding to the predicted frontal face image; and optimizing the positions of the first characteristic points and the first mesh network according to the second mesh network and the second characteristic dot matrix, and processing the attitude face image into a front face image. The invention realizes the conversion of the attitude face image into the front face image, enables the face recognition technology to recognize the attitude face image and improves the performance of the face recognition system.

Description

Face image processing method based on grid deformation optimization, terminal and storage medium
Technical Field
The invention relates to the technical field of terminals, in particular to a face image processing method based on grid deformation optimization, a terminal and a storage medium.
Background
The face recognition technology is widely applied in various fields, and the current face recognition technology is usually limited to face (face facing the front) recognition, so that the recognition efficiency of the face recognition system is low.
Thus, there is a need for improvements and enhancements in the art.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a face image processing method based on mesh deformation optimization, a terminal and a storage medium, and aims to solve the problem of low recognition efficiency of a face recognition technology in the prior art.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
in a first aspect of the present invention, a face image processing method based on mesh deformation optimization is provided, where the method includes:
acquiring a first feature lattice of an attitude face image and a second feature lattice of a predicted front face image corresponding to the attitude face image, wherein the first feature lattice comprises each first feature point, and the second feature lattice comprises each second feature point;
respectively constructing a first grid network corresponding to the first characteristic lattice and a second grid network corresponding to the second characteristic lattice;
optimizing the first mesh network according to the second mesh network, the second feature lattice and the first feature lattice;
and converting the attitude face image into a target front face image according to the optimized first grid network.
The face image processing method based on mesh deformation optimization, wherein the obtaining of the second feature lattice of the predicted front face image corresponding to the attitude face image comprises:
constructing a face shape database, and acquiring a feature vector of the face shape database;
obtaining the second characteristic lattice according to a first preset formula,
wherein the first preset formula is as follows:
Figure BDA0002581495000000021
Figure BDA0002581495000000022
wherein ,Q0For the vector representation of said second feature lattice, EiRepresenting the ith feature vector of the face shape database,
Figure BDA0002581495000000023
is the average shape of the face shape database, O is the face shape of the posed face image, n0Is a constant number, n0-1 represents the number of removed feature vectors.
The face image processing method based on mesh deformation optimization, wherein the respectively constructing a first mesh network corresponding to the first feature lattice and a second mesh network corresponding to the second feature lattice comprises:
and expanding the number of the first characteristic points in the first characteristic lattice and the number of the second characteristic points in the second characteristic lattice.
The face image processing method based on mesh deformation optimization, wherein the respectively constructing a first mesh network corresponding to the attitude face image and a second mesh network corresponding to the predicted front face image comprises:
respectively constructing the first mesh network and the second mesh network according to a second preset formula;
wherein the second preset formula is as follows:
Pi+1,j+Pi-1,j+Pi,j+1+Pi,j-1-4Pi,j=0
i=0,…,Nu;j=0,…,Nv
wherein ,Pi,jOne grid point, N, in the ith row and jth column in the gridu+1 is the number of rows in the grid, Nv+1 is the number of columns in the grid.
The face image processing method based on mesh deformation optimization, wherein the optimizing the positions of the first feature points and the first mesh network according to the second mesh network, the second feature point matrix and the first feature point matrix includes:
performing primary optimization on the first grid network according to a third preset formula;
re-optimizing the first mesh network subjected to the primary optimization according to a first optimization function, a second optimization function and a third optimization function;
wherein the third preset formula is as follows:
Figure BDA0002581495000000031
wherein ,Pi,j,P′i,jMesh points, Q, of the first mesh network and the second mesh network, respectivelyt,Q′tRespectively P from the first mesh networki,jT-th first feature point in mesh starting at mesh point and P 'from the second mesh network'i,jThe t-th second feature point in the grid from the grid point;
the first optimization function, the second optimization function and the third optimization function are respectively constructed based on smoothness, translation invariance and face bilateral symmetry.
The face image processing method based on mesh deformation optimization is characterized in that the first optimization function is as follows:
ETPS(z(Pi,j))=(zx″u,u)2+2(zx″u,v)2+(zx″v,v)2+(zy″u,u)2+2(zy″u,v)2+(zy″v,v)2
wherein ,z(Pi,j) (zx, zy) represents Pi,jOffset of grid points, zx being Pi,jThe grid points are offset in the u direction by an amount of Pi,jOffset of grid points in the v direction, zx ″u,vRepresents the second directional partial derivatives of zx with respect to the u and v directions;
the second optimization function is:
Figure BDA0002581495000000032
wherein ,z(Qt)=Q′t-QtDenotes Qt,Q′tA translation vector therebetween;
the third optimization function is:
Figure BDA0002581495000000041
wherein ,
Figure BDA0002581495000000042
respectively represent characteristic point columns with the same point sequence on the left side and the right side of the pose face image,
Figure BDA0002581495000000043
and the pixel colors of grid points corresponding to the left side and the right side of the attitude face image in the first grid network are respectively.
The face image processing method based on mesh deformation optimization, wherein the re-optimizing the first mesh network subjected to the initial optimization according to a first optimization function, a second optimization function and a third optimization function includes:
and acquiring a first mesh network which enables the function value of the first optimization function, the second optimization function and the third optimization function to be minimum as an optimization result.
The mesh deformation optimization-based face image processing method comprises the following steps of converting the attitude face image into a target front face image according to the optimized first mesh network:
converting the attitude face image into a middle front face image according to the optimized first grid network;
correcting the middle front face image according to a fourth preset formula to obtain the target front face image;
wherein the fourth preset formula is:
Figure BDA0002581495000000044
wherein ,OLpIs the brightness, OL, of the pixel points p in the shielded region of the intermediate front face imageqIs the brightness, NL, of pixel q in the neighborhood of p pixelsp、NLqBrightness, N, of pixel points corresponding to pixel point p and pixel point q in a non-shielding region corresponding to the shielded region, respectivelyp8 neighbourhood of pixel point p, CEOL、CENLThe variation range of the illumination intensity of the pixel points in the boundary ring region in the sheltered region and the non-sheltered region corresponding to the sheltered region are respectively.
In a second aspect of the present invention, a terminal is provided, where the terminal includes a processor, and a storage medium communicatively connected to the processor, where the storage medium is adapted to store a plurality of instructions, and the processor is adapted to call the instructions in the storage medium to execute the steps of implementing any one of the above-mentioned methods for processing a face image based on mesh deformation optimization.
In a third aspect of the present invention, a storage medium is provided, where one or more programs are stored, and the one or more programs are executable by one or more processors to implement the steps of the mesh deformation optimization-based face image processing method according to any one of the above.
Compared with the prior art, the invention provides a face image processing method based on grid deformation optimization, a terminal and a storage medium, wherein the face image processing method based on grid deformation optimization is used for processing an attitude face image, a second feature lattice of a predicted frontal face image is obtained according to a first feature lattice of the attitude face image, a first grid network of the attitude face image and a second grid network of the predicted frontal face image are constructed, and the first grid network is optimized according to the second grid network, the second feature lattice and the first feature lattice; according to the method and the device, the attitude face image is converted into the target front face image according to the optimized first grid network, the attitude face image is converted into the front face image, the attitude face image can be recognized by a face recognition technology, and the performance of a face recognition system is improved.
Drawings
FIG. 1 is a flowchart of an embodiment of a face image processing method based on mesh deformation optimization according to the present invention;
fig. 2 is a schematic diagram of a first feature lattice of a face image obtained based on mesh deformation optimization in an embodiment of a face image processing method provided by the invention;
FIG. 3 is a schematic diagram of a face shape database in an embodiment of a face image processing method based on mesh deformation optimization according to the present invention;
fig. 4 is a schematic diagram of obtaining a second feature lattice in the embodiment of the face image processing method based on mesh deformation optimization provided by the invention;
FIG. 5 is a flowchart illustrating sub-steps of step S300 of a method for processing a face image based on mesh deformation optimization according to an embodiment of the present invention;
fig. 6 is a schematic diagram of generating an intermediate front face image in an embodiment of the mesh deformation-based optimized face image processing method provided by the present invention;
fig. 7 is a schematic diagram of generating a target front face image in an embodiment of a mesh deformation-based optimized face image processing method provided by the present invention;
fig. 8 is a schematic diagram illustrating an embodiment of a terminal according to the present invention.
Detailed Description
In order to make the objects, technical solutions and effects of the present invention clearer and clearer, the present invention is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Example one
The face image processing method based on the grid deformation optimization provided by the invention can be applied to a terminal, and the terminal can process the attitude face image through the face image processing method based on the grid deformation optimization provided by the invention and convert the attitude face image into a front face image. As shown in fig. 1, in an embodiment of the method for processing a face image based on mesh deformation optimization, the method includes the steps of:
s100, acquiring a first characteristic lattice of the attitude face image and a second characteristic lattice of the predicted frontal face image corresponding to the attitude face image.
In the present embodiment, the front face image represents a face image directed toward the front, the posing face image is a face image other than the front face image, when the attitude face image needs to be converted into a frontal face image, first feature lattices of the attitude face image are obtained, the first feature lattices comprise various first feature points, namely extracting a plurality of feature points in the pose face image to obtain the first feature lattice (as shown in figure 2), the second feature lattice comprises second feature points, the number of the first feature points in the first feature lattice is equal to the number of the second feature points in the second feature lattice, in one possible implementation, the number of feature points in the first feature lattice and the second feature lattice may be 68, the first feature point may be obtained by extracting a feature point on the posture face image. The obtaining of the second feature lattice of the predicted frontal face image corresponding to the attitude face includes:
s110, constructing a face shape database, and obtaining a feature vector of the face shape database.
The predicted front face image is a front face image estimated from the pose face image, that is, the predicted front face image is a virtual object, in this embodiment, the second feature lattice of the predicted front face image corresponding to the pose face image is obtained by a PCA (principal component analysis) method, specifically, a face shape database is first constructed, as shown in fig. 3, a plurality of face images are stored in the face shape database, n feature points are extracted for each face image, and each feature point is extracted for each face imageThe feature lattice corresponding to the face image (the feature lattice corresponding to the face image may also be referred to as the shape of the face) may be represented by a vector O, where O ═ x1,y1...xn,yn)T, wherein ,(xn,yn) The nth feature point is represented. And S120, acquiring the second characteristic dot matrix according to a first preset formula.
Specifically, the first preset formula is as follows:
Figure BDA0002581495000000071
Figure BDA0002581495000000072
wherein ,Q0For the vector representation of said second feature lattice, EiRepresenting the ith feature vector of the face shape database,
Figure BDA0002581495000000073
is the average shape of the face shape database, O is the face shape of the posed face image, n0Is a constant and n0-1 represents the number of removed feature vectors, that is, the top n of the face shape database are removed when the second feature lattice is obtained0-1 feature vector, n0May be set to 2,3, etc. According to the above formula, the shape of the predicted frontal face image (the second feature lattice) corresponding to the pose face image can be acquired, as shown in fig. 4.
Referring to fig. 1 again, the method for processing a face image based on mesh deformation optimization further includes the steps of:
s200, respectively constructing a first mesh network corresponding to the attitude face image and a second mesh network corresponding to the predicted frontal face image.
In a possible implementation manner, in order to obtain more feature points and improve the accuracy of image processing, before the respectively constructing the first mesh network corresponding to the pose face image and the second mesh network corresponding to the predicted front face image, the method further includes:
and expanding the number of the first characteristic points in the first characteristic lattice and the number of the second characteristic points in the second characteristic lattice.
The expansion of the number of feature points in the first feature lattice and the second feature lattice can be performed by a formula:
Figure BDA0002581495000000081
wherein, RS and Q' are the first characteristic lattice and the second characteristic lattice after expansion, Q0、RS0The first characteristic lattice and the second characteristic lattice before expansion are respectively, and b, T and C are respectively preset scale coefficients, transformation matrixes and translation matrixes. b. T, C can be obtained by calculating a sample image in advance, specifically, in this embodiment, the number of feature points in the first feature dot matrix and the second feature dot matrix is the feature points added to the forehead of the human face, and the number of feature points in the first feature dot matrix and the second feature dot matrix after expansion may be 79.
The respectively constructing a first mesh network corresponding to the attitude face image and a second mesh network corresponding to the predicted frontal face image comprises:
respectively constructing the first mesh network and the second mesh network according to a second preset formula;
the second preset formula is as follows:
Pi+1,j+Pi-1,j+Pi,j+1+Pi,j-1-4Pi,j=0
i=0,…,Nu;j=0,…,Nv
wherein ,Pi,jOne grid point, N, in the ith row and jth column in the gridu+1 is the number of rows in the grid, Nv+1 is the number of columns in the grid.
The boundary conditions of the grid are:
the left boundary, the right boundary, the upper boundary and the lower boundary of the grid are respectively:P0,j、P1,j、Pi,0 and Pi,1
The first mesh network and the second mesh network are respectively constructed according to the above formula, it is easy to see that the initial shapes of the first mesh network and the second mesh network are the same, and in the subsequent processing steps, the second mesh network is kept unchanged, and the first mesh network is optimized and adjusted.
S300, optimizing the first grid network according to the second grid network, the second characteristic lattice and the first characteristic lattice.
After the first mesh network and the second mesh network are constructed, optimizing the first mesh network, wherein the optimizing the first mesh network is to adjust the positions of all mesh points in the first mesh network, and the optimization aims to enable the first relative distance of all the first characteristic points relative to the mesh points in the first mesh network to be consistent with the second relative distance of all the second characteristic points relative to the mesh points in the second mesh network, so that the aim of processing the attitude face image into the front face image according to the first mesh network is fulfilled.
Specifically, as shown in fig. 5, the optimizing the positions of the first feature points and the first mesh network according to the second mesh network and the second feature lattice, and the processing the pose face image into a front face image includes:
s310, performing primary optimization on the first grid network according to a third preset formula;
the third preset formula is as follows:
Figure BDA0002581495000000091
wherein ,Pi,j,P′i,jMesh points, Q, of the first mesh network and the second mesh network, respectivelyt,Q′tRespectively P from the first mesh networki,jT-th first feature point sum in grid starting from grid pointFrom P 'in the second mesh network'i,jAnd iteratively optimizing the position of the grid point in the first grid network through the third preset formula at the t-th second feature point in the grid from the grid point.
It is worth mentioning that QtMay not be present from P at firsti,jIn the starting grid, the point Q is corresponded by the third preset formulatWill approach the grid point P gradually after a number of iterationsi,j
S320, re-optimizing the first mesh network subjected to the primary optimization according to a first optimization function, a second optimization function and a third optimization function;
after the step S310, the positions of the mesh points in the first mesh network are preliminarily optimized, and in the step S320, the first mesh network is further optimized.
In the step S320, image deformation optimization is automatically performed by using the similarity between the structure and the texture in the middle domain, and specifically, the first optimization function, the second optimization function, and the third optimization function are respectively constructed based on smoothness, translational invariance, and left-right symmetry of the human face.
The first optimization function is configured to constrain smoothness of a front face image obtained by converting the pose face image according to the optimized first mesh network, and the smaller the value of the first optimization function is, the better the smoothness of the front face image is, and the first optimization function is:
ETPS(z(Pi,j))=(zx″u,u)2+2(zx″u,v)2+(zx″v,v)2+(zy″u,u)2+2(zy″u,v)2+(zy″v,v)2
wherein ,z(Pi,j) (zx, zy) represents Pi,jOffset of grid points, zx being Pi,jThe grid points are offset in the u direction by an amount of Pi,jOffset of grid points in the v direction, zx ″u,vRepresenting the second directional partial derivatives of zx with respect to the u-and v-directions, i.e. in the present embodimentSmoothness is expressed as the sum of thin-plate splines (TPS) in the x-direction and y-direction of the face network.
The second optimization function is used for constraining the translational invariance of the front face image obtained by converting the attitude face image according to the optimized first grid network, the smaller the function value of the second optimization function is, the better the translational invariance of the front face image is, and the second optimization function is as follows:
Figure BDA0002581495000000101
wherein ,z(Qt)=Q′t-QtDenotes Qt,Q′tA translation vector therebetween.
The third optimization function is configured to constrain left-right symmetry of the front face image obtained by converting the pose face image according to the first mesh network, and the smaller a function value of the third optimization function is, the better left-right symmetry of the front face image is, specifically, in this embodiment, the left-right symmetry includes shape symmetry and texture symmetry, and the third optimization function is:
Figure BDA0002581495000000111
wherein ,LSymShapeFor a function of constraining the symmetry of the shape, LSymTexAs a function of constrained texture symmetry
Figure BDA0002581495000000112
Respectively represent characteristic point columns with the same point sequence on the left side and the right side of the pose face image,
Figure BDA0002581495000000113
Figure BDA0002581495000000114
and the pixel colors of grid points corresponding to the left side and the right side of the attitude face image in the first grid network are respectively. According to the formula
Figure BDA0002581495000000115
Feature points in the function that constrains shape symmetry may be converted to grid points such that the grid points are the function LSymShapeThrough a function LSymShapeOptimizing the first mesh network.
Re-optimizing the first mesh network subjected to the primary optimization is performed by solving the first optimization function, the second optimization function, and the third optimization function, and specifically, re-optimizing the first mesh network subjected to the primary optimization according to the first optimization function, the second optimization function, and the third optimization function includes:
and acquiring a first mesh network which enables the function value of the first optimization function, the second optimization function and the third optimization function to be minimum as an optimization result.
The relationships of the first optimization function, the second optimization function, and the third optimization function with the smoothness, the translational invariance, and the left-right symmetry of the image have been described above, and therefore, the first mesh network is optimized (the positions and the pixel values of the respective mesh points in the first mesh network are adjusted) in a constrained manner such that the sum of the first optimization function, the second optimization function, and the third optimization function is the minimum, so that the image quality generated from the optimized first mesh network is higher.
Referring to fig. 1 again, the method for processing a face image based on mesh deformation optimization further includes the steps of:
and S400, converting the attitude face image into a target front face image according to the optimized first grid network.
Specifically, the converting the pose face image into the target front face image according to the optimized first mesh network includes:
s410, converting the attitude face image into an intermediate front face image according to the optimized first grid network;
and S420, correcting the middle front-face image according to a fourth preset formula to obtain the target front-face image.
An image corresponding to the first mesh network may be generated according to the optimized first mesh network, which is the prior art and is not described herein again. In one possible implementation, the intermediate front-face image generated in step S410 is directly taken as the target front-face image, i.e., as a result of the processing of the posed face image. However, for a face image with a large inclined posture, the posture face may include a large occlusion part, and when there is a texture loss in some regions of the intermediate front face image generated from the posture face (as shown in fig. 6), the regions with the texture loss are referred to as occluded regions.
The intermediate front face image is processed by a poisson-based repairing method, and the conventional filling algorithm formula is as follows:
Figure BDA0002581495000000121
wherein ,OLpIs the brightness, OL, of a pixel point p in a shaded region in an imageqIs the brightness, NL, of pixel q in the neighborhood of p pixelsp、NLqBrightness, N, of pixel points corresponding to pixel point p and pixel point q in a non-shielding region corresponding to the shielded region, respectivelypIs 4 neighborhoods of pixel point p, i.e. | Np|=4。
In this embodiment, the above formula is modified, and then the processing on the intermediate face image is performed, where the fourth preset formula is:
Figure BDA0002581495000000131
wherein ,OLpIs the brightness, OL, of the pixel points p in the shielded region of the intermediate front face imageqIs the brightness, NL, of pixel q in the neighborhood of p pixelsp、NLqAre respectively connected with the quiltBrightness, N, of pixel points corresponding to pixel point p and pixel point q in a non-occlusion region corresponding to the occlusion regionp8 neighbourhood of pixel point p, CEOL、CENLThe variation range of the illumination intensity of the pixel points in the boundary ring region in the sheltered region and the non-sheltered region corresponding to the sheltered region are respectively.
As is apparent from the above description, in the present embodiment, the number of adjacent pixels is changed from 4 to 8, the number of equations corresponding to each pixel is changed from 1 to 8, and the ratio coefficient of the luminance difference is obtained, thereby improving the detail information obtained during filling and improving the reality of the filled area. And solving the illumination intensity of the abnormal area in the intermediate face image through the fourth preset formula to obtain the target front face image, as shown in fig. 7.
In summary, the present embodiment provides a face image processing method based on mesh deformation optimization, where the face image processing method based on mesh deformation optimization is configured to process an attitude face image, obtain a second feature lattice of a predicted frontal face image according to a first feature lattice of the attitude face image, construct a first mesh network of the attitude face image and a second mesh network of the predicted frontal face image, and optimize the first mesh network according to the second mesh network, the second feature lattice, and the first feature lattice; according to the method and the device, the attitude face image is converted into the target front face image according to the optimized first grid network, the attitude face image is converted into the front face image, the attitude face image can be recognized by a face recognition technology, and the performance of a face recognition system is improved.
It should be understood that, although the steps in the flowcharts shown in the figures of the present specification are shown in order as indicated by the arrows, the steps are not necessarily performed in order as indicated by the arrows. The steps are not performed in the exact order shown and described, and may be performed in other orders, unless explicitly stated otherwise. Moreover, at least a portion of the steps in the flowchart may include multiple sub-steps or multiple stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of performing the sub-steps or stages is not necessarily sequential, but may be performed alternately or alternately with other steps or at least a portion of the sub-steps or stages of other steps.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, databases, or other media used in embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
Example two
Based on the above embodiments, the present invention further provides a terminal, as shown in fig. 8, where the terminal includes a processor 10 and a memory 20. It is to be understood that fig. 8 only shows some of the components of the terminal, but it is to be understood that not all of the shown components are required to be implemented, and that more or fewer components may be implemented instead.
The memory 20 may in some embodiments be an internal storage unit of the terminal, such as a hard disk or a memory of the terminal. The memory 20 may also be an external storage device of the terminal in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal. Further, the memory 20 may also include both an internal storage unit and an external storage device of the terminal. The memory 20 is used for storing application software installed in the terminal and various data. The memory 20 may also be used to temporarily store data that has been output or is to be output. In an embodiment, the memory 20 stores a face image processing program 30 based on mesh deformation optimization, and the face image processing program 30 based on mesh deformation optimization can be executed by the processor 10, so as to implement the face image processing method based on mesh deformation optimization in the present application.
The processor 10 may be a Central Processing Unit (CPU), a microprocessor or other chip in some embodiments, and is used for running program codes stored in the memory 20 or Processing data, such as executing the mesh deformation optimization-based face image Processing method.
In one embodiment, the following steps are implemented when the processor 10 executes the mesh deformation optimization-based face image processing program 30 in the memory 20:
acquiring a first feature lattice of an attitude face image and a second feature lattice of a predicted front face image corresponding to the attitude face image, wherein the first feature lattice comprises each first feature point, and the second feature lattice comprises each second feature point;
respectively constructing a first grid network corresponding to the first characteristic lattice and a second grid network corresponding to the second characteristic lattice;
optimizing the first mesh network according to the second mesh network, the second feature lattice and the first feature lattice;
and converting the attitude face image into a target front face image according to the optimized first grid network.
Wherein, obtaining the second feature lattice of the predicted frontal face image corresponding to the attitude face image comprises:
constructing a face shape database, and acquiring a feature vector of the face shape database;
obtaining the second characteristic lattice according to a first preset formula,
wherein the first preset formula is as follows:
Figure BDA0002581495000000151
Figure BDA0002581495000000152
wherein ,Q0For the vector representation of said second feature lattice, EiRepresenting the ith feature vector of the face shape database,
Figure BDA0002581495000000162
is the average shape of the face shape database, O is the face shape of the posed face image, n0Is a constant and n0-1 represents the number of removed feature vectors.
Wherein, before the respectively constructing a first mesh network corresponding to the first characteristic lattice and a second mesh network corresponding to the second characteristic lattice, the method comprises:
and expanding the number of the first characteristic points in the first characteristic lattice and the number of the second characteristic points in the second characteristic lattice.
Wherein the respectively constructing a first mesh network corresponding to the attitude face image and a second mesh network corresponding to the predicted frontal face image comprises:
respectively constructing the first mesh network and the second mesh network according to a second preset formula;
wherein the second preset formula is as follows:
Pi+1,j+Pi-1,j+Pi,j+1+Pi,j-1-4Pi,j=0
i=0,…,Nu;j=0,…,Nv
wherein ,Pi,jOne grid point, N, in the ith row and jth column in the gridu+1 is the number of rows in the grid, Nv+1 is the number of columns in the grid.
Wherein the optimizing the positions of the first feature points and the first mesh network according to the second mesh network, the second feature lattice, and the first feature lattice comprises:
performing primary optimization on the first grid network according to a third preset formula;
re-optimizing the first mesh network subjected to the primary optimization according to a first optimization function, a second optimization function and a third optimization function;
wherein the third preset formula is as follows:
Figure BDA0002581495000000161
wherein ,Pi,j,P′i,jMesh points, Q, of the first mesh network and the second mesh network, respectivelyt,Q′tRespectively P from the first mesh networki,jT-th first feature point in mesh starting at mesh point and P 'from the second mesh network'i,jThe t-th second feature point in the grid from the grid point;
the first optimization function, the second optimization function and the third optimization function are respectively constructed based on smoothness, translation invariance and face bilateral symmetry.
Wherein the first optimization function is:
ETPS(z(Pi,j))=(zx″u,u)2+2(zx″u,v)2+(zx″v,v)2+(zy″u,u)2+2(zy″u,v)2+(zy″v,v)2
wherein ,z(Pi,j) (zx, zy) represents Pi,jOffset of grid points, zx being Pi,jThe grid points are offset in the u direction by an amount of Pi,jGrid point is on v sideUpward offset, zx ″)u,vRepresents the second directional partial derivatives of zx with respect to the u and v directions;
the second optimization function is:
Figure BDA0002581495000000171
wherein ,z(Qt)=Q′t-QtDenotes Qt,Q′tA translation vector therebetween;
the third optimization function is:
Figure BDA0002581495000000172
wherein ,
Figure BDA0002581495000000173
respectively represent characteristic point columns with the same point sequence on the left side and the right side of the pose face image,
Figure BDA0002581495000000174
and the pixel colors of grid points corresponding to the left side and the right side of the attitude face image in the first grid network are respectively.
Wherein the re-optimizing the first mesh network that has been initially optimized according to a first optimization function, a second optimization function, and a third optimization function includes:
and acquiring a first mesh network which enables the function value of the first optimization function, the second optimization function and the third optimization function to be minimum as an optimization result.
Wherein the converting the posed face image into a target frontal face image according to the optimized first mesh network comprises:
converting the attitude face image into a middle front face image according to the optimized first grid network;
correcting the middle front face image according to a fourth preset formula to obtain the target front face image;
wherein the fourth preset formula is:
Figure BDA0002581495000000181
wherein ,OLpIs the brightness, OL, of the pixel points p in the shielded region of the intermediate front face imageqIs the brightness, NL, of pixel q in the neighborhood of p pixelsp、NLqBrightness, N, of pixel points corresponding to pixel point p and pixel point q in a non-shielding region corresponding to the shielded region, respectivelyp8 neighbourhood of pixel point p, CEOL、CENLThe variation range of the illumination intensity of the pixel points in the boundary ring region in the sheltered region and the non-sheltered region corresponding to the sheltered region are respectively.
EXAMPLE III
The present invention also provides a storage medium in which one or more programs are stored, the one or more programs being executable by one or more processors to implement the steps of the mesh deformation optimization-based face image processing method as described above.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A face image processing method based on mesh deformation optimization is characterized by comprising the following steps:
acquiring a first feature lattice of an attitude face image and a second feature lattice of a predicted front face image corresponding to the attitude face image, wherein the first feature lattice comprises each first feature point, and the second feature lattice comprises each second feature point;
respectively constructing a first grid network corresponding to the first characteristic lattice and a second grid network corresponding to the second characteristic lattice;
optimizing the first mesh network according to the second mesh network, the second feature lattice and the first feature lattice;
and converting the attitude face image into a target front face image according to the optimized first grid network.
2. The method for processing the human face image based on the mesh deformation optimization according to claim 1, wherein the obtaining of the second feature lattice of the predicted front face image corresponding to the pose face image comprises:
constructing a face shape database, and acquiring a feature vector of the face shape database;
obtaining the second characteristic lattice according to a first preset formula,
wherein the first preset formula is as follows:
Figure FDA0002581494990000011
Figure FDA0002581494990000012
wherein ,Q0For the vector representation of said second feature lattice, EiRepresenting the ith feature vector of the face shape database,
Figure FDA0002581494990000013
is the average shape of the face shape database, O is the face shape of the posed face image, n0Is a constant number, n0-1 represents the number of removed feature vectors.
3. The method for processing a face image based on mesh deformation optimization according to claim 1, wherein before the respectively constructing a first mesh network corresponding to the first feature lattice and a second mesh network corresponding to the second feature lattice, the method comprises:
and expanding the number of the first characteristic points in the first characteristic lattice and the number of the second characteristic points in the second characteristic lattice.
4. The method for processing a human face image based on mesh deformation optimization according to claim 1, wherein the respectively constructing a first mesh network corresponding to the pose face image and a second mesh network corresponding to the predicted front face image comprises:
respectively constructing the first mesh network and the second mesh network according to a second preset formula;
wherein the second preset formula is as follows:
Pi+1,j+Pi-1,j+Pi,j+1+Pi,j-1-4Pi,j=0
i=0,…,Nu;j=0,…,Nv
wherein ,Pi,jFor a grid point in the grid network, which is located at the ith row and the jth column, Nu+1 is the number of rows in the mesh network, Nv+1 is the number of columns in the mesh network.
5. The method for processing a facial image based on mesh deformation optimization according to claim 1, wherein the optimizing the positions of the first feature points and the first mesh network according to the second mesh network, the second feature point lattice and the first feature point lattice comprises:
performing primary optimization on the first grid network according to a third preset formula;
re-optimizing the first mesh network subjected to the primary optimization according to a first optimization function, a second optimization function and a third optimization function;
wherein the third preset formula is as follows:
Figure FDA0002581494990000021
wherein ,Pi,j,P′i,jMesh points, Q, of the first mesh network and the second mesh network, respectivelyt,Q′tRespectively P from the first mesh networki,jT-th first feature point in mesh starting at mesh point and P 'from the second mesh network'i,jThe t-th second feature point in the grid from the grid point;
the first optimization function, the second optimization function and the third optimization function are respectively constructed based on smoothness, translation invariance and face bilateral symmetry.
6. The method for processing a facial image based on mesh deformation optimization according to claim 5, wherein the first optimization function is:
ETPS(z(Pi,j))=(zx″u,u)2+2(zx″u,v)2+(zx″v,v)2+(zy″u,u)2+2(zy″u,v)2+(zy″v,v)2
wherein ,z(Pi,j) (zx, zy) represents Pi,jOffset of grid points, zx being Pi,jThe grid points are offset in the u direction by an amount of Pi,jOffset of grid points in the v direction, zx ″u,vRepresents the second directional partial derivatives of zx with respect to the u and v directions;
the second optimization function is:
ETI(z(Pi,j))=(1-α′-β′)||z(Pi,j)-z(Qt)||2+α′||z(Pi,j+1)-z(Qt)||2+β′||z(Pi+1,j)-z(Qt)||2
Figure FDA0002581494990000031
wherein ,z(Qt)=Q″t-QtDenotes Qt,Q″tA translation vector therebetween;
the third optimization function is:
Figure FDA0002581494990000032
Figure FDA0002581494990000033
wherein ,
Figure FDA0002581494990000034
respectively represent characteristic point columns with the same point sequence on the left side and the right side of the pose face image,
Figure FDA0002581494990000035
and the pixel colors of grid points corresponding to the left side and the right side of the attitude face image in the first grid network are respectively.
7. The method of claim 6, wherein the re-optimizing the first mesh network subjected to the initial optimization according to a first optimization function, a second optimization function and a third optimization function comprises:
and acquiring a first mesh network which enables the function value of the first optimization function, the second optimization function and the third optimization function to be minimum as an optimization result.
8. The mesh deformation optimization-based facial image processing method according to claim 1, wherein the converting the pose face image into the target front face image according to the optimized first mesh network comprises:
converting the attitude face image into a middle front face image according to the optimized first grid network;
correcting the middle front face image according to a fourth preset formula to obtain the target front face image;
wherein the fourth preset formula is:
Figure FDA0002581494990000041
wherein ,OLpIs the brightness, OL, of the pixel points p in the shielded region of the intermediate front face imageqIs the brightness, NL, of pixel q in the neighborhood of p pixelsp、NLqBrightness, N, of pixel points corresponding to pixel point p and pixel point q in a non-shielding region corresponding to the shielded region, respectivelyp8 neighbourhood of pixel point p, CEOL、CENLThe variation range of the illumination intensity of the pixel points in the boundary ring region in the sheltered region and the non-sheltered region corresponding to the sheltered region are respectively.
9. A terminal, characterized in that the terminal comprises: a processor, and a storage medium communicatively connected to the processor, the storage medium being adapted to store a plurality of instructions, the processor being adapted to call the instructions in the storage medium to execute the steps of implementing the mesh deformation optimization-based face image processing method according to any one of the preceding claims 1 to 8.
10. A storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps of the mesh deformation optimization-based face image processing method according to any one of claims 1 to 8.
CN202010668700.3A 2020-07-13 2020-07-13 Face image processing method, terminal and storage medium based on grid deformation optimization Active CN111797797B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010668700.3A CN111797797B (en) 2020-07-13 2020-07-13 Face image processing method, terminal and storage medium based on grid deformation optimization

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010668700.3A CN111797797B (en) 2020-07-13 2020-07-13 Face image processing method, terminal and storage medium based on grid deformation optimization

Publications (2)

Publication Number Publication Date
CN111797797A true CN111797797A (en) 2020-10-20
CN111797797B CN111797797B (en) 2023-09-15

Family

ID=72808406

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010668700.3A Active CN111797797B (en) 2020-07-13 2020-07-13 Face image processing method, terminal and storage medium based on grid deformation optimization

Country Status (1)

Country Link
CN (1) CN111797797B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108304829A (en) * 2018-03-08 2018-07-20 北京旷视科技有限公司 Face identification method, apparatus and system
WO2019100608A1 (en) * 2017-11-21 2019-05-31 平安科技(深圳)有限公司 Video capturing device, face recognition method, system, and computer-readable storage medium
WO2019128508A1 (en) * 2017-12-28 2019-07-04 Oppo广东移动通信有限公司 Method and apparatus for processing image, storage medium, and electronic device
CN110363091A (en) * 2019-06-18 2019-10-22 广州杰赛科技股份有限公司 Face identification method, device, equipment and storage medium in the case of side face

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019100608A1 (en) * 2017-11-21 2019-05-31 平安科技(深圳)有限公司 Video capturing device, face recognition method, system, and computer-readable storage medium
WO2019128508A1 (en) * 2017-12-28 2019-07-04 Oppo广东移动通信有限公司 Method and apparatus for processing image, storage medium, and electronic device
CN108304829A (en) * 2018-03-08 2018-07-20 北京旷视科技有限公司 Face identification method, apparatus and system
CN110363091A (en) * 2019-06-18 2019-10-22 广州杰赛科技股份有限公司 Face identification method, device, equipment and storage medium in the case of side face

Also Published As

Publication number Publication date
CN111797797B (en) 2023-09-15

Similar Documents

Publication Publication Date Title
JP6902122B2 (en) Double viewing angle Image calibration and image processing methods, equipment, storage media and electronics
CN109886878B (en) Infrared image splicing method based on coarse-to-fine registration
US20080317383A1 (en) Adaptive Point-Based Elastic Image Registration
CN112539843B (en) Method and device for detecting temperature and computer equipment
US11144837B2 (en) System, method, and program for predicting information
CN111968134B (en) Target segmentation method, device, computer readable storage medium and computer equipment
JP2009020613A (en) Image processing program, image processing method, and image processor
CN108242063B (en) Light field image depth estimation method based on GPU acceleration
CN112258418A (en) Image distortion correction method, device, electronic equipment and storage medium
CN111681165A (en) Image processing method, image processing device, computer equipment and computer readable storage medium
JP7149124B2 (en) Image object extraction device and program
CN111860582B (en) Image classification model construction method and device, computer equipment and storage medium
CN115293968A (en) Super-light-weight high-efficiency single-image super-resolution method
CN111797797A (en) Face image processing method based on grid deformation optimization, terminal and storage medium
JP7114431B2 (en) Image processing method, image processing device and program
CN106934344B (en) quick pedestrian detection method based on neural network
CN115147389A (en) Image processing method, apparatus, and computer-readable storage medium
CN113344004A (en) Image feature generation method, image recognition method and device
CN113793269B (en) Super-resolution image reconstruction method based on improved neighborhood embedding and priori learning
CN114119593B (en) Super-resolution image quality evaluation method based on texture features of shallow and deep structures
JP7315516B2 (en) Skeleton estimation device and program
CN114331871A (en) Video inverse tone mapping method capable of removing banding artifacts and related equipment
CN116128945B (en) Improved AKAZE image registration method
CN110189272B (en) Method, apparatus, device and storage medium for processing image
TWI819641B (en) Image stitching correction device and method thereof

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant