CN118262392A - Face image correction method, device, equipment and medium - Google Patents

Face image correction method, device, equipment and medium

Info

Publication number
CN118262392A
Authority
CN
China
Prior art keywords
face
key point
face image
image
correction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211690325.8A
Other languages
Chinese (zh)
Inventor
王季源
黄培根
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202211690325.8A priority Critical patent/CN118262392A/en
Publication of CN118262392A publication Critical patent/CN118262392A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a face image correction method, a device, equipment and a medium, which are used for solving the technical problem that the prior art cannot correct expansion deformation and compression deformation, resulting in a poor-looking photo. The method comprises the following steps: performing face key point detection on the obtained face image to obtain a face key point detection result; obtaining depth information corresponding to each face key point based on the face image and the face key point detection result; performing correction processing on the face key point detection result according to the depth information to obtain a face key point correction result, wherein the correction processing comprises expansion deformation correction processing and compression deformation correction processing; and obtaining a corrected face image according to the face key point correction result and the face image.

Description

Face image correction method, device, equipment and medium
Technical Field
The present invention relates to the field of image processing and image capturing, and in particular, to a method, apparatus, device, and medium for face image correction.
Background
In the field of portrait photography, a wide-angle lens is generally used for close-range shooting and a telephoto lens for long-range shooting; shooting at these two distances produces expansion deformation and compression deformation, respectively.
The proportions of the deformed image differ from the facial features seen by the human eye. To improve the photographing effect of portrait scenes, the related art provides a "face protection" algorithm that corrects perspective deformation to some extent, but the current scheme can only correct the abnormal stretching produced when the face contour lies in the edge area of the field of view; it cannot correct expansion deformation or compression deformation, and cannot restore the facial features to the form people perceive in the real world, so the photo looks poor.
Disclosure of Invention
The invention provides a face image correction method, device, equipment and medium, which are used for solving the technical problem that the related art cannot correct expansion deformation and compression deformation, resulting in a poor-looking photo.
In a first aspect, an embodiment of the present invention provides a face image correction method, including:
Performing face key point detection on the obtained face image to obtain a face key point detection result;
obtaining depth information corresponding to each face key point based on the face image and the face key point detection result;
Correcting the detection result of the key points of the human face according to the depth information to obtain the correction result of the key points of the human face, wherein the correction processing comprises expansion deformation correction processing and compression deformation correction processing;
and obtaining a corrected face image according to the face key point correction result and the face image.
In a possible implementation manner, in the method provided by the embodiment of the present invention, before performing face key point detection on the obtained face image to obtain a face key point detection result, the method further includes:
Acquiring an image in real time;
and determining an image containing the face in the preset position as a face image.
In a possible implementation manner, in the method provided by the embodiment of the present invention, face keypoint detection is performed on an obtained face image to obtain a face keypoint detection result, including:
performing face key point detection and face segmentation on the face image to obtain facial feature recognition points and a face contour map;
and determining the facial feature recognition points and the facial contour map as a facial key point detection result.
In a possible implementation manner, in the method provided by the embodiment of the present invention, according to depth information, a correction process is performed on a face key point detection result to obtain a face key point correction result, including:
If the depth information is smaller than or equal to the preset value, performing expansion deformation correction processing on the face key point detection result according to the depth information to obtain a face key point correction result;
And if the depth information is larger than the preset value, performing compression deformation correction processing on the face key point detection result according to the depth information to obtain a face key point correction result.
In a possible implementation manner, in the method provided by the embodiment of the present invention, according to a face key point correction result and a face image, a corrected face image is obtained, including:
based on the face key point correction result and the face key point, determining a mapping relation for correcting the face image, wherein the mapping relation is used for representing the corresponding position relation between the face key point and the face key point correction result;
and correcting the face image according to the mapping relation to obtain a corrected face image.
In a possible implementation manner, in the method provided by the embodiment of the present invention, a face image is corrected according to a mapping relationship, and a corrected face image is obtained, including:
determining a non-face area in the face image based on the face contour map;
Correcting the mapping relation according to the non-face area;
And correcting the face image according to the corrected mapping relation to obtain a corrected face image.
In a second aspect, an embodiment of the present invention provides a face image correction apparatus, including:
The detection unit is used for carrying out face key point detection on the obtained face image to obtain a face key point detection result;
the first processing unit is used for obtaining depth information corresponding to each face key point based on the face image and the face key point detection result;
The second processing unit is used for carrying out correction processing on the face key point detection result according to the depth information to obtain a face key point correction result, wherein the correction processing comprises expansion deformation correction processing and compression deformation correction processing;
and the correction unit is used for obtaining a corrected face image according to the face key point correction result and the face image.
In one possible implementation manner, in the device provided by the embodiment of the present invention, the detection unit is further configured to:
Acquiring an image in real time;
and determining an image containing the face in the preset position as a face image.
In one possible implementation manner, in the device provided by the embodiment of the present invention, the detection unit is specifically configured to:
performing face key point detection and face segmentation on the face image to obtain facial feature recognition points and a face contour map;
and determining the facial feature recognition points and the facial contour map as a facial key point detection result.
In a possible implementation manner, in the apparatus provided by the embodiment of the present invention, the second processing unit is specifically configured to:
If the depth information is smaller than or equal to the preset value, performing expansion deformation correction processing on the face key point detection result according to the depth information to obtain a face key point correction result;
And if the depth information is larger than the preset value, performing compression deformation correction processing on the face key point detection result according to the depth information to obtain a face key point correction result.
In a possible implementation manner, in the device provided by the embodiment of the present invention, the correction unit is specifically configured to:
based on the face key point correction result and the face key point, determining a mapping relation for correcting the face image, wherein the mapping relation is used for representing the corresponding position relation between the face key point and the face key point correction result;
and correcting the face image according to the mapping relation to obtain a corrected face image.
In a possible implementation manner, in the device provided by the embodiment of the present invention, the correction unit is specifically configured to:
determining a non-face area in the face image based on the face contour map;
Correcting the mapping relation according to the non-face area;
And correcting the face image according to the corrected mapping relation to obtain a corrected face image.
In a third aspect, an embodiment of the present invention provides an electronic device, including: at least one processor, at least one memory and computer program instructions stored in the memory, which when executed by the processor implement the method as provided by the first aspect of the embodiments of the invention.
In a fourth aspect, embodiments of the present invention provide a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement a method as provided by the first aspect of embodiments of the present invention.
In the embodiment of the invention, face key point detection is performed on an obtained face image to obtain a face key point detection result, depth information corresponding to each face key point is obtained based on the face image and the face key point detection result, correction processing is performed on the face key point detection result according to the depth information to obtain a face key point correction result, and finally the corrected face image is obtained according to the face key point correction result and the face image. Compared with the related art, a method for correcting image expansion deformation and compression deformation is provided; the facial features in the image can be restored to the form people perceive in the real world, the appearance of the photo is improved, the photo looks more natural, and the user experience is enhanced.
Drawings
Fig. 1 is a schematic diagram of face expansion deformation provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of face compression deformation provided in an embodiment of the present invention;
Fig. 3 is a normal schematic diagram of a face according to an embodiment of the present invention;
fig. 4 is a flow chart of a face image correction method according to an embodiment of the present invention;
fig. 5 is a specific flow chart of a face image correction method according to an embodiment of the present invention;
Fig. 6 is a schematic view of a face frame according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of facial feature recognition points according to an embodiment of the present invention;
Fig. 8 is a schematic view of a face contour provided in an embodiment of the present invention;
fig. 9 is a specific flow chart of a face image correction method according to an embodiment of the present invention;
Fig. 10 is a schematic structural diagram of a face image correction device according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Some words appearing hereinafter are explained:
1. In the embodiment of the invention, the term "and/or" describes the association relation of associated objects and means that three relations can exist; for example, "A and/or B" can represent: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects are in an "or" relationship.
In the field of portrait photographing, a wide-angle lens is generally used for close-range shooting and a telephoto lens for long-range shooting; shooting at these two distances produces the expansion deformation shown in fig. 1 and the compression deformation shown in fig. 2, respectively.
The proportions of these two deformed images differ from the facial features seen by the human eye shown in fig. 3. To improve the photographing effect of portrait scenes, the related art provides a "face protection" algorithm that corrects perspective deformation to some extent, but the current scheme can only correct the abnormal stretching produced when the face contour lies in the edge area of the field of view, away from the optical axis; it cannot correct expansion deformation or compression deformation, and cannot restore the facial features to the form people perceive in the real world, so the photo looks poor.
Therefore, there is a need to provide a face image correction method to solve the above-mentioned problems.
The face image correction method, device, equipment and medium provided by the invention are described in more detail below with reference to the accompanying drawings and embodiments.
An embodiment of the present invention provides a face image correction method, as shown in fig. 4, including:
step S401, face key point detection is carried out on the obtained face image, and a face key point detection result is obtained.
In specific implementation, an image is acquired in real time through an image sensor such as a camera, and an image containing a face at a preset position (for example, the center of the image) is determined as the face image. Face key point detection is then performed on the obtained face image to obtain the face key points; specifically, face key point detection and face segmentation are performed on the face image to obtain facial feature recognition points and a face contour map, and the facial feature recognition points and the face contour map are determined as the face key point detection result.
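The following Python sketch illustrates this step under stated assumptions: the embodiment does not name a particular key point detector, so MediaPipe Face Mesh is used here purely as a stand-in, and the function name is hypothetical. The face contour map would come from a separate face segmentation component, omitted here for brevity.

```python
import cv2
import mediapipe as mp  # illustrative stand-in detector; not prescribed by the embodiment

def detect_face_keypoints(image_bgr):
    """Return face key points as (x, y) pixel coordinates, or None if no face is found."""
    face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1)
    results = face_mesh.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    face_mesh.close()
    if not results.multi_face_landmarks:
        return None
    h, w = image_bgr.shape[:2]
    landmarks = results.multi_face_landmarks[0].landmark
    # Convert normalized landmark coordinates to pixel coordinates.
    return [(lm.x * w, lm.y * h) for lm in landmarks]
```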
Step S402, based on the face image and the face key point detection result, depth information corresponding to each face key point is obtained.
In specific implementation, the distance between the face in the face image and the image sensor is determined according to the face image, and the depth information corresponding to each face key point is then determined according to the facial feature recognition points and the face contour map in the face key point detection result. The depth information of the face image may be obtained by image calculation or with the assistance of a distance sensor or the like, which is not limited in the embodiment of the present invention.
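A minimal sketch of this step, assuming a per-pixel depth map aligned with the face image is already available (computed from the image or supplied by a distance sensor); the function name and the clamping behaviour are illustrative assumptions.

```python
import numpy as np

def keypoint_depths(depth_map, keypoints):
    """Sample the depth (e.g. in metres) of each face key point from an aligned depth map."""
    h, w = depth_map.shape[:2]
    depths = []
    for x, y in keypoints:
        xi = int(np.clip(round(x), 0, w - 1))  # clamp to the image bounds
        yi = int(np.clip(round(y), 0, h - 1))
        depths.append(float(depth_map[yi, xi]))
    return depths
```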
Step S403, correcting the detection result of the key points of the human face according to the depth information to obtain the correction result of the key points of the human face.
In specific implementation, the correction processing in this step comprises expansion deformation correction processing and compression deformation correction processing, and different processing modes are selected according to the depth information. If the depth information is smaller than or equal to a preset value, expansion deformation correction processing is performed on the face key point detection result according to the depth information to obtain the face key point correction result; if the depth information is larger than the preset value, compression deformation correction processing is performed on the face key point detection result according to the depth information to obtain the face key point correction result. The preset value can be set according to actual requirements; the embodiment of the present invention takes 1 m as an example. When the depth information of the face area is larger than 1 m, the compression deformation elimination mode is selected, namely the first face image correction model is used; when the depth information of the face area is smaller than or equal to 1 m, the expansion deformation elimination mode is selected, namely the second face image correction model is used.
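The selection logic described above can be sketched as follows, with the 1 m preset value of this embodiment; the model objects themselves are placeholders for the first and second face image correction models described below.

```python
DEPTH_THRESHOLD_M = 1.0  # the preset value used as an example in this embodiment

def select_correction_model(face_depth_m, expansion_model, compression_model):
    """Depth <= 1 m: expansion deformation correction (second model);
    depth > 1 m: compression deformation correction (first model)."""
    if face_depth_m <= DEPTH_THRESHOLD_M:
        return expansion_model
    return compression_model
```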
Specifically, the expansion deformation correction processing and the compression deformation correction processing can be calculated through models. The first face image correction model, which is responsible for the compression deformation correction processing, is generated by training as follows: a plurality of training sample images are first acquired; each of the plurality of training sample images is then taken as input, the face key point correction result output for the training sample image is compared with the labeling result of that training sample image, and the first face image correction model is generated by training based on the difference between the two. Here the face key point correction result corresponding to each training sample image represents the face image in which compression deformation has been eliminated, and the labeling result is the face key point correction result labeled in the training sample image in advance. The second face image correction model, which is responsible for the expansion deformation correction processing, is trained in the same manner, except that the face key point correction result corresponding to each training sample image represents the face image in which expansion deformation has been eliminated.
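A hedged sketch of such a training loop is given below, using PyTorch only as an example framework; the network architecture, key point count and data loader are assumptions for illustration and are not prescribed by the embodiment.

```python
import torch
import torch.nn as nn

N_KEYPOINTS = 106  # hypothetical key point count

# Hypothetical correction network: key point coordinates plus depth in, corrected coordinates out.
model = nn.Sequential(
    nn.Linear(3 * N_KEYPOINTS, 256),
    nn.ReLU(),
    nn.Linear(256, 2 * N_KEYPOINTS),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def train_epoch(train_loader):
    """train_loader yields (key points + depth, labeled correction result) pairs per sample image."""
    for inputs, labeled_correction in train_loader:
        predicted_correction = model(inputs)
        # Train on the difference between the output correction result and the labeling result.
        loss = loss_fn(predicted_correction, labeled_correction)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```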
Step S404, obtaining a corrected face image according to the face key point correction result and the face image.
In specific implementation, a mapping relation for correcting the face image is determined based on the face key point correction result and the face key points, and the face image is then corrected according to the mapping relation to obtain the corrected face image. Before the face image is corrected, a non-face area in the face image can be determined based on the face contour map, the mapping relation is corrected according to the non-face area, and finally the face image is corrected according to the corrected mapping relation, so that the transition between the corrected face and the other, non-face areas is smooth.
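A minimal sketch of this warping step, assuming the sparse correspondence between the original and corrected key points is extended to a dense backward map and applied with OpenCV; SciPy's griddata is used here only as one possible interpolator, and the identity fallback outside the key point hull is an illustrative choice.

```python
import cv2
import numpy as np
from scipy.interpolate import griddata

def warp_with_keypoint_mapping(image, src_keypoints, dst_keypoints):
    """Warp the face image so that src_keypoints move toward dst_keypoints."""
    h, w = image.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    src = np.asarray(src_keypoints, dtype=np.float32)
    dst = np.asarray(dst_keypoints, dtype=np.float32)
    # For every destination pixel, interpolate where it should sample from in the source image.
    map_x = griddata(dst, src[:, 0], (grid_x, grid_y), method="linear")
    map_y = griddata(dst, src[:, 1], (grid_x, grid_y), method="linear")
    # Outside the key point hull, fall back to the identity mapping (no correction).
    map_x = np.where(np.isnan(map_x), grid_x, map_x).astype(np.float32)
    map_y = np.where(np.isnan(map_y), grid_y, map_y).astype(np.float32)
    return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```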
As shown in fig. 5, the specific process for face image correction provided by the embodiment of the present invention may include the following steps:
Step S501, images are acquired in real time, and the images containing faces in preset positions are determined to be face images.
In specific implementation, an image is obtained in real time through an image sensor such as a camera, and an image containing a face at a preset position (for example, the center of the image) is determined as the face image according to face frame recognition or another method; if the image does not contain a face, the image is not processed.
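A small illustrative check for this step, assuming the face frame is given as a (left, top, width, height) box; the centering tolerance is a hypothetical parameter.

```python
def contains_centered_face(image, face_box, tol=0.25):
    """Treat the frame as a face image only when the detected face box lies near the image centre."""
    h, w = image.shape[:2]
    left, top, bw, bh = face_box
    cx, cy = left + bw / 2.0, top + bh / 2.0
    return abs(cx - w / 2.0) < tol * w and abs(cy - h / 2.0) < tol * h
```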
In one example, an image as shown in FIG. 1 is acquired, and then the face position is identified by a face frame as shown in FIG. 6.
Step S502, face key point detection is carried out on the obtained face image, and a face key point detection result is obtained.
In specific implementation, face key point detection is performed on the obtained face image to obtain the face key points; specifically, face key point detection and face segmentation are performed on the face image to obtain facial feature recognition points and a face contour map, and the facial feature recognition points and the face contour map are determined as the face key point detection result.
Still using the above example, after the face key point detection is performed on fig. 1, the facial feature recognition point shown in fig. 7 and the face contour map shown in fig. 8 are obtained.
Step S503, based on the face image and the face key point detection result, obtaining depth information corresponding to each face key point.
In specific implementation, the distance between the face in the face image and the image sensor is determined according to the face image, and the depth information corresponding to each face key point is then determined according to the facial feature recognition points and the face contour map in the face key point detection result.
In this step, the non-face area may also be determined from the face contour map.
Step S504, correcting the detection result of the key points of the human face according to the depth information to obtain the correction result of the key points of the human face.
In specific implementation, the correction processing in this step comprises expansion deformation correction processing and compression deformation correction processing, and different processing modes are selected according to the depth information. If the depth information is smaller than or equal to a preset value, expansion deformation correction processing is performed on the face key point detection result according to the depth information to obtain the face key point correction result; if the depth information is larger than the preset value, compression deformation correction processing is performed on the face key point detection result according to the depth information to obtain the face key point correction result. The preset value can be set according to actual requirements; the embodiment of the present invention takes 1 m as an example. The compression deformation elimination mode is selected when the depth information of the face area is larger than 1 m, and the expansion deformation elimination mode is selected when it is smaller than or equal to 1 m.
Specifically, in the calculation, a face key point correction result is obtained according to the following formula:
{V(x,y)} = f(x, y, w, h, d(x,y));
where V(x,y) denotes the facial-feature target point to be optimized in the face image, (x, y) denotes the perspective-corrected coordinates of the model, that is, the coordinates after model correction, w is the horizontal offset of the current coordinates from the optical center (by default the center of the image), h is the vertical offset in pixels, and d(x,y) denotes the absolute depth of the facial-feature point, that is, the depth information of that point.
Still using the above example, elimination of compression deformation is performed.
In another example, if the image obtained in S501 is as shown in fig. 2, elimination of expansion deformation is performed. The face image in this example and the face image in the previous example are of the same person photographed at different distances.
Step S505, the mapping relation is corrected according to the non-face area.
In specific implementation, the mapping relationship is corrected according to the non-face region determined in step S503.
Specifically, a minimum energy function with smooth transition of a face region and other regions is constructed as a loss function. The loss function formula is as follows:
where v_i denotes the target optimization coordinates of each region in the input image, and the corresponding variable denotes the actual coordinates of each energy item after optimization. E_t is the optimization quantity, represented by the sub-terms in the following formula.
E_t = λ_{s,k} E_{s,k} + λ_b E_b + λ_r E_r
where λ_{s,k}, λ_b and λ_r denote the weights of the respective sub-terms and are adjustable parameters. E_{s,k} denotes the face region optimization term. E_b is a straight-line preservation term, which prevents the correction and stretching of the face area from affecting the straight lines around the face. E_r is a regularization term, which ensures a smooth transition across the whole image.
For E_{s,k}, the following formula is used:
where w_j is the weight of the face term: when the point is determined to be a point at the center of the face contour, the weight is 1; otherwise, the weight is 0. m_j denotes the correction weight of the facial features and contour points, V_i denotes the point to be optimized, and U_i denotes the ideal optimization target point, which can be obtained by calculation. S_k and t_k denote the similarity transform, namely the rotation vector and the translation vector, respectively; a slight rotation and translation can find a better vertex distribution and avoid abnormal enlargement or reduction of the facial features and contour, so a regularization term λ(S_k) is added to preserve the scale of the face.
For E_b, the following formula is used:
where V_j denotes the four-neighborhood points of the current optimization point, e_ij denotes the unit vectors in the x-axis and y-axis directions, and N(i) is minimized to preserve straight lines.
For E_r, the following formula is used:
Regularization is used to encourage smoothness between each optimization point and its four-neighborhood points, so as to optimize the entire image.
Finally, the adjustable parameters are tuned and the formula E_t is optimized and solved, yielding a full-image mapping table between the target result and the input image.
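A simplified sketch of this optimization is shown below, assuming the mesh vertices, target positions, face weights and neighbour pairs are already prepared; the straight-line term E_b and the regularization term E_r of the embodiment are collapsed here into a single neighbour-smoothness term, so this is only an approximation of E_t, not the exact formulation.

```python
import numpy as np
from scipy.optimize import minimize

def total_energy(v_flat, u_target, face_weight, edges, lam_s=1.0, lam_b=0.5, lam_r=0.1):
    """Approximate E_t = lam_s*E_s + lam_b*E_b + lam_r*E_r over 2-D mesh vertices v."""
    v = v_flat.reshape(-1, 2)
    # Face-region term: pull weighted vertices toward their corrected target positions.
    e_s = np.sum(face_weight[:, None] * (v - u_target) ** 2)
    # Smoothness stand-in for the straight-line and regularization terms:
    # neighbouring vertices should move together.
    diffs = v[edges[:, 0]] - v[edges[:, 1]]
    e_smooth = np.sum(diffs ** 2)
    return lam_s * e_s + (lam_b + lam_r) * e_smooth

# v0: initial vertex grid, u: target positions, w: 1 inside the face region else 0,
# edges: index pairs of neighbouring vertices.
# result = minimize(total_energy, v0.ravel(), args=(u, w, edges), method="L-BFGS-B")
# mapping = result.x.reshape(-1, 2)  # optimized vertices used to build the full-image mapping table
```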
Step S506, the face image is corrected according to the corrected mapping relation, and the corrected face image is obtained.
Still using the above example, the two face images shown in fig. 1 and fig. 2 yield, after correction, the face image shown in fig. 3.
As shown in fig. 9, a specific flow of the face image correction method provided by the embodiment of the present invention is described in detail.
Step 901, shooting an image. Step 902 is then performed.
In specific implementation, an image is acquired in real time through an image sensor such as a camera.
Step 902, a face detection box. Steps 903 and 904 are performed if a face is detected, otherwise step 914 is performed.
Step 903, acquiring a two-dimensional image. Steps 905 and 906 are then performed.
In practice, two-dimensional image information is extracted from an image.
Step 904, obtaining image depth information. Step 906 is then performed.
Step 905, face key point detection. Step 908 is then performed.
Step 906, face segmentation, obtaining the area to be optimized. Step 907 is then performed.
Step 907, obtaining the absolute depth of the face area. Step 909 is then performed.
Step 908, determining that the face area is at the image position. If at the center, step 909 is performed, and if at the edge, step 910 is performed.
Step 909, "full focal-range face correction" model selection. Step 911 is then performed.
In the specific implementation, if the depth information is smaller than or equal to a preset value, performing expansion deformation correction processing on the face key point detection result by using an expansion deformation model according to the depth information to obtain a face key point correction result, and if the depth information is larger than the preset value, performing compression deformation correction processing on the face key point detection result by using a compression deformation model according to the depth information to obtain a face key point correction result.
Step 910, calculating the contour coordinate mapping of the conventional LDC edge-face sphere polar model. Step 912 is then performed.
In specific implementation, the conventional LDC edge-face sphere polar model is prior art and is not described again in the embodiment of the present invention.
Step 911, obtaining the mapping relation of the facial feature contours of the central face as the depth changes. Step 912 is then performed.
In specific implementation, the mapping relation of the facial feature contours of the central face with changing depth is obtained according to the model processing.
Step 912, obtaining a full map mapping table of the target result and the original image. Step 913 is then performed.
In specific implementation, a full-image mapping table between the target result and the original image is obtained according to the mapping relation. In this step, the result of the conventional LDC edge-face sphere polar model can also be used to obtain the full-image mapping table.
Step 913, interpolation obtains the final result. Step 914 is then performed.
In specific implementation, the final result is obtained by bilinear interpolation on the full map mapping table.
Step 914, end.
As shown in fig. 10, the present invention further provides a facial image correction device based on the same inventive concept as the facial image correction method, including:
The detection unit 1001 is configured to perform face key point detection on the obtained face image, to obtain a face key point detection result;
A first processing unit 1002, configured to obtain depth information corresponding to each face key point based on the face image and the face key point detection result;
A second processing unit 1003, configured to perform correction processing on the face key point detection result according to the depth information, to obtain a face key point correction result, where the correction processing includes expansion deformation correction processing and compression deformation correction processing;
The correcting unit 1004 is configured to obtain a corrected face image according to the face key point correction result and the face image.
In one possible implementation manner, in the device provided by the embodiment of the present invention, the detection unit 1001 is further configured to:
Acquiring an image in real time;
and determining an image containing the face in the preset position as a face image.
In one possible implementation manner, in the device provided by the embodiment of the present invention, the detection unit 1001 is specifically configured to:
performing face key point detection and face segmentation on the face image to obtain facial feature recognition points and a face contour map;
and determining the facial feature recognition points and the facial contour map as a facial key point detection result.
In a possible implementation manner, in the apparatus provided by the embodiment of the present invention, the second processing unit 1003 is specifically configured to:
If the depth information is smaller than or equal to the preset value, performing expansion deformation correction processing on the face key point detection result according to the depth information to obtain a face key point correction result;
And if the depth information is larger than the preset value, performing compression deformation correction processing on the face key point detection result according to the depth information to obtain a face key point correction result.
In one possible implementation manner, in the apparatus provided by the embodiment of the present invention, the correction unit 1004 is specifically configured to:
based on the face key point correction result and the face key point, determining a mapping relation for correcting the face image, wherein the mapping relation is used for representing the corresponding position relation between the face key point and the face key point correction result;
and correcting the face image according to the mapping relation to obtain a corrected face image.
In one possible implementation manner, in the apparatus provided by the embodiment of the present invention, the correction unit 1004 is specifically configured to:
determining a non-face area in the face image based on the face contour map;
Correcting the mapping relation according to the non-face area;
And correcting the face image according to the corrected mapping relation to obtain a corrected face image.
In addition, the face image correction method and apparatus according to the embodiments of the present invention described in connection with fig. 4 to 10 may be implemented by an electronic device. Fig. 11 shows a schematic hardware structure of an electronic device according to an embodiment of the present invention.
Referring now in particular to fig. 11, a schematic diagram of an electronic device 1100 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device shown in fig. 11 is merely an example, and should not impose any limitations on the functionality and scope of use of embodiments of the present disclosure.
As shown in fig. 11, the electronic device 1100 may include a processing means (e.g., a central processor, a graphics processor, etc.) 1101 that may perform various suitable actions and processes to implement the face image correction method of the embodiments described in the present disclosure, according to a program stored in a Read Only Memory (ROM) 1102 or a program loaded from a storage means 1108 into a Random Access Memory (RAM) 1103. In the RAM 1103, various programs and data necessary for the operation of the electronic device 1100 are also stored. The processing device 1101, ROM 1102, and RAM 1103 are connected to each other by a bus 1104. An input/output (I/O) interface 1105 is also connected to bus 1104.
In general, the following devices may be connected to the I/O interface 1105: input devices 1106 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 1107 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 1108, including for example, magnetic tape, hard disk, etc.; and a communication device 1109. The communication means 1109 may allow the electronic device 1100 to communicate wirelessly or by wire with other devices to exchange data. While fig. 11 illustrates an electronic device 1100 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts, thereby implementing the face image correction method as described above. In such an embodiment, the computer program may be downloaded and installed from a network via communications device 1109, or from storage device 1108, or from ROM 1102. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 1101.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some embodiments, the clients, servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol ), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), the internet (e.g., the internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
Detecting the key points of the human face on the obtained human face image to obtain a detection result of the key points of the human face;
obtaining depth information corresponding to each face key point based on the face image and the face key point detection result;
Correcting the detection result of the key points of the human face according to the depth information to obtain the correction result of the key points of the human face, wherein the correction processing comprises expansion deformation correction processing and compression deformation correction processing;
and obtaining a corrected face image according to the face key point correction result and the face image.
Alternatively, the electronic device may perform other steps described in the above embodiments when the above one or more programs are executed by the electronic device.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, an object oriented programming language such as Java, smalltalk, C ++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In the embodiment of the invention, face key point detection is performed on an obtained face image to obtain a face key point detection result, depth information corresponding to each face key point is obtained based on the face image and the face key point detection result, correction processing is performed on the face key point detection result according to the depth information to obtain a face key point correction result, and finally the corrected face image is obtained according to the face key point correction result and the face image. Compared with the related art, the method for correcting the image expansion deformation and compression deformation is provided, the five sense organs in the image can be restored to the form perceived by people in the real world, the appearance of the photo is perfected, the photo is more natural, and the user experience is enhanced.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (14)

1. A face image correction method, the method comprising:
Detecting the key points of the human face on the obtained human face image to obtain a detection result of the key points of the human face;
based on the face image and the face key point detection result, obtaining depth information corresponding to each face key point;
Correcting the detection result of the key points of the human face according to the depth information to obtain a correction result of the key points of the human face, wherein the correction processing comprises expansion deformation correction processing and compression deformation correction processing;
and obtaining a corrected face image according to the face key point correction result and the face image.
2. The face image correction method according to claim 1, wherein before the face key point detection is performed on the obtained face image to obtain the face key point detection result, the method further comprises:
Acquiring an image in real time;
and determining an image containing a face in a preset position as the face image.
3. The face image correction method according to claim 2, wherein the face key point detection is performed on the obtained face image to obtain a face key point detection result, comprising:
Performing face key point detection and face segmentation on the face image to obtain facial feature recognition points and a face contour map;
and determining the facial feature recognition points and the facial contour map as the facial key point detection result.
4. The face image correction method according to claim 1, wherein the correcting the face key point detection result according to the depth information to obtain a face key point correction result includes:
If the depth information is smaller than or equal to a preset value, performing expansion deformation correction processing on the face key point detection result according to the depth information to obtain a face key point correction result;
And if the depth information is larger than a preset value, performing compression deformation correction processing on the face key point detection result according to the depth information to obtain a face key point correction result.
5. The face image correction method according to claim 4, wherein the obtaining the corrected face image according to the face key point correction result and the face image includes:
determining a mapping relation of the face image based on the face key point correction result and the face key point, wherein the mapping relation is used for representing the corresponding position relation between the face key point and the face key point correction result;
And correcting the face image according to the mapping relation to obtain a corrected face image.
6. The face image correction method according to claim 3 or 5, wherein the correcting the face image according to the mapping relation to obtain a corrected face image includes:
determining a non-face area in the face image based on the face contour map;
Correcting the mapping relation according to the non-face area;
and correcting the face image according to the corrected mapping relation to obtain a corrected face image.
7. A face image correction apparatus, comprising:
The detection unit is used for carrying out face key point detection on the obtained face image to obtain a face key point detection result;
the first processing unit is used for obtaining depth information corresponding to each face key point based on the face image and the face key point detection result;
The second processing unit is used for carrying out correction processing on the face key point detection result according to the depth information to obtain a face key point correction result, wherein the correction processing comprises expansion deformation correction processing and compression deformation correction processing;
and the correction unit is used for obtaining a corrected face image according to the face key point correction result and the face image.
8. The face image correction apparatus according to claim 7, wherein the detection unit is further configured to:
Acquiring an image in real time;
and determining an image containing a face in a preset position as the face image.
9. The facial image correction apparatus according to claim 8, wherein the detection unit is specifically configured to:
Performing face key point detection and face segmentation on the face image to obtain facial feature recognition points and a face contour map;
and determining the facial feature recognition points and the facial contour map as the facial key point detection result.
10. The facial image correction apparatus according to claim 7, wherein the second processing unit is specifically configured to:
If the depth information is smaller than or equal to a preset value, performing expansion deformation correction processing on the face key point detection result according to the depth information to obtain a face key point correction result;
And if the depth information is larger than a preset value, performing compression deformation correction processing on the face key point detection result according to the depth information to obtain a face key point correction result.
11. The facial image correction apparatus according to claim 10, wherein the correction unit is specifically configured to:
Determining a mapping relation for correcting the face image based on the face key point correction result and the face key point, wherein the mapping relation is used for representing the corresponding position relation between the face key point and the face key point correction result;
And correcting the face image according to the mapping relation to obtain a corrected face image.
12. The facial image correction apparatus according to claim 9 or 11, wherein the correction unit is specifically configured to:
determining a non-face area in the face image based on the face contour map;
Correcting the mapping relation according to the non-face area;
and correcting the face image according to the corrected mapping relation to obtain a corrected face image.
13. An electronic device, comprising: at least one processor, at least one memory, and computer program instructions stored in the memory, which when executed by the processor, implement the method of any one of claims 1-6.
14. A computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the method of any of claims 1-6.
CN202211690325.8A 2022-12-27 2022-12-27 Face image correction method, device, equipment and medium Pending CN118262392A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211690325.8A CN118262392A (en) 2022-12-27 2022-12-27 Face image correction method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211690325.8A CN118262392A (en) 2022-12-27 2022-12-27 Face image correction method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN118262392A true CN118262392A (en) 2024-06-28

Family

ID=91606282

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211690325.8A Pending CN118262392A (en) 2022-12-27 2022-12-27 Face image correction method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN118262392A (en)

Similar Documents

Publication Publication Date Title
CN109584151B (en) Face beautifying method, device, terminal and storage medium
CN110766777B (en) Method and device for generating virtual image, electronic equipment and storage medium
CN112989904B (en) Method for generating style image, method, device, equipment and medium for training model
EP3992919B1 (en) Three-dimensional facial model generation method and apparatus, device, and medium
JP2023515654A (en) Image optimization method and device, computer storage medium, computer program, and electronic equipment
WO2022068451A1 (en) Style image generation method and apparatus, model training method and apparatus, device, and medium
CN111507333B (en) Image correction method and device, electronic equipment and storage medium
CN109754464B (en) Method and apparatus for generating information
CN112733820B (en) Obstacle information generation method and device, electronic equipment and computer readable medium
WO2023035531A1 (en) Super-resolution reconstruction method for text image and related device thereof
CN111723707A (en) Method and device for estimating fixation point based on visual saliency
US20240221126A1 (en) Image splicing method and apparatus, and device and medium
CN110837332A (en) Face image deformation method and device, electronic equipment and computer readable medium
CN113920023B (en) Image processing method and device, computer readable medium and electronic equipment
CN112991208B (en) Image processing method and device, computer readable medium and electronic equipment
JP6202938B2 (en) Image recognition apparatus and image recognition method
WO2023193613A1 (en) Highlight shading method and apparatus, and medium and electronic device
CN109816791B (en) Method and apparatus for generating information
CN116134476A (en) Plane correction method and device, computer readable medium and electronic equipment
CN114418835B (en) Image processing method, device, equipment and medium
CN108256477B (en) Method and device for detecting human face
CN118262392A (en) Face image correction method, device, equipment and medium
CN110097622B (en) Method and device for rendering image, electronic equipment and computer readable storage medium
CN114040129A (en) Video generation method, device, equipment and storage medium
CN116310615A (en) Image processing method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination