CN113870148A - Face distortion correction method and device, electronic equipment, chip and storage medium - Google Patents


Info

Publication number
CN113870148A
Authority
CN
China
Prior art keywords
face
corrected
distortion correction
distortion
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111214281.7A
Other languages
Chinese (zh)
Inventor
朱达祥
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202111214281.7A
Publication of CN113870148A
Legal status: Pending

Classifications

    • G06T5/80
    • G06T7/10 Segmentation; Edge detection (under G06T7/00 Image analysis)
    • G06T2207/10004 Still image; Photographic image (under G06T2207/10 Image acquisition modality)
    • G06T2207/30201 Face (under G06T2207/30196 Human being; Person)

Abstract

The present application relates to the technical field of image processing and provides a face distortion correction method and apparatus, an electronic device, a chip, and a storage medium. The face distortion correction method comprises the following steps: acquiring a face region to be corrected in an image to be processed; acquiring the pose information of the face to be corrected in the face region to be corrected; and, when the face to be corrected is in a perspective distortion scene, performing distortion correction on the face region to be corrected according to the pose information of the face to be corrected. By means of the method and apparatus, distortion correction can be performed on a face in an image.

Description

Face distortion correction method and device, electronic equipment, chip and storage medium
Technical Field
The present application belongs to the field of image processing technologies, and in particular, to a method and an apparatus for correcting face distortion, an electronic device, a chip, and a storage medium.
Background
With the popularization of electronic devices, the camera on an electronic device may be a wide-angle camera, and photographing with a wide-angle camera yields an image with a larger field of view. However, due to the characteristics of a wide-angle camera, the closer a region is to the edge of the image, the more severe its distortion. When a face is present in the image and lies at the image edge, the face is severely distorted, i.e., an unrealistic face appears in the image.
Disclosure of Invention
The embodiment of the application provides a face distortion correction method, a face distortion correction device, electronic equipment, a chip and a storage medium, so as to correct the distortion of a face in an image.
In a first aspect, an embodiment of the present application provides a face distortion correction method, including:
acquiring a face area to be corrected in an image to be processed;
acquiring the posture information of the face to be corrected in the face area to be corrected;
and under the condition that the face to be corrected is in a perspective distortion scene, carrying out distortion correction on the face area to be corrected according to the posture information of the face to be corrected.
In the embodiment of the present application, by acquiring the face region to be corrected in the image to be processed and the pose information of the face to be corrected in that region, distortion correction can be performed on the face region to be corrected according to the pose information when the face to be corrected is in a perspective distortion scene, thereby achieving distortion correction of the face in the image to be processed.
In a second aspect, an embodiment of the present application provides a face distortion correction apparatus, including:
the region acquisition module is used for acquiring a face region to be corrected in the image to be processed;
the posture acquisition module is used for acquiring the posture information of the face to be corrected in the face area to be corrected;
and the distortion correction module is used for carrying out distortion correction on the face area to be corrected according to the posture information of the face to be corrected under the condition that the face to be corrected is in a perspective distortion scene.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the steps of the face distortion correction method according to the first aspect when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a chip, which includes a processor, and the processor is configured to read and execute a computer program stored in a memory to perform the steps of the face distortion correction method according to the first aspect.
Optionally, the memory and the processor are connected by a circuit or a wire.
In a fifth aspect, the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the steps of the face distortion correction method according to the first aspect.
In a sixth aspect, the present application provides a computer program product, which when run on an electronic device, causes the electronic device to execute the steps of the face distortion correction method according to the first aspect.
It is to be understood that the second to sixth aspects provided above all serve to execute the corresponding method provided in the first aspect; for the beneficial effects they achieve, reference may be made to the beneficial effects of the corresponding method, which are not repeated here.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings required by the embodiments or the prior-art description are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1a is an exemplary diagram of an image to be processed with face distortion;
FIG. 1b is an exemplary diagram of the result of the global correction of FIG. 1 a;
FIG. 1c is an exemplary graph of the results of a partial correction of FIG. 1 a;
fig. 2 is a schematic flow chart illustrating an implementation of a face distortion correction method according to an embodiment of the present application;
FIG. 3a is a diagram of an example of a camera setup;
FIG. 3b is an exemplary diagram of an image to be processed with face distortion displayed in a preview area;
FIG. 3c is another exemplary diagram of a camera setup;
FIG. 3d is a diagram of yet another example of a camera setup;
FIG. 3e is an exemplary diagram of FIG. 3b after face distortion correction;
fig. 4 is an exemplary diagram of a face region to be corrected;
fig. 5 is a schematic flow chart illustrating an implementation of a face distortion correction method according to a second embodiment of the present application;
fig. 6 is a schematic flow chart illustrating an implementation of a face distortion correction method according to a third embodiment of the present application;
FIG. 7 is an exemplary diagram of a face box;
fig. 8 is a schematic structural diagram of a face distortion correction apparatus according to a fourth embodiment of the present application;
fig. 9 is a schematic structural diagram of an electronic device provided in this application embodiment five.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
The face distortion correction method provided by the embodiment of the application can be applied to electronic devices such as a mobile phone, a tablet personal computer, a wearable device, a vehicle-mounted device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), and the like, and the embodiment of the application does not limit the specific types of the electronic devices at all.
It should be understood that, the sequence numbers of the steps in this embodiment do not mean the execution sequence, and the execution sequence of each process should be determined by the function and the inherent logic of the process, and should not constitute any limitation to the implementation process of the embodiment of the present application.
When a user takes a self-portrait at close range with a wide-angle camera and the face occupies a large portion of the frame, a face near the image edge is usually stretched and deformed, producing distortion. Fig. 1a shows an example of an image to be processed with face distortion. In fig. 1a, 1b, 1c, 4 and 7, reference numeral 10 denotes the image to be processed; in fig. 1a, 1c, 4 and 7, reference numeral 101 denotes a straight line in the background of the image to be processed.
In order to achieve distortion correction of a face in an image, a conventional scheme uses global correction, i.e., distortion correction of the whole image. Although this corrects the face, straight lines in the background (especially long ones) are easily bent, resulting in image distortion. Fig. 1b shows an example of the result of globally correcting fig. 1a: the face distortion is corrected, but the straight line 101 of fig. 1a is bent, and 102 in fig. 1b denotes the resulting curve.
To reduce the probability of bending background straight lines while correcting face distortion, the embodiment of the present application provides a face distortion correction method that performs distortion correction on the face region to be corrected using the pose information of the face to be corrected. This realizes a local correction: no distortion correction is applied to other regions (i.e., regions of the image to be processed outside the face region to be corrected), so that the face can be corrected while straight lines in the other regions are left unbent. Fig. 1c shows an example of the result of locally correcting fig. 1a.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
Fig. 2 is a schematic view of an implementation flow of the face distortion correction method according to an embodiment of the present application. As shown in fig. 2, the face distortion correction method may include the steps of:
step 201, obtaining a face area to be corrected in an image to be processed.
The image to be processed may be an image stored in the electronic device, or an image sent to the electronic device by another device, or an image acquired in real time by a shooting device in the electronic device (for example, an image displayed in a preview area in a shooting interface of a mobile phone), which is not limited herein. The camera may be any device capable of capturing images, such as a wide-angle camera in a mobile phone.
Taking a wide-angle camera in a mobile phone as an example, as shown in fig. 3a, there is no option of "face distortion correction" in the camera setting, and when a close-distance self-timer shooting is performed by the wide-angle camera, the face in the image to be processed displayed in the preview area as shown in fig. 3b is easily distorted.
The embodiment of the present application adds a "face distortion correction" option to the camera settings, so that when a user takes a close-range self-portrait with a wide-angle camera, distortion correction is applied to any distorted face in the captured image and the corrected image is displayed in the preview area. Fig. 3c shows another example of the camera settings, in which the "face distortion correction" option is off; in this state the mobile phone does not perform face distortion correction on the image captured by the wide-angle camera, and the preview area shows the image of fig. 3b. Fig. 3d shows a further example, in which the option is on; in this state the mobile phone performs face distortion correction on the captured image to reduce face distortion, and fig. 3e shows an example of fig. 3b after face distortion correction.
The "shutter sound" option, "geographical location" option, "video coding" option, "photo watermarking" option, etc. in fig. 3a, 3c, and 3d are conventional options of cameras, and the conventional options will not be explained here.
The face region to be corrected in the image to be processed may refer to a region in the image to be processed, where face distortion correction is required. Fig. 4 is a diagram illustrating an example of a face region to be corrected. The region surrounded by 103 in fig. 4 is the face region to be corrected.
After the electronic device obtains the image to be processed, it can detect whether a face exists in the image. If a face exists, face segmentation is performed on the image to be processed to obtain the face region to be corrected; if no face exists, no face distortion correction of the image is needed. Whether a face exists can be detected by a face detection algorithm, for example one based on binary wavelet transform, or one based on histogram rough segmentation and singular-value features. The image to be processed may be segmented by a face segmentation algorithm, such as one based on a convolutional neural network model.
It should be noted that, when at least two faces exist in the image to be processed, each face corresponds to a face region to be corrected. For example, if three faces exist in the image to be processed, three face regions to be corrected can be obtained by performing face segmentation on the image to be processed.
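The region-acquisition step above can be sketched in a few lines. The detector interface below is a placeholder (the patent names no specific detector); the example shows only how each detected face box yields its own region to be corrected, and how an image without faces yields none.

```python
import numpy as np

def get_face_regions(image, detect_faces):
    """Return one region to be corrected per detected face, or [] when
    the image contains no face (no correction needed in that case).
    `detect_faces` is any detector returning (x, y, w, h) boxes; it is
    a stand-in here, not an API from the patent."""
    regions = []
    H, W = image.shape[:2]
    for (x, y, w, h) in detect_faces(image):
        # Each face gets its own region (step 201); clamp to the image.
        x0, y0 = max(0, x), max(0, y)
        x1, y1 = min(W, x + w), min(H, y + h)
        regions.append(image[y0:y1, x0:x1])
    return regions

# Toy usage with a stub detector that "finds" two faces:
img = np.zeros((480, 640, 3), dtype=np.uint8)
stub = lambda im: [(10, 20, 100, 100), (300, 50, 120, 120)]
rois = get_face_regions(img, stub)
```

With three detected faces the same call would return three regions, matching the one-region-per-face note above.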
Step 202, obtaining the pose information of the face to be corrected in the face area to be corrected.
The pose information of the face to be corrected includes, but is not limited to, the pitch angle and the yaw angle of the face to be corrected. The pitch angle is the angle of rotation about the X axis (pointing toward the right side of the face) and appears as the head tilting downward or upward. The yaw angle is the angle of rotation about the Y axis (pointing downward from the face) and appears as the head turning horizontally to the left or right.
In this embodiment, after the face region to be corrected is obtained, the face key point information of the face to be corrected can be determined, and the pose information of the face can be calculated from it. Face key points are points that represent facial features, such as the eyes, nose tip, mouth corners, eyebrows, and the contour points of each part of the face. Face key point information includes, but is not limited to, the positions of the key points in the image to be processed. The face key point information can be determined by a face key point detection algorithm, for example one based on the Practical Facial Landmark Detector (PFLD).
And 203, under the condition that the face to be corrected is in the perspective distortion scene, performing distortion correction on the face area to be corrected according to the posture information of the face to be corrected.
Here, a perspective distortion scene may refer to a scene in which the photographed object undergoes stretch deformation (i.e., distortion) compared with a standard object. For example, when the subject is a human face, if the face in the captured image is stretched and deformed compared with a standard face, the face is judged to be in a perspective distortion scene.
In this embodiment, the pose information of the face to be corrected can represent the stretch deformation of the face, i.e., its distortion. Therefore, according to the pose information, distortion correction can be performed on the face region to be corrected, achieving distortion correction of the face in the image to be processed, so that the corrected image shows more realistic face information.
According to the method and the device, the face area to be corrected in the image to be processed and the posture information of the face to be corrected in the face area to be corrected are obtained, distortion correction can be conducted on the face area to be corrected according to the posture information of the face to be corrected under the condition that the face to be corrected is in a perspective distortion scene, and therefore distortion correction of the face in the image to be processed is achieved.
Fig. 5 is a schematic view of an implementation flow of the face distortion correction method provided in the second embodiment of the present application. As shown in fig. 5, the face distortion correction method may include the steps of:
step 501, obtaining a face area to be corrected in an image to be processed.
The step is the same as step 201, and reference may be made to the related description of step 201, which is not described herein again.
Step 502, obtaining a pitch angle and a yaw angle of a face to be corrected in the face area to be corrected.
The meaning of this step is the same as that of step 202, and reference may be made to the related description of step 202, which is not repeated herein.
Step 503, determining a face distortion correction parameter according to the internal reference of the shooting device and the pitch angle and the yaw angle of the face to be corrected.
The shooting device in step 503 is a device that collects the image to be processed.
The internal parameters of the photographing device are parameters related to the characteristics of the photographing device itself, such as the focal length of the photographing device, the pixel size, and the like.
The face distortion correction parameters refer to parameters for performing distortion correction on a face region to be corrected. The distortion correction of the face area to be corrected can be understood as correcting the face area to be corrected from the current posture of the face to be corrected to the standard posture. For example, the pose of the face orthographic camera may be taken as the standard pose.
In the present embodiment, the face distortion correction parameter can be calculated by the following formula:

H = K·θ_P·θ_Y·K⁻¹

where H denotes the face distortion correction parameter, K denotes the intrinsic matrix of the shooting device, θ_P denotes the pitch angle of the face to be corrected, and θ_Y denotes the yaw angle of the face to be corrected.
Since a homography matrix represents a planar transformation, if the face to be corrected is assumed to be a plane, the face distortion correction parameter can be represented by a homography matrix, which then describes the transformation between the current pose and the standard pose of the face to be corrected.
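The formula above can be sketched directly in numpy. Reading θ_P and θ_Y as the rotation matrices built from the pitch and yaw angles (an interpretation — the patent writes the angles themselves in the product), and taking K as the 3×3 intrinsic matrix:

```python
import numpy as np

def face_correction_homography(K, pitch, yaw):
    """Sketch of H = K · θ_P · θ_Y · K⁻¹, with θ_P, θ_Y interpreted as
    the rotations about the X and Y axes for the given pitch and yaw.
    The example intrinsics below are illustrative values only."""
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rp = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # about X
    Ryw = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])  # about Y
    return K @ Rp @ Ryw @ np.linalg.inv(K)

# Illustrative intrinsics: focal length 800 px, principal point (320, 240).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
H = face_correction_homography(K, 0.0, 0.0)  # zero pose -> no correction
```

A sanity check of the structure: with zero pitch and yaw the rotations are identity, so H = K·I·K⁻¹ = I and the face region is left unchanged, as expected for a face already in the standard pose.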
And step 504, performing distortion correction on the face area to be corrected according to the face distortion correction parameters to obtain a corrected face area.
In this embodiment, the face region to be corrected may be updated or adjusted based on the face distortion correction parameter in combination with a formula or equation established in advance, so as to realize distortion correction of the face region to be corrected, and obtain the corrected face region.
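One hedged way to realize this update step is to remap the pixel coordinates of the face region through the correction parameter H, treating H as a homography acting on homogeneous coordinates (the patent leaves the exact formula or equation unspecified):

```python
import numpy as np

def warp_points(H, pts):
    """Apply homography H to an (N, 2) array of pixel coordinates.
    Each coordinate of the face region to be corrected is mapped to
    its corrected position; an image warp would then resample the
    region at these positions."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((pts.shape[0], 1))])  # (x, y) -> (x, y, 1)
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]  # back from homogeneous coords

# The identity homography leaves every coordinate where it was:
pts = np.array([[10.0, 20.0], [320.0, 240.0]])
out = warp_points(np.eye(3), pts)
```

In practice one would apply such a remapping with an image-warping routine (e.g. a perspective warp) rather than point by point; the point form just makes the coordinate transformation explicit.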
As an alternative embodiment, after obtaining the corrected face area, the method further includes:
updating an optimization function according to the face area to be corrected and the corrected face area, wherein the optimization function is a function which takes the difference between the face area to be corrected and the corrected face area as a target to be minimized;
and when the optimization function takes the minimum value, obtaining the optimized face area.
In addition to the face to be corrected, there may also be straight lines in the face region to be corrected. For example, when a user photographs a building, a door, or a window, or uses one as the background, the wall edges and corners of the building and the frames or partitions of the door or window all contain straight lines.
Through the optimization function, a balance can be struck between the face and the straight lines in the face region to be corrected, so that face distortion correction is achieved while the bending of straight lines caused by the correction is reduced.
The optimization function includes, but is not limited to, optimization terms such as a face constraint term, a line-angle consistency term, and a regularization term, and can be constructed as a weighted sum of these terms. The face constraint term constrains the positional offset between the face region to be corrected and the corrected face region. The line-angle consistency term constrains the positional offset between a straight line in the face region to be corrected and the corresponding line in the corrected face region (which may be called a curve if bending occurs). The regularization term preserves facial proportions, i.e., keeps them consistent before and after correction; for example, the ratio of the eye-to-mouth distance to the face length remains 33% before and after correction, and the ratio of the inter-eye distance to the face width remains 42%.
It should be noted that the face constraint term, line-angle consistency term, and regularization term merely exemplify the optimization function; this embodiment does not limit the optimization terms that the function may include.
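A toy sketch of such a weighted-sum objective over grid vertices, keeping only the face constraint term and the regularization term (the line-angle consistency term is omitted here, and the weights and quadratic form are assumptions of this sketch, not the patent's):

```python
import numpy as np

def objective(V, V_corrected, V_original, w_face=1.0, w_reg=0.1):
    """Weighted sum of two quadratic terms: a face constraint term
    pulling vertices V toward the distortion-corrected positions, and
    a regularization term keeping them near the original layout."""
    face_term = np.sum((V - V_corrected) ** 2)
    reg_term = np.sum((V - V_original) ** 2)
    return w_face * face_term + w_reg * reg_term

def minimize(V_corrected, V_original, w_face=1.0, w_reg=0.1):
    """With only these two quadratic terms, the minimizer is the
    weighted average of the corrected and original vertex positions;
    richer objectives would need an iterative solver."""
    return (w_face * V_corrected + w_reg * V_original) / (w_face + w_reg)

# One vertex: corrected position (2, 0), original position (0, 0).
V_corrected = np.array([[2.0, 0.0]])
V_original = np.array([[0.0, 0.0]])
V_star = minimize(V_corrected, V_original)
```

The minimizer sits between the corrected and original positions, which is exactly the "balance" between face correction and the rest of the region described above.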
As an alternative embodiment, after obtaining the optimized face region, the method further includes:
and interpolating the optimized grids to obtain a target correction result of the face area to be corrected, wherein the optimized grids are used for describing the optimized face area.
In this optional embodiment, a grid may be established over the image to be processed, for example a low-resolution grid representing coordinate points at each position in the image. Based on this grid, a grid to be corrected corresponding to the face region to be corrected can be determined, and distortion correction can be performed on the grid to be corrected according to the face distortion correction parameter, yielding a corrected grid corresponding to the corrected face region. The optimization function is then updated according to the grid to be corrected and the corrected grid; when the optimization function takes its minimum value, an optimized grid corresponding to the optimized face region is obtained, and the target correction result of the face region to be corrected is obtained by interpolating the optimized grid. The target correction result is thus the correction result obtained by interpolating the optimized grid.
It should be noted that, in this embodiment, the interpolation algorithm of the optimized grid is not limited, and may be bilinear interpolation, bicubic interpolation, or the like, for example.
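Of the interpolation choices mentioned, bilinear is the simplest; a minimal sketch that upsamples a low-resolution displacement grid to a dense per-pixel map (the grid layout with two channels per vertex is an assumption of this sketch):

```python
import numpy as np

def bilinear_upsample(grid, out_h, out_w):
    """Bilinearly interpolate a low-resolution (gh, gw, 2) grid to a
    dense (out_h, out_w, 2) map -- one way to turn the optimized grid
    into a per-pixel target correction result."""
    gh, gw = grid.shape[:2]
    ys = np.linspace(0, gh - 1, out_h)      # fractional grid rows
    xs = np.linspace(0, gw - 1, out_w)      # fractional grid columns
    y0 = np.floor(ys).astype(int).clip(0, gh - 2)
    x0 = np.floor(xs).astype(int).clip(0, gw - 2)
    fy = (ys - y0)[:, None, None]           # vertical blend weights
    fx = (xs - x0)[None, :, None]           # horizontal blend weights
    top = grid[y0][:, x0] * (1 - fx) + grid[y0][:, x0 + 1] * fx
    bot = grid[y0 + 1][:, x0] * (1 - fx) + grid[y0 + 1][:, x0 + 1] * fx
    return top * (1 - fy) + bot * fy

# A 2x2 grid whose first channel ramps from 0 to 1 left to right:
grid = np.zeros((2, 2, 2))
grid[..., 0] = [[0.0, 1.0], [0.0, 1.0]]
dense = bilinear_upsample(grid, 2, 3)
```

Interpolating the ramp at three columns yields the expected midpoints, mirroring how the dense correction map varies smoothly between grid vertices.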
According to the embodiment of the application, the face distortion correction parameters can be determined according to the internal parameters of the shooting device and the pitch angle and the yaw angle of the face to be corrected, and the face area to be corrected can be updated or adjusted according to the face distortion correction parameters, so that the distortion correction of the face area to be corrected is realized.
Fig. 6 is a schematic view of an implementation flow of the face distortion correction method provided in the third embodiment of the present application. As shown in fig. 6, the face distortion correction method may include the steps of:
step 601, obtaining a face area to be corrected in an image to be processed.
The step is the same as step 201, and reference may be made to the related description of step 201, which is not described herein again.
Step 602, obtaining a reference angle of a face to be corrected in the face area to be corrected.
The reference angle of the face to be corrected refers to at least one of a pitch angle and a yaw angle of the face to be corrected, and specific obtaining methods can refer to the related description of the first embodiment, and are not described herein again.
Step 603, judging whether the face to be corrected belongs to a reference face close to the shooting device.
The reference face refers to a face whose face size exceeds a preset threshold. For example, in the image to be processed with the resolution of 640 × 480, 200 × 200 may be set as a preset threshold, the face to be corrected is determined as the reference face when the face size of the face to be corrected exceeds 200 × 200, and the face to be corrected is determined as not the reference face when the face size of the face to be corrected does not exceed 200 × 200.
As an alternative embodiment, judging whether the face to be corrected belongs to a reference face close to the shooting device includes:
acquiring the face size of the face to be corrected and the density of its face key points;
if the face size of the face to be corrected exceeds a preset threshold and the density of its face key points does not exceed a preset density, judging that the face to be corrected belongs to a reference face close to the shooting device;
if the face size of the face to be corrected does not exceed the preset threshold, or the density of its face key points exceeds the preset density, judging that the face to be corrected does not belong to a reference face close to the shooting device.
If the real face corresponding to the face to be corrected is close to the shooting device, the face to be corrected is stretched and deformed in the image to be processed, and this stretch deformation reduces the density of its face key points. On this basis, whether the face to be corrected is close to the shooting device can be judged by comparing the density of its face key points with the preset density: if the density does not exceed the preset density, the face can be judged to be close to the shooting device; if the density exceeds the preset density, the face can be judged not to be close to the shooting device.
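The two-part judgment can be condensed into a small predicate. The default values echo the 200×200 size threshold and the 5-pixel spacing examples in the text, but combining them this way is a sketch, not the patent's exact procedure; density is represented by key-point spacing, so larger spacing means lower density.

```python
def is_reference_face(face_w, face_h, keypoint_spacing,
                      size_threshold=(200, 200), min_spacing=5.0):
    """Judge whether a face counts as a reference face close to the
    camera: its size must exceed the preset threshold AND its key
    points must not be denser than the preset density (i.e. their
    spacing must exceed `min_spacing`)."""
    big_enough = face_w > size_threshold[0] and face_h > size_threshold[1]
    sparse_enough = keypoint_spacing > min_spacing
    return big_enough and sparse_enough
```

A 240×240 face with 8-pixel key-point spacing passes both checks; a 150-pixel-wide face fails on size, and a 240×240 face with 3-pixel spacing fails on density.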
As an alternative embodiment, the face to be corrected is framed by a face frame, and the acquiring of the density degree of the face key points of the face to be corrected includes:
determining the distance between face key points on adjacent borders of the face frame according to the face key point information on the adjacent borders, and expressing the density degree of the face key points of the face to be corrected by the distance between the face key points on the adjacent borders.
In this optional embodiment, a distance threshold may be used to represent the preset density degree. If the distance between face key points on adjacent borders exceeds the distance threshold, it can be judged that the density degree of the face key points of the face to be corrected does not exceed the preset density degree; if the distance between the face key points on adjacent borders does not exceed the distance threshold, it can be judged that the density degree of the face key points of the face to be corrected exceeds the preset density degree.
The face to be corrected can be framed by a face frame in the image to be processed. Fig. 7 shows an exemplary face frame: the rectangle around the face represents the face frame, 1031 denotes the right border of the face frame, 1032 denotes the lower border of the face frame, the right border and the lower border are adjacent borders, and the white circles around the face represent face key points. Taking the right border, the lower border, and the four face key points in fig. 7 as an example, the distance between face key points on adjacent borders may be the distance between a face key point on the right border and a face key point on the lower border. The right border carries two face key points, so the distance from each of these two key points to the face key points on the lower border can be calculated respectively. If any of the calculated distances exceeds the distance threshold, it can be judged that the density degree of the face key points of the face to be corrected does not exceed the preset density degree; if none of the calculated distances exceeds the distance threshold, it can be judged that the density degree of the face key points of the face to be corrected exceeds the preset density degree. The distance threshold may be set to the maximum distance between face key points on adjacent borders when the face is in a standard posture, for example 5 pixels.
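The pairwise distance check between key points on the two adjacent borders can be sketched as below. The function name and the point-list representation are assumptions; the 5-pixel default follows the example above.

```python
import math

def keypoints_are_sparse(right_border_pts, lower_border_pts, dist_thresh=5.0):
    """Return True (density does NOT exceed the preset density) if any pair
    of key points taken from the right border and the lower border of the
    face frame is farther apart than the distance threshold.

    Points are (x, y) pixel coordinates; dist_thresh defaults to the
    5-pixel example value.
    """
    for (x1, y1) in right_border_pts:
        for (x2, y2) in lower_border_pts:
            if math.hypot(x2 - x1, y2 - y1) > dist_thresh:
                return True  # at least one distance exceeds the threshold
    return False  # all distances within the threshold: key points are dense
```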
Step 604, if the reference angle of the face to be corrected exceeds the preset angle and the face to be corrected belongs to the reference face close to the shooting device, determining that the face to be corrected is in the perspective distortion scene.
In this embodiment, if the reference angle of the face to be corrected exceeds the preset angle and the face to be corrected belongs to a reference face close to the shooting device, it can be determined that the face to be corrected is stretched and deformed, that is, the face to be corrected is in a perspective distortion scene, and distortion correction needs to be performed on the face region to be corrected according to the posture information of the face to be corrected, so as to effectively solve the perspective distortion problem in this scene.
In another embodiment, if the reference angle of the face to be corrected does not exceed the preset angle, or the face to be corrected does not belong to the reference face close to the shooting device, it is determined that the face to be corrected is not in the perspective distortion scene.
If the face to be corrected is not in the perspective distortion scene, it indicates that the face to be corrected is not subjected to stretching deformation, and distortion correction is not required.
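The overall scene decision of steps 604 and the alternative branch can be sketched as follows. The reference angle is at least one of the pitch and yaw angles, as the description states; the 30-degree preset angle is an assumed value, not one given in the patent.

```python
def in_perspective_distortion_scene(pitch_deg, yaw_deg, is_reference_face,
                                    angle_thresh_deg=30.0):
    """Decide whether the face to be corrected is in a perspective
    distortion scene.

    The reference angle check passes if either pitch or yaw exceeds the
    preset angle; the face must also be a reference face close to the
    shooting device. angle_thresh_deg is an assumed example threshold.
    """
    reference_angle_exceeded = (abs(pitch_deg) > angle_thresh_deg
                                or abs(yaw_deg) > angle_thresh_deg)
    return reference_angle_exceeded and is_reference_face
```

If either condition fails, the face is not in a perspective distortion scene and no correction is needed.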
Step 605: performing distortion correction on the face region to be corrected according to the posture information of the face to be corrected.
This step is partly the same as step 203; for the same parts, reference may be made to the related description of step 203, which is not repeated here.
In the embodiment of the present application, whether the face to be corrected is in a perspective distortion scene can be judged based on the reference angle of the face to be corrected and on whether the face to be corrected belongs to a reference face close to the shooting device; when the face to be corrected is judged to be in a perspective distortion scene, distortion correction of the face region to be corrected can be realized based on the posture information of the face to be corrected.
Fig. 8 is a schematic structural diagram of a face distortion correction apparatus according to the fourth embodiment of the present application, and for convenience of description, only the relevant portions of the embodiment of the present application are shown.
The face distortion correction apparatus includes:
the region acquiring module 81 is configured to acquire a face region to be corrected in the image to be processed;
the pose acquisition module 82 is used for acquiring pose information of a face to be corrected in the face area to be corrected;
and the distortion correction module 83 is configured to, when the face to be corrected is in a perspective distortion scene, perform distortion correction on the face region to be corrected according to the pose information of the face to be corrected.
Optionally, the pose information of the face to be corrected includes a pitch angle and a yaw angle of the face to be corrected; the distortion correction module 83 is specifically configured to:
determining face distortion correction parameters according to internal parameters of a shooting device, and the pitch angle and the yaw angle of the face to be corrected, wherein the shooting device is a device for collecting the image to be processed;
and carrying out distortion correction on the face area to be corrected according to the face distortion correction parameters to obtain a corrected face area.
Optionally, the face distortion correction apparatus further includes:
an optimization updating module, configured to update an optimization function according to the face region to be corrected and the corrected face region, where the optimization function is a function that aims at minimizing a difference between the face region to be corrected and the corrected face region;
and the function processing module is used for obtaining the optimized face area when the optimization function takes the minimum value.
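The optimization objective — minimizing the difference between the face region to be corrected and the corrected face region — can be sketched as a simple quadratic problem over mesh vertices. The quadratic form, the regularization weight, and the closed-form solution below are all assumptions for illustration; the patent specifies only the minimization goal.

```python
import numpy as np

def optimize_mesh(v_orig, v_corrected, lam=0.5):
    """Assumed sketch of the optimization step.

    Minimizes  ||V - v_corrected||^2 + lam * ||V - v_orig||^2  over mesh
    vertices V, pulling the corrected face back toward the original region
    to reduce the difference between the two. Setting the gradient to zero
    gives the closed-form blend below.
    """
    return (v_corrected + lam * v_orig) / (1.0 + lam)
```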
Optionally, the face distortion correction apparatus further includes:
and the grid interpolation module is used for interpolating the optimized grid to obtain a target correction result of the face area to be corrected, wherein the optimized grid is used for describing the optimized face area.
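Interpolating the coarse optimized grid into a dense per-pixel map — which can then drive a remapping of the face region — can be sketched with bilinear interpolation. The function name and the grid layout (gh × gw × 2 array of target coordinates) are assumptions.

```python
import numpy as np

def densify_grid(grid, out_h, out_w):
    """Bilinearly interpolate a coarse optimized grid to a dense warp map.

    grid: (gh, gw, 2) array of target coordinates at grid vertices.
    Returns an (out_h, out_w, 2) array with one interpolated coordinate
    pair per output pixel.
    """
    gh, gw, _ = grid.shape
    # Fractional grid positions for every output row/column.
    ys = np.linspace(0, gh - 1, out_h)
    xs = np.linspace(0, gw - 1, out_w)
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, gh - 1); x1 = np.minimum(x0 + 1, gw - 1)
    wy = (ys - y0)[:, None, None]; wx = (xs - x0)[None, :, None]
    # Blend the four surrounding grid vertices.
    top = grid[y0][:, x0] * (1 - wx) + grid[y0][:, x1] * wx
    bot = grid[y1][:, x0] * (1 - wx) + grid[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy
```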
Optionally, the pose information of the face to be corrected includes a pitch angle and a yaw angle of the face to be corrected; the face distortion correction apparatus further includes:
the face judgment module is used for judging whether the face to be corrected belongs to a reference face close to a shooting device, the reference face is a face with the face size exceeding a preset threshold, and the shooting device is a device for collecting the image to be processed;
the first determining module is used for determining that the face to be corrected is in a perspective distortion scene if the reference angle of the face to be corrected exceeds a preset angle and the face to be corrected belongs to a reference face close to a shooting device, wherein the reference angle of the face to be corrected refers to at least one of a pitch angle and a yaw angle of the face to be corrected;
and the second determining module is used for determining that the face to be corrected is not in a perspective distortion scene if the reference angle of the face to be corrected does not exceed the preset angle or the face to be corrected does not belong to a reference face close to a shooting device.
Optionally, the face judgment module includes:
the data acquisition unit is used for acquiring the face size of the face to be corrected and the density of face key points of the face to be corrected;
the first judging unit is used for judging that the face to be corrected belongs to a reference face close to a shooting device if the face size of the face to be corrected exceeds the preset threshold value and the density degree of the face key points of the face to be corrected does not exceed the preset density degree;
and the second judging unit is used for judging that the face to be corrected does not belong to a reference face close to the shooting device if the face size of the face to be corrected does not exceed the preset threshold value or the density degree of the face key points of the face to be corrected exceeds the preset density degree.
Optionally, the face to be corrected is framed by a face frame, and the data acquisition unit is specifically configured to:
and determining the distance of the face key points on the adjacent frames according to the face key point information on the adjacent frames of the face frames, and expressing the density degree of the face key points of the face to be corrected by using the distance of the face key points on the adjacent frames.
Optionally, the region acquiring module 81 is specifically configured to:
and under the condition that the face exists in the image to be processed, carrying out face segmentation on the image to be processed to obtain the face area to be corrected.
The face distortion correction device provided in the embodiment of the present application can be applied to the foregoing method embodiments, and for details, reference is made to the description of the foregoing method embodiments, and details are not repeated here.
Fig. 9 is a schematic structural diagram of an electronic device according to Embodiment 5 of the present application. As shown in fig. 9, the electronic device 9 of this embodiment includes: one or more processors 90 (only one shown), a memory 91, and a computer program 92 stored in the memory 91 and executable on the processor 90. The processor 90, when executing the computer program 92, implements the steps in the various face distortion correction method embodiments described above.
The electronic device 9 may include, but is not limited to, the processor 90 and the memory 91. Those skilled in the art will appreciate that fig. 9 is merely an example of the electronic device 9 and does not constitute a limitation of the electronic device 9, which may include more or fewer components than those shown, combine certain components, or have different components; for example, the electronic device may also include input-output devices, network access devices, buses, and the like.
The processor 90 may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 91 may be an internal storage unit of the electronic device 9, such as a hard disk or a memory of the electronic device 9. The memory 91 may also be an external storage device of the electronic device 9, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the electronic device 9. Further, the memory 91 may also include both an internal storage unit and an external storage device of the electronic device 9. The memory 91 is used for storing the computer program and other programs and data required by the electronic device. The memory 91 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the above-mentioned apparatus may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps that can be implemented in the above method embodiments.
The embodiments of the present application further provide a computer program product which, when executed on an electronic device, enables the electronic device to implement the steps in the above method embodiments.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/electronic device and method may be implemented in other ways. For example, the above-described apparatus/electronic device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow in the methods of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, realizes the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (12)

1. A face distortion correction method, comprising:
acquiring a face area to be corrected in an image to be processed;
acquiring the posture information of the face to be corrected in the face area to be corrected;
and under the condition that the face to be corrected is in a perspective distortion scene, carrying out distortion correction on the face area to be corrected according to the posture information of the face to be corrected.
2. The face distortion correction method as claimed in claim 1, wherein the posture information of the face to be corrected includes a pitch angle and a yaw angle of the face to be corrected; and the performing distortion correction on the face region to be corrected according to the posture information of the face to be corrected includes:
determining face distortion correction parameters according to internal parameters of a shooting device, and the pitch angle and the yaw angle of the face to be corrected, wherein the shooting device is a device for collecting the image to be processed;
and carrying out distortion correction on the face area to be corrected according to the face distortion correction parameters to obtain a corrected face area.
3. The method for correcting face distortion of claim 2, further comprising, after obtaining the corrected face region:
updating an optimization function according to the face area to be corrected and the corrected face area, wherein the optimization function is a function which aims at minimizing the difference between the face area to be corrected and the corrected face area;
and obtaining an optimized face area when the optimization function takes the minimum value.
4. A method for correcting face distortion as claimed in claim 3, further comprising, after obtaining the optimized face region:
and interpolating the optimized grids to obtain a target correction result of the face area to be corrected, wherein the optimized grids are used for describing the optimized face area.
5. The face distortion correction method of claim 1, wherein the posture information of the face to be corrected includes a pitch angle and a yaw angle of the face to be corrected; the face distortion correction method further includes:
judging whether the face to be corrected belongs to a reference face close to a shooting device, wherein the reference face is a face with the face size exceeding a preset threshold value, and the shooting device is a device for collecting the image to be processed;
if the reference angle of the face to be corrected exceeds a preset angle and the face to be corrected belongs to a reference face close to a shooting device, determining that the face to be corrected is in a perspective distortion scene, wherein the reference angle of the face to be corrected refers to at least one of a pitch angle and a yaw angle of the face to be corrected;
and if the reference angle of the face to be corrected does not exceed the preset angle or the face to be corrected does not belong to a reference face close to a shooting device, determining that the face to be corrected is not in a perspective distortion scene.
6. The method for correcting face distortion according to claim 5, wherein the determining whether the face to be corrected belongs to a reference face close to a camera comprises:
acquiring the size of the face to be corrected and the density of face key points of the face to be corrected;
if the size of the face to be corrected exceeds the preset threshold value and the density degree of the face key points of the face to be corrected does not exceed the preset density degree, judging that the face to be corrected belongs to a reference face close to a shooting device;
and if the size of the face to be corrected does not exceed the preset threshold value, or the density degree of the face key points of the face to be corrected exceeds the preset density degree, judging that the face to be corrected does not belong to a reference face close to a shooting device.
7. The method for correcting face distortion according to claim 6, wherein the face to be corrected is framed by a face frame, and the obtaining of the density of face key points of the face to be corrected comprises:
and determining the distance of the face key points on the adjacent frames according to the face key point information on the adjacent frames of the face frames, and expressing the density degree of the face key points of the face to be corrected by using the distance of the face key points on the adjacent frames.
8. The method for correcting face distortion according to any one of claims 1 to 7, wherein the acquiring a face region to be corrected in an image to be processed comprises:
and under the condition that the face exists in the image to be processed, carrying out face segmentation on the image to be processed to obtain the face area to be corrected.
9. A face distortion correction apparatus, comprising:
the region acquisition module is used for acquiring a face region to be corrected in the image to be processed;
the posture acquisition module is used for acquiring the posture information of the face to be corrected in the face area to be corrected;
and the distortion correction module is used for carrying out distortion correction on the face area to be corrected according to the posture information of the face to be corrected under the condition that the face to be corrected is in a perspective distortion scene.
10. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the face distortion correction method according to any one of claims 1 to 8 when executing the computer program.
11. A chip comprising a processor, wherein the processor is configured to read and execute a computer program stored in a memory to perform the steps of the face distortion correction method according to any one of claims 1 to 8.
12. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out the steps of the face distortion correction method according to any one of claims 1 to 8.
CN202111214281.7A 2021-10-19 2021-10-19 Face distortion correction method and device, electronic equipment, chip and storage medium Pending CN113870148A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111214281.7A CN113870148A (en) 2021-10-19 2021-10-19 Face distortion correction method and device, electronic equipment, chip and storage medium

Publications (1)

Publication Number Publication Date
CN113870148A true CN113870148A (en) 2021-12-31

Family

ID=79000427

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111214281.7A Pending CN113870148A (en) 2021-10-19 2021-10-19 Face distortion correction method and device, electronic equipment, chip and storage medium

Country Status (1)

Country Link
CN (1) CN113870148A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination