CN109377446B - Face image processing method and device, electronic equipment and storage medium


Info

Publication number
CN109377446B
Authority
CN
China
Prior art keywords
deformation
pixel
distance
determining
region
Legal status
Active
Application number
CN201811253127.9A
Other languages
Chinese (zh)
Other versions
CN109377446A (en)
Inventor
黄明杨
付万增
石建萍
曲艺
Current Assignee
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN201811253127.9A
Publication of CN109377446A
Application granted
Publication of CN109377446B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/04: Context-preserving transformations, e.g. by using an importance map

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure relates to a face image processing method and apparatus, an electronic device, and a storage medium. The method includes: acquiring original face key points and a face deflection angle in a face image; determining a plurality of continuous deformation regions according to the original face key points and the face deflection angle, wherein each deformation region includes an original face region and a non-original face region; and performing deformation processing on the original face region and the non-original face region in each deformation region respectively. The method and apparatus can avoid poor selection of the deformation regions caused by different face poses or deflection angles, that is, the deformation regions can be accurately positioned; moreover, deformation processing is applied not only to the original face region but also to a certain range outside it, so that the deformation effect is more natural.

Description

Face image processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer vision technologies, and in particular, to a method and an apparatus for processing a face image, an electronic device, and a storage medium.
Background
At present, slim face shapes such as the "goose egg" (oval) face and the "melon seed" (V-line) face are very popular. How to accurately perform a face-thinning operation on the face in a face image so that the result looks natural is an urgent problem to be solved.
Disclosure of Invention
The present disclosure provides a technical solution for processing a face image.
According to an aspect of the present disclosure, a method for processing a face image is provided, including:
acquiring original face key points and face deflection angles in a face image;
determining a plurality of continuous deformation regions according to the original face key points and the face deflection angle, wherein each deformation region comprises an original face region and a non-original face region;
and respectively carrying out deformation processing on the original face area and the non-original face area in each deformation area.
In a possible implementation manner, determining a plurality of continuous deformation regions according to the original face key points and the face deflection angle includes:
determining a deformation center in the face image according to the original face key points and the face deflection angle;
determining a plurality of continuous deformation regions according to the deformation center.
In a possible implementation manner, determining a center of deformation in the face image according to the original face key point and the face deflection angle includes:
determining an estimated position of a deformation center in the face image according to the original face key point;
and adjusting the estimated position of the deformation center according to the human face deflection angle to obtain the deformation center in the human face image.
In one possible implementation, determining a plurality of consecutive deformation regions according to the deformation center includes:
acquiring original face contour key points;
respectively determining reference points of a deformation region on an extension line of a line segment between the deformation center and each original face contour key point according to the original face contour key points to obtain the reference points of a plurality of deformation regions, wherein the distance between the reference point of the deformation region and the corresponding original face contour key point is less than the distance between the reference point of the deformation region and the deformation center;
and determining a plurality of continuous deformation areas according to the deformation center and the reference points of the plurality of deformation areas.
In one possible implementation, obtaining original face contour key points includes:
performing curve fitting on the contour key points in the original face key points to obtain an original face contour curve;
and sampling the original face contour curve to obtain key points of the original face contour.
In one possible implementation, obtaining original face contour key points includes:
and selecting partial contour key points from a plurality of contour key points in the original face key points as original face contour key points.
In one possible implementation manner, the deforming the original face region and the non-original face region in each deformed region separately includes:
acquiring a target face contour curve;
determining the intersection point of the deformation region and the target face contour curve as a target reference point in the deformation region;
according to the target reference point in the deformation region and the original face contour key point in the deformation region, carrying out deformation processing on the original face region;
and carrying out deformation processing on the non-original face region according to a target reference point in the deformation region, the original face contour key point in the deformation region and the reference point of the deformation region.
In one possible implementation, obtaining a target face contour curve includes:
and performing curve fitting on key points of the target face contour to obtain a target face contour curve.
In a possible implementation manner, the deforming the original face region according to the target reference point in the deformed region and the original face contour key point in the deformed region includes:
determining a line segment between the target reference points in the deformation region as a first line segment;
determining line segments among key points of the original face contour in the deformation area as second line segments;
for each pixel in the original face area in the deformation area, determining a target pixel corresponding to each pixel according to an intersection point of a connecting line of the deformation center and each pixel and the first line segment and the second line segment;
and updating the pixel value of each pixel to the pixel value of the corresponding target pixel.
In a possible implementation manner, for each pixel in the original face region in the deformed region, determining a target pixel corresponding to each pixel according to an intersection of a connecting line of the deformation center and each pixel and the first line segment and the second line segment, includes:
for a first pixel in the original face region in the deformed region, determining a first connecting line of the deformation center and the first pixel;
determining a first intersection point of the first connecting line and the first line segment and a second intersection point of the first connecting line and the second line segment;
and determining a target pixel corresponding to the first pixel according to the deformation center, the first pixel, the first intersection point and the second intersection point.
In a possible implementation manner, determining a target pixel corresponding to the first pixel according to the deformation center, the first pixel, and the positions of the first intersection point and the second intersection point includes:
acquiring a first distance between the deformation center and the first intersection point;
acquiring a second distance between the deformation center and the second intersection point;
acquiring a third distance between the center of deformation and the first pixel;
determining a fourth distance between the deformation center and a target pixel corresponding to the first pixel according to the first distance, the second distance and the third distance;
and determining a target pixel corresponding to the first pixel according to the fourth distance and the position of the deformation center.
In one possible implementation manner, determining a fourth distance between the deformation center and a target pixel corresponding to the first pixel according to the first distance, the second distance, and the third distance includes:
calculating a first ratio of the first distance to the second distance;
and calculating the product of the first ratio and the third distance to obtain a fourth distance between the deformation center and the target pixel corresponding to the first pixel.
In a possible implementation manner, the deforming the non-original face region according to the target reference point in the deformed region, the original face contour key point in the deformed region, and the reference point of the deformed region includes:
determining a line segment between the target reference points in the deformation region as a first line segment;
determining line segments among key points of the original face contour in the deformation region as second line segments;
determining a line segment between the reference points of the deformation region as a third line segment;
for each pixel in the non-original face area in the deformation area, determining a target pixel corresponding to each pixel according to the intersection point of the connecting line of the deformation center and each pixel and the first line segment, the second line segment and the third line segment;
and updating the pixel value of each pixel to the pixel value of the corresponding target pixel.
In a possible implementation manner, for each pixel in the non-original face region in the deformed region, determining a target pixel corresponding to each pixel according to an intersection of a connecting line between the deformation center and each pixel and the first line segment, the second line segment, and the third line segment includes:
for a second pixel in the non-original face region in the deformed region, determining a second line connecting the deformation center and the second pixel;
determining a third intersection point of the second connecting line and the first line segment, a fourth intersection point of the second connecting line and the second line segment, and a fifth intersection point of the second connecting line and the third line segment;
and determining a target pixel corresponding to the second pixel according to the positions of the deformation center, the second pixel, the third intersection point, the fourth intersection point and the fifth intersection point.
In a possible implementation manner, determining a target pixel corresponding to the second pixel according to the positions of the deformation center, the second pixel, the third intersection, the fourth intersection, and the fifth intersection includes:
acquiring a fifth distance between the deformation center and the third intersection point;
acquiring a sixth distance between the deformation center and the fourth intersection point;
acquiring a seventh distance between the deformation center and the fifth intersection point;
acquiring an eighth distance between the deformation center and the second pixel;
determining a ninth distance between the deformation center and a target pixel corresponding to the second pixel according to the fifth distance, the sixth distance, the seventh distance and the eighth distance;
and determining a target pixel corresponding to the second pixel according to the ninth distance and the position of the deformation center.
In one possible implementation manner, determining a ninth distance between the deformation center and a target pixel corresponding to the second pixel according to the fifth distance, the sixth distance, the seventh distance, and the eighth distance includes:
calculating a first difference of the seventh distance and the fifth distance;
calculating a second difference of the seventh distance and the sixth distance;
calculating a second ratio of the first difference to the second difference;
and calculating the product of the second ratio and the eighth distance to obtain a ninth distance between the deformation center and the target pixel corresponding to the second pixel.
In one possible implementation, the method is applied to one or more of face shape adjustment, chin extension adjustment, forehead plumping adjustment, apple muscle adjustment, and hairline adjustment.
According to an aspect of the present disclosure, there is provided a face image processing apparatus, including:
the acquisition module is used for acquiring original face key points and face deflection angles in the face image;
the determining module is used for determining a plurality of continuous deformation areas according to the original human face key points and the human face deflection angle, wherein each deformation area comprises an original face area and a non-original face area;
and the deformation processing module is used for respectively carrying out deformation processing on the original face area and the non-original face area in each deformation area.
In one possible implementation, the determining module includes:
the first determining submodule is used for determining a deformation center in the face image according to the original face key point and the face deflection angle;
and the second determining submodule is used for determining a plurality of continuous deformation areas according to the deformation center.
In one possible implementation, the first determining sub-module includes:
the first determining unit is used for determining the estimated position of a deformation center in the face image according to the original face key point;
and the adjusting unit is used for adjusting the estimated position of the deformation center according to the human face deflection angle to obtain the deformation center in the human face image.
In one possible implementation, the second determining sub-module includes:
the acquiring unit is used for acquiring key points of the original face contour;
a second determining unit, configured to determine, for a plurality of original face contour key points, reference points of a deformed region on an extension line of a line segment between the deformation center and each of the original face contour key points, respectively, to obtain reference points of a plurality of deformed regions, where a distance between a reference point of the deformed region and a corresponding original face contour key point is smaller than a distance between the reference point of the deformed region and the deformation center;
a third determining unit, configured to determine a plurality of continuous deformation regions according to the deformation center and the reference points of the plurality of deformation regions.
In one possible implementation manner, the obtaining unit includes:
the curve fitting subunit is used for performing curve fitting on the contour key points in the original face key points to obtain an original face contour curve;
and the acquisition subunit is used for sampling the original face contour curve to acquire the key points of the original face contour.
In a possible implementation manner, the obtaining unit is configured to:
and selecting partial contour key points from a plurality of contour key points in the original face key points as original face contour key points.
In one possible implementation, the deformation processing module includes:
the acquisition submodule is used for acquiring a target face contour curve;
a third determining submodule, configured to determine an intersection point of the deformation region and the target face contour curve as a target reference point in the deformation region;
the first deformation processing submodule is used for carrying out deformation processing on the original face area according to a target reference point in the deformation area and an original face contour key point in the deformation area;
and the second deformation processing submodule is used for carrying out deformation processing on the non-original face region according to the target reference point in the deformation region, the original face contour key point in the deformation region and the reference point of the deformation region.
In one possible implementation, the obtaining sub-module is configured to:
and performing curve fitting on key points of the target face contour to obtain a target face contour curve.
In one possible implementation, the first deformation processing sub-module includes:
a fourth determination unit configured to determine a line segment between the target reference points in the deformation region as a first line segment;
a fifth determining unit, configured to determine a line segment between key points of the original face contour in the deformation region as a second line segment;
a sixth determining unit, configured to determine, for each pixel in the original face area in the deformed area, a target pixel corresponding to each pixel according to an intersection of a connection line between the deformation center and each pixel and the first line segment and the second line segment;
and the first updating unit is used for respectively updating the pixel value of each pixel to the pixel value of the corresponding target pixel.
In one possible implementation manner, the sixth determining unit includes:
a first determining subunit, configured to determine, for a first pixel in the original face region in the deformed region, a first connection line between the deformation center and the first pixel;
a second determining subunit, configured to determine a first intersection point of the first connection line and the first line segment, and a second intersection point of the first connection line and the second line segment;
and the third determining subunit is configured to determine, according to the deformation center, the first pixel, and the positions of the first intersection and the second intersection, a target pixel corresponding to the first pixel.
In a possible implementation manner, the third determining subunit is configured to:
acquiring a first distance between the deformation center and the first intersection point;
acquiring a second distance between the deformation center and the second intersection point;
acquiring a third distance between the center of deformation and the first pixel;
determining a fourth distance between the deformation center and a target pixel corresponding to the first pixel according to the first distance, the second distance and the third distance;
and determining a target pixel corresponding to the first pixel according to the fourth distance and the position of the deformation center.
In one possible implementation manner, the third determining subunit is configured to:
calculating a first ratio of the first distance to the second distance;
and calculating the product of the first ratio and the third distance to obtain a fourth distance between the deformation center and the target pixel corresponding to the first pixel.
In one possible implementation, the second deformation processing sub-module includes:
a seventh determining unit configured to determine a line segment between the target reference points in the deformation region as a first line segment;
an eighth determining unit, configured to determine a line segment between key points of the original face contour in the deformation region as a second line segment;
a ninth determining unit for determining a line segment between the reference points of the deformed region as a third line segment;
a tenth determining unit, configured to determine, for each pixel in the non-original face region in the deformed region, a target pixel corresponding to each pixel according to an intersection of a connection line between the deformation center and each pixel and the first line segment, the second line segment, and the third line segment;
and the second updating unit is used for respectively updating the pixel value of each pixel to the pixel value of the corresponding target pixel.
In one possible implementation manner, the tenth determining unit includes:
a fourth determining subunit, configured to determine, for a second pixel in the non-original face area in the deformed area, a second line connecting the deformation center and the second pixel;
a fifth determining subunit, configured to determine a third intersection point of the second line and the first line segment, a fourth intersection point of the second line and the second line segment, and a fifth intersection point of the second line and the third line segment;
and a sixth determining subunit, configured to determine, according to the positions of the deformation center, the second pixel, the third intersection, the fourth intersection, and the fifth intersection, a target pixel corresponding to the second pixel.
In one possible implementation manner, the sixth determining subunit is configured to:
acquiring a fifth distance between the deformation center and the third intersection point;
acquiring a sixth distance between the deformation center and the fourth intersection point;
acquiring a seventh distance between the deformation center and the fifth intersection point;
acquiring an eighth distance between the center of deformation and the second pixel;
determining a ninth distance between the deformation center and a target pixel corresponding to the second pixel according to the fifth distance, the sixth distance, the seventh distance and the eighth distance;
and determining a target pixel corresponding to the second pixel according to the ninth distance and the position of the deformation center.
In one possible implementation manner, the sixth determining subunit is configured to:
calculating a first difference of the seventh distance and the fifth distance;
calculating a second difference of the seventh distance and the sixth distance;
calculating a second ratio of the first difference to the second difference;
and calculating the product of the second ratio and the eighth distance to obtain a ninth distance between the deformation center and the target pixel corresponding to the second pixel.
In one possible implementation, the apparatus is applied to one or more of face shape adjustment, chin extension adjustment, forehead plumping adjustment, apple muscle adjustment, and hairline adjustment.
According to an aspect of the present disclosure, there is provided an electronic device including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute the above face image processing method.
According to an aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method of processing a face image.
In the embodiments of the present disclosure, by acquiring the original face key points and the face deflection angle in a face image, determining a plurality of continuous deformation regions according to the original face key points and the face deflection angle, and performing deformation processing on the original face region and the non-original face region in each deformation region respectively, poor selection of the deformation regions caused by different face poses or deflection angles can be avoided; that is, the deformation regions can be accurately positioned. Moreover, deformation processing is applied not only to the original face region but also to a certain range outside it, so that the deformation effect is more natural.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flowchart of a method of processing a face image according to an embodiment of the present disclosure.
Fig. 2a is a schematic diagram of a face image before deformation processing in a face image processing method according to an embodiment of the present disclosure.
Fig. 2b is a schematic diagram of a face image after deformation processing in the face image processing method according to the embodiment of the disclosure.
Fig. 3 shows an exemplary flowchart of step S12 of the processing method of a face image according to the embodiment of the present disclosure.
Fig. 4 is a schematic diagram illustrating a deformation center and each deformation region in a processing method of a face image according to an embodiment of the present disclosure.
Fig. 5 shows an exemplary flowchart of step S121 of the processing method of the face image according to the embodiment of the disclosure.
Fig. 6 shows an exemplary flowchart of step S122 of the processing method of the face image according to the embodiment of the disclosure.
Fig. 7 shows an exemplary flowchart of step S1221 of the processing method of a face image according to an embodiment of the disclosure.
Fig. 8 shows an exemplary flowchart of step S13 of the processing method of a face image according to the embodiment of the present disclosure.
Fig. 9 is a schematic diagram illustrating a deformed region in a method for processing a face image according to an embodiment of the present disclosure.
Fig. 10 shows an exemplary flowchart of step S133 of the processing method of a face image according to the embodiment of the disclosure.
Fig. 11 shows an exemplary flowchart of step S1333 of the processing method of a face image according to the embodiment of the present disclosure.
Fig. 12 shows an exemplary flowchart of step S13333 of the processing method of a face image according to the embodiment of the present disclosure.
Fig. 13 shows an exemplary flowchart of step S133334 of the processing method of a face image according to the embodiment of the present disclosure.
Fig. 14 shows an exemplary flowchart of step S134 of the processing method of a face image according to the embodiment of the disclosure.
Fig. 15 shows an exemplary flowchart of step S1344 of a processing method of a face image according to an embodiment of the disclosure.
Fig. 16 shows an exemplary flowchart of step S13443 of the processing method of a face image according to an embodiment of the disclosure.
Fig. 17 shows an exemplary flowchart of step S134435 of the processing method of a face image according to an embodiment of the disclosure.
Fig. 18 is a schematic diagram illustrating deformation functions of an original face region and a non-original face region of a deformed region in a processing method of a face image according to an embodiment of the present disclosure.
Fig. 19 shows a block diagram of a face image processing apparatus according to an embodiment of the present disclosure.
Fig. 20 shows an exemplary block diagram of a processing apparatus for a face image according to an embodiment of the present disclosure.
Fig. 21 is a block diagram illustrating an electronic device 800 in accordance with an example embodiment.
Fig. 22 is a block diagram illustrating an electronic device 1900 in accordance with an example embodiment.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flowchart of a method of processing a face image according to an embodiment of the present disclosure. As shown in fig. 1, the method includes steps S11 through S13.
In step S11, original face key points and face deflection angles in the face image are acquired.
In one possible implementation, the original face key points may include one or more of eye key points, nose key points, mouth key points, chin key points, face contour key points, and the like in the face image.
In one possible implementation, the face deflection angle may represent the deflection angle of the face relative to the frontal orientation. For example, when the face is frontal, the face deflection angle may be 0; when the face deflects to the left relative to the frontal orientation, the face deflection angle may be equal to the included angle between the face and the frontal orientation; when the face deflects to the right relative to the frontal orientation, the absolute value of the face deflection angle may be equal to that included angle, and the face deflection angle is negative.
In step S12, a plurality of continuous deformation regions are determined according to the original face key points and the face deflection angles, wherein each deformation region includes an original face region and a non-original face region.
The plurality of continuous deformation regions may be a plurality of continuous triangular patches.
In the embodiments of the present disclosure, by determining a plurality of continuous deformation regions and performing deformation processing based on them, faces of different angles, sizes, and shapes can be handled flexibly, and a good deformation effect can be obtained in all of these cases.
In step S13, the deformation processing is performed on the original face region and the non-original face region in each of the deformed regions, respectively.
In the embodiments of the present disclosure, for each deformed region, both the original face region and the non-original face region are deformed, so that the deformation of the non-original face region absorbs the effect of deforming the original face region, and the deformation effect is more natural.
In the embodiment of the disclosure, each deformation area is subjected to deformation processing independently, so that the processing mode of the face image can be simplified, and the processing efficiency of the face image can be improved.
Fig. 2a is a schematic diagram of a face image before deformation processing in a face image processing method according to an embodiment of the present disclosure. Fig. 2b is a schematic diagram of a face image after deformation processing in the face image processing method according to the embodiment of the disclosure. In the example shown in fig. 2a and 2b, face thinning processing is performed on the face image.
According to the method and apparatus of the present disclosure, by acquiring the original face key points and the face deflection angle in the face image, determining a plurality of continuous deformation regions according to them, and performing deformation processing on the original face region and the non-original face region in each deformation region respectively, poor selection of the deformation regions caused by different face poses or deflection angles can be avoided; that is, the deformation regions can be accurately positioned. Deformation processing is applied not only to the original face region but also to a certain range outside it, so that the deformation effect is more natural. Based on the accurately positioned deformation regions, the face-thinning or face-fattening effect can be more natural.
Fig. 3 shows an exemplary flowchart of step S12 of the processing method of a face image according to the embodiment of the present disclosure. As shown in fig. 3, step S12 may include step S121 and step S122.
In step S121, a deformation center in the face image is determined according to the original face key point and the face deflection angle.
In a possible implementation manner, the center of deformation in the face image may be determined according to the eye key points and the chin key points in the original face key points, and the face deflection angle.
The example shown in fig. 3 combines the original face key points and the face deflection angles to determine the deformation center in the face image, so that the accuracy of the determined deformation center can be improved, and the accuracy of the subsequent determination of the deformation region can be improved.
In step S122, a plurality of continuous deformation regions are determined based on the deformation center.
In one possible implementation, a plurality of continuous deformation regions may be determined according to the deformation center and the contour key points in the original face key points.
In the disclosed embodiment, the deformation center may be a vertex of each deformation region, that is, each deformation region may intersect at the deformation center.
Fig. 4 is a schematic diagram illustrating a deformation center and each deformation region in the processing method of a face image according to the embodiment of the present disclosure. In the example shown in fig. 4, the center of deformation is below the tip of the nose and the deformed region is a continuous triangular patch.
Fig. 5 shows an exemplary flowchart of step S121 of the processing method of a face image according to the embodiment of the present disclosure. As shown in fig. 5, step S121 may include step S1211 and step S1212.
In step S1211, an estimated position of the deformation center in the face image is determined according to the original face key points.
In one possible implementation, the midpoint of the two pupil key points in the original face key points may be determined, and the estimated position of the deformation center may be determined on the line segment between this midpoint and the chin key point. For example, among the quartering points of the line segment between the midpoint and the chin key point, the quartering point farthest from the chin key point may be used as the estimated position of the deformation center.
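For illustration only, a minimal sketch of this estimate (the function and variable names are illustrative, not from the disclosure; key points are assumed to be (x, y) pixel coordinates):

```python
import numpy as np

def estimate_deformation_center(left_pupil, right_pupil, chin):
    """Estimate the deformation center as the quartering point of the
    segment from the pupils' midpoint to the chin key point that lies
    farthest from the chin, i.e., one quarter of the way from the
    midpoint toward the chin."""
    midpoint = (np.asarray(left_pupil, float) + np.asarray(right_pupil, float)) / 2.0
    chin = np.asarray(chin, float)
    return midpoint + 0.25 * (chin - midpoint)
```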
In step S1212, the estimated position of the deformation center is adjusted according to the face deflection angle, so as to obtain the deformation center in the face image.
In the embodiment of the present disclosure, if the face deflection angle is 0, the estimated position of the deformation center may be directly used as the deformation center in the face image without adjusting the estimated position of the deformation center. If the human face deflection angle is not 0, adjusting the estimated position of the deformation center according to the human face deflection angle to obtain the deformation center in the human face image.
Fig. 6 shows an exemplary flowchart of step S122 of the processing method of the face image according to the embodiment of the disclosure. As shown in fig. 6, step S122 may include steps S1221 to S1223.
In step S1221, original face contour key points are acquired.
In the embodiment of the present disclosure, the original face contour key points may be obtained based on the contour key points in the original face key points. For example, the contour key points in the original face key points can be used as original face contour key points.
In step S1222, for the plurality of original face contour key points, reference points of a deformation region are respectively determined on the extension lines of the line segment between the deformation center and each original face contour key point, resulting in a plurality of reference points of the deformation region, and the distance between the reference point of the deformation region and the corresponding original face contour key point is smaller than the distance between the reference point of the deformation region and the deformation center.
As shown in fig. 4, the reference point of the deformed region is on the extension line of the line segment between the deformed center and the key point of the original face contour, and the distance between the reference point of the deformed region and the corresponding key point of the original face contour is smaller than the distance between the reference point of the deformed region and the deformed center.
In step S1223, a plurality of continuous deformation regions are determined based on the deformation center and the reference points of the plurality of deformation regions.
As shown in fig. 4, the deformation center is connected to the reference points of the respective deformation regions, and the reference points of the adjacent deformation regions are connected to obtain a plurality of continuous deformation regions.
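A sketch of this construction (steps S1222 and S1223) under stated assumptions: the extension factor t below is illustrative, since the disclosure only requires the reference point to lie on the extension beyond the contour key point and closer to that key point than to the deformation center, which holds for any t > 0 here.

```python
import numpy as np

def build_deformation_regions(center, contour_keypoints, t=0.5):
    """Place each reference point on the extension of the ray from the
    deformation center A through a contour key point K, at R = K + t*(K - A),
    then form an open fan of triangular deformed regions (A, R_i, R_(i+1))."""
    a = np.asarray(center, float)
    refs = [np.asarray(k, float) + t * (np.asarray(k, float) - a)
            for k in contour_keypoints]
    regions = [(a, refs[i], refs[i + 1]) for i in range(len(refs) - 1)]
    return refs, regions
```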
Fig. 7 shows an exemplary flowchart of step S1221 of the processing method of a face image according to an embodiment of the disclosure. As shown in fig. 7, step S1221 may include step S12211 and step S12212.
In step S12211, curve fitting is performed on the contour key points in the original face key points to obtain an original face contour curve.
In this example, curve fitting may be performed based on a small number of contour key points in the original face key points, so as to obtain an original face contour curve. By performing curve fitting based on a small number of contour key points, the calculation amount can be reduced, and the processing speed of the face image can be improved.
In a possible implementation manner, curve fitting can be performed on contour key points in the original face key points by adopting a Catmull-Rom fitting method to obtain an original face contour curve.
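As an illustrative sketch (not the disclosure's exact fitting code), a uniform Catmull-Rom interpolation through the contour key points; it also returns the dense sampling used in the next step. Duplicating the endpoints for boundary handling is an assumption.

```python
import numpy as np

def catmull_rom_curve(points, samples_per_segment=10):
    """Fit a uniform Catmull-Rom spline through 2D key points and return
    densely sampled positions along the fitted contour curve."""
    pts = [np.asarray(p, float) for p in points]
    pts = [pts[0]] + pts + [pts[-1]]  # duplicate endpoints (an assumption)
    curve = []
    for i in range(1, len(pts) - 2):
        p0, p1, p2, p3 = pts[i - 1], pts[i], pts[i + 1], pts[i + 2]
        for t in np.linspace(0.0, 1.0, samples_per_segment, endpoint=False):
            # Standard uniform Catmull-Rom basis for the segment p1 -> p2.
            curve.append(0.5 * ((2 * p1)
                                + (-p0 + p2) * t
                                + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                                + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3))
    curve.append(pts[-2])  # include the final key point
    return np.array(curve)
```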
In step S12212, the original face contour curve is sampled to obtain key points of the original face contour.
In the embodiment of the present disclosure, the number of the obtained original face contour key points is greater than the number of contour key points in the original face key points.
In another possible implementation, obtaining original face contour key points includes: and selecting partial contour key points from a plurality of contour key points in the original face key points as original face contour key points. In this implementation, the original face contour key points may be directly obtained from a plurality of contour key points in the original face key points.
Fig. 8 shows an exemplary flowchart of step S13 of the processing method of a face image according to the embodiment of the present disclosure. As shown in fig. 8, step S13 may include steps S131 to S134.
In step S131, a target face contour curve is acquired.
In one possible implementation, obtaining a target face contour curve includes: and performing curve fitting on key points of the target face contour to obtain a target face contour curve.
As an example of the implementation manner, a Catmull-Rom fitting method may be adopted to perform curve fitting on key points of the target face contour to obtain a target face contour curve.
As one example of this implementation, the locations of the target face contour keypoints may be determined from operations that push or stretch the face.
In another possible implementation, obtaining a target face contour curve includes: obtaining the target face contour curve according to a face-thinning strength coefficient or a face-fattening strength coefficient and the position of the original face contour region, where the face-thinning strength coefficient or face-fattening strength coefficient may be customized by the user.
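One plausible realization of this (an assumption; the disclosure does not specify the formula) is to move each original contour key point toward or away from the deformation center according to the strength coefficient before curve fitting:

```python
import numpy as np

def target_contour_keypoints(center, contour_keypoints, strength):
    """Scale contour key points about the deformation center (assumed rule):
    strength < 1 moves them inward (face thinning),
    strength > 1 moves them outward (face fattening)."""
    a = np.asarray(center, float)
    return [a + strength * (np.asarray(k, float) - a) for k in contour_keypoints]
```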
In step S132, the intersection of the deformed region and the target face contour curve is determined as a target reference point in the deformed region.
In the embodiments of the present disclosure, when face thinning is performed, the distance between a target reference point and the deformation center is smaller than the distance between the corresponding original face contour key point and the deformation center; when face fattening is performed, the distance between a target reference point and the deformation center is larger than the distance between the corresponding original face contour key point and the deformation center.
In step S133, the original face region is subjected to the deformation processing based on the target reference points in the deformation region and the original face contour key points in the deformation region.
In the embodiments of the present disclosure, when face thinning is performed, the original face region is shrunk according to the target reference points in the deformed region and the original face contour key points in the deformed region; when face fattening is performed, the original face region is stretched according to the target reference points in the deformed region and the original face contour key points in the deformed region.
In step S134, the non-original face region is subjected to the deformation processing based on the target reference points in the deformation region, the original face contour key points in the deformation region, and the reference points of the deformation region.
In the embodiments of the present disclosure, when face thinning is performed, the non-original face region is stretched according to the target reference points in the deformed region, the original face contour key points in the deformed region, and the reference points of the deformed region; when face fattening is performed, the non-original face region is shrunk according to the same points.
Fig. 9 is a schematic diagram illustrating a deformed region in a method for processing a face image according to an embodiment of the present disclosure. In the example shown in fig. 9, A is the deformation center, B and C are the reference points of the deformed region, and connecting A, B, and C yields the deformed region. E and F are the original face contour key points. G and H are the intersection points of the deformed region and the target face contour curve, namely the target reference points. AEF is the original face region in the deformed region, EFBC is the non-original face region in the deformed region, and AGH is the target face region. P is any pixel in the deformed region; connecting A and P and extending, the ray AP intersects the line segment GH at point J, the line segment EF at point I, and the line segment BC at point K.
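The per-pixel construction above requires intersecting the ray AP with the segments GH, EF, and BC. A standard parametric ray-segment intersection sketch (illustrative, not from the disclosure):

```python
import numpy as np

def ray_segment_intersection(a, p, s0, s1, eps=1e-9):
    """Intersect the ray from a through p with the segment s0-s1.
    Returns the intersection point, or None if they do not cross."""
    a, p, s0, s1 = (np.asarray(v, float) for v in (a, p, s0, s1))
    d = p - a            # ray direction
    e = s1 - s0          # segment direction
    denom = d[0] * e[1] - d[1] * e[0]
    if abs(denom) < eps:  # parallel or degenerate
        return None
    w = s0 - a
    t = (w[0] * e[1] - w[1] * e[0]) / denom   # ray parameter, t >= 0
    u = (w[0] * d[1] - w[1] * d[0]) / denom   # segment parameter in [0, 1]
    if t >= 0.0 and 0.0 <= u <= 1.0:
        return a + t * d
    return None
```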
Fig. 10 shows an exemplary flowchart of step S133 of the processing method of a face image according to the embodiment of the disclosure. As shown in fig. 10, step S133 may include steps S1331 to S1334.
In step S1331, a line segment between the target reference points in the deformed region is determined as a first line segment.
For example, a line segment between the target reference points G and H in the deformed region ABC is determined as the first line segment GH.
In step S1332, a line segment between the key points of the original face contour in the deformed region is determined as a second line segment.
For example, a line segment between the key points E and F of the original face contour in the deformed region ABC is determined as the second line segment EF.
In step S1333, for each pixel in the original face region in the deformed region, a target pixel corresponding to each pixel is determined according to an intersection of a connecting line of the deformation center and each pixel and the first line segment and the second line segment.
For example, for a pixel P in the original face area AEF in the deformed area ABC, a target pixel Q corresponding to the pixel P is determined according to an intersection point J of a connecting line AP of the deformation center a and the pixel P and the first line segment GH, and an intersection point I of the connecting line AP of the deformation center a and the pixel P and the second line segment EF.
In step S1334, the pixel value of each pixel is updated to the pixel value of the corresponding target pixel.
For example, the pixel value of the pixel P is updated to the pixel value of the target pixel Q, that is, the pixel value of the target pixel Q is taken as the pixel value of the pixel P.
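Since the computed target position Q is generally non-integer, a practical implementation of steps S1331 to S1334 would sample the source image with interpolation before writing the value to P; the disclosure does not specify the interpolation, so the bilinear sampling below is an assumption:

```python
import numpy as np

def sample_bilinear(image, q):
    """Bilinearly sample an H x W (x C) image at float position q = (x, y)."""
    h, w = image.shape[:2]
    x = min(max(float(q[0]), 0.0), w - 1.001)  # clamp so x0 + 1 stays in range
    y = min(max(float(q[1]), 0.0), h - 1.001)
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * image[y0, x0] + fx * image[y0, x0 + 1]
    bottom = (1 - fx) * image[y0 + 1, x0] + fx * image[y0 + 1, x0 + 1]
    return (1 - fy) * top + fy * bottom
```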
Fig. 11 shows an exemplary flowchart of step S1333 of the processing method of a face image according to the embodiment of the present disclosure. As shown in fig. 11, step S1333 may include steps S13331 to S13333.
In step S13331, for a first pixel in the original face region in the deformed region, a first connection line of the deformation center and the first pixel is determined.
For example, for a first pixel P in original face region AEF in deformed region ABC, a first connecting line AP of deformation center a and first pixel P is determined.
In step S13332, a first intersection of the first line with the first line segment and a second intersection of the first line with the second line segment are determined.
For example, a first intersection J of the first connection AP with the first line GH and a second intersection I of the first connection AP with the second line EF are determined.
In step S13333, a target pixel corresponding to the first pixel is determined according to the deformation center, the first pixel, and the positions of the first intersection and the second intersection.
For example, the target pixel Q corresponding to the first pixel P is determined according to the positions of the deformation center a, the first pixel P, the first intersection J, and the second intersection I.
Fig. 12 shows an exemplary flowchart of step S13333 of the processing method of a face image according to the embodiment of the present disclosure. As shown in fig. 12, step S13333 may include steps S133331 to S133335.
In step S133331, a first distance between the center of deformation and the first intersection point is acquired.
For example, a first distance D_AJ between the deformation center A and the first intersection point J is acquired.
In step S133332, a second distance between the deformation center and the second intersection point is acquired.
For example, a second distance D_AI between the deformation center A and the second intersection point I is acquired.
In step S133333, a third distance between the center of deformation and the first pixel is acquired.
For example, a third distance D_AP between the deformation center A and the first pixel P is acquired.
In step S133334, a fourth distance between the deformation center and the target pixel corresponding to the first pixel is determined according to the first distance, the second distance, and the third distance.
For example, according to the first distance D_AJ, the second distance D_AI, and the third distance D_AP, a fourth distance D_AQ between the deformation center A and the target pixel Q corresponding to the first pixel P is determined.
In step S133335, a target pixel corresponding to the first pixel is determined according to the fourth distance and the position of the deformation center.
For example, according to the fourth distance D_AQ and the position of the deformation center A, the position of the target pixel Q corresponding to the first pixel P is determined.
Fig. 13 shows an exemplary flowchart of step S133334 of the processing method of a face image according to the embodiment of the present disclosure. As shown in fig. 13, the step S133334 may include steps S1333341 and S1333342.
In step S1333341, a first ratio of the first distance to the second distance is calculated.
For example, the first ratio D_AJ / D_AI of the first distance D_AJ to the second distance D_AI is calculated.
In step S1333342, a product of the first ratio and the third distance is calculated to obtain a fourth distance between the deformation center and the target pixel corresponding to the first pixel.
For example, the product of the first ratio D_AJ / D_AI and the third distance D_AP is calculated, obtaining the fourth distance D_AQ = (D_AJ / D_AI) × D_AP between the deformation center A and the target pixel Q corresponding to the first pixel P.
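Combining steps S133331 to S1333342, a sketch of the mapping for a pixel in the original face region (illustrative names; P is assumed distinct from A). The pixel value at Q can then be sampled, for example with the bilinear helper above, and written to P.

```python
import numpy as np

def target_pixel_original_region(a, p, j, i):
    """For pixel P in the original face region AEF, the target pixel Q
    lies on ray AP at distance D_AQ = (D_AJ / D_AI) * D_AP from A,
    where J and I are the intersections of ray AP with GH and EF."""
    a, p, j, i = (np.asarray(v, float) for v in (a, p, j, i))
    d_aj = np.linalg.norm(j - a)
    d_ai = np.linalg.norm(i - a)
    d_ap = np.linalg.norm(p - a)
    d_aq = d_aj / d_ai * d_ap
    return a + d_aq * (p - a) / d_ap  # Q on ray AP at distance D_AQ
```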
Fig. 14 shows an exemplary flowchart of step S134 of the processing method of a face image according to the embodiment of the disclosure. As shown in fig. 14, step S134 may include steps S1341 to S1345.
In step S1341, a line segment between the target reference points in the deformation region is determined as a first line segment.
For example, a line segment between the target reference points G and H in the deformed region ABC is determined as the first line segment GH.
In step S1342, the line segment between the key points of the original face contour in the deformed region is determined as the second line segment.
For example, a line segment between the key points E and F of the original face contour in the deformed region ABC is determined as a second line segment EF.
In step S1343, a line segment between the reference points of the deformed region is determined as a third line segment.
For example, a line segment between reference points B and C of the deformed region ABC is determined as the third line segment BC.
In step S1344, for each pixel in the non-original face region in the deformed region, a target pixel corresponding to each pixel is determined according to an intersection of a connecting line between the center of deformation and each pixel and the first line segment, the second line segment, and the third line segment.
For example, for a pixel P in the non-original face area EFBC in the deformed area ABC, a target pixel Q corresponding to the pixel P is determined according to intersections J, I and K of a connection line AP between the deformation center a and the pixel P and the first line GH, the second line EF, and the third line BC.
In step S1345, the pixel value of each pixel is updated to the pixel value of the corresponding target pixel.
For example, the pixel value of the pixel P is updated to the pixel value of the target pixel Q, that is, the pixel value of the target pixel Q is taken as the pixel value of the pixel P.
Fig. 15 shows an exemplary flowchart of step S1344 of the processing method of a face image according to the embodiment of the disclosure. As shown in fig. 15, step S1344 may include steps S13441 to S13443.
In step S13441, for a second pixel in the non-original face region in the deformed region, a second line connecting the center of deformation and the second pixel is determined.
For example, for a second pixel P in the non-original face region EFBC in the deformed region ABC, a second connecting line AP of the deformation center a and the second pixel P is determined.
In step S13442, a third intersection of the second line with the first line segment, a fourth intersection of the second line with the second line segment, and a fifth intersection of the second line with the third line segment are determined.
For example, a third intersection J of the second connection AP with the first line segment GH, a fourth intersection I of the second connection AP with the second line segment EF, and a fifth intersection K of the second connection AP with the third line segment BC are determined.
In step S13443, a target pixel corresponding to the second pixel is determined based on the positions of the deformation center, the second pixel, the third intersection, the fourth intersection, and the fifth intersection.
For example, the target pixel Q corresponding to the second pixel P is determined according to the positions of the deformation center a, the second pixel P, the third intersection J, the fourth intersection I, and the fifth intersection K.
Fig. 16 shows an exemplary flowchart of step S13443 of the processing method of a face image according to an embodiment of the disclosure. As shown in fig. 16, step S13443 may include steps S134431 to S134436.
In step S134431, a fifth distance between the deformation center and the third intersection is acquired.
For example, a fifth distance D_AJ between the deformation center A and the third intersection point J is acquired.
In step S134432, a sixth distance between the deformation center and the fourth intersection point is acquired.
For example, a sixth distance D_AI between the deformation center A and the fourth intersection point I is acquired.
In step S134433, a seventh distance between the deformation center and the fifth intersection is acquired.
For example, a seventh distance D_AK between the deformation center A and the fifth intersection point K is acquired.
In step S134434, an eighth distance between the center of deformation and the second pixel is acquired.
For example, an eighth distance D_AP between the deformation center A and the second pixel P is acquired.
In step S134435, a ninth distance between the deformation center and the target pixel corresponding to the second pixel is determined according to the fifth distance, the sixth distance, the seventh distance, and the eighth distance.
For example, according to the fifth distance D_AJ, the sixth distance D_AI, the seventh distance D_AK, and the eighth distance D_AP, a ninth distance D_AQ between the deformation center A and the target pixel Q corresponding to the second pixel P is determined.
In step S134436, a target pixel corresponding to the second pixel is determined based on the ninth distance and the position of the deformation center.
For example, the position of the target pixel Q corresponding to the second pixel P is determined according to the ninth distance $D_{AQ}$ and the position of the deformation center A.
Fig. 17 shows an exemplary flowchart of step S134435 of the processing method of a face image according to an embodiment of the disclosure. As shown in fig. 17, step S134435 may include steps S1344351 to S1344354.
In step S1344351, a first difference between the seventh distance and the fifth distance is calculated.
For example, a first difference $D_{AK} - D_{AJ}$ between the seventh distance $D_{AK}$ and the fifth distance $D_{AJ}$ is calculated.
In step S1344352, a second difference between the seventh distance and the sixth distance is calculated.
For example, a second difference $D_{AK} - D_{AI}$ between the seventh distance $D_{AK}$ and the sixth distance $D_{AI}$ is calculated.
In step S1344353, a second ratio of the first difference to the second difference is calculated.
For example, a second ratio $(D_{AK} - D_{AJ})/(D_{AK} - D_{AI})$ of the first difference $(D_{AK} - D_{AJ})$ to the second difference $(D_{AK} - D_{AI})$ is calculated.
In step S1344354, a product of the second ratio and the eighth distance is calculated to obtain a ninth distance between the deformation center and the target pixel corresponding to the second pixel.
For example, the product of the second ratio $(D_{AK} - D_{AJ})/(D_{AK} - D_{AI})$ and the eighth distance $D_{AP}$ is calculated, obtaining the ninth distance between the deformation center and the target pixel corresponding to the second pixel: $D_{AQ} = (D_{AK} - D_{AJ})/(D_{AK} - D_{AI}) \times D_{AP}$.
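Combining steps S134431 to S134436, the target pixel Q can be computed in closed form. The sketch below assumes the points are given as NumPy arrays, distances are Euclidean, and Q lies on the ray from A through P at the ninth distance, consistent with the example above:

```python
import numpy as np

def map_second_pixel(a, p, j, i, k):
    """Map a pixel P of the non-original face region to its target pixel Q.

    a: deformation center A; j, i, k: intersections of line AP with the
    first (GH), second (EF) and third (BC) line segments, respectively.
    """
    d_aj = np.linalg.norm(j - a)   # fifth distance
    d_ai = np.linalg.norm(i - a)   # sixth distance
    d_ak = np.linalg.norm(k - a)   # seventh distance
    d_ap = np.linalg.norm(p - a)   # eighth distance
    # ninth distance: D_AQ = (D_AK - D_AJ) / (D_AK - D_AI) * D_AP
    d_aq = (d_ak - d_aj) / (d_ak - d_ai) * d_ap
    # Q lies on the ray A -> P at distance D_AQ from A
    return a + (p - a) / d_ap * d_aq
```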
Fig. 18 is a schematic diagram illustrating deformation functions of the original face region and the non-original face region of a deformed region in the face image processing method according to an embodiment of the present disclosure. For example, the deformation function in the original face region AEF of the deformed region ABC is $y = \frac{D_{AJ}}{D_{AI}} x$, and the deformation function in the non-original face region EFBC of the deformed region ABC is $y = \frac{D_{AK} - D_{AJ}}{D_{AK} - D_{AI}} x$. That is,

$$y = \begin{cases} \dfrac{D_{AJ}}{D_{AI}}\,x, & 0 \le x \le D_{AI}, \\[1ex] \dfrac{D_{AK} - D_{AJ}}{D_{AK} - D_{AI}}\,x, & x > D_{AI}, \end{cases}$$

where $x = D_{AP}$ and $y = D_{AQ}$.
Note that, in the example shown in Fig. 18, the deformation function is a linear function. In other possible examples, the deformation function need not be linear; this is not limited herein.
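To make the mapping concrete, the two branches of the linear deformation function in Fig. 18 can be written as a single radial function of the source distance. The Python sketch below is illustrative; as noted above, a non-linear monotone function could be substituted without changing the structure:

```python
def radial_deformation(x, d_ai, d_aj, d_ak):
    """Map the source distance x = D_AP to the target distance y = D_AQ.

    d_ai: distance D_AI from center A to the original contour (segment EF);
    d_aj: distance D_AJ from A to the target contour (segment GH);
    d_ak: distance D_AK from A to the outer boundary (segment BC).
    """
    if 0 <= x <= d_ai:
        # original face region AEF: scale toward the target contour
        return d_aj / d_ai * x
    # non-original face region EFBC
    return (d_ak - d_aj) / (d_ak - d_ai) * x
```

For instance, a pixel exactly on the original face contour (x = d_ai) is mapped to the target contour (y = d_aj), which is the slimming or plumping displacement the warp is designed to produce.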
The face image processing method provided by the embodiments of the present disclosure has a certain fault tolerance: it deforms an entire band around the face contour rather than the contour alone, which reduces the negative influence of key point errors and makes the overall effect more stable. In addition, the method has linear complexity and processes face images quickly, so it can be applied to real-time camera preview, image processing, video processing, live-streaming software, game engines, and the like. In a possible implementation, the method may be applied to one or more of face shape adjustment, chin stretching adjustment, forehead plumping adjustment, apple muscle adjustment, and hairline adjustment.
It is understood that the above method embodiments of the present disclosure can be combined with one another to form combined embodiments without departing from the principles and logic described; owing to space limitations, the details are not repeated in the present disclosure.
In addition, the present disclosure also provides a face image processing apparatus, an electronic device, a computer-readable storage medium, and a program, each of which can be used to implement any of the face image processing methods provided by the present disclosure. For the corresponding technical solutions, refer to the descriptions in the method section; they are not repeated here for brevity.
Fig. 19 shows a block diagram of a face image processing apparatus according to an embodiment of the present disclosure. As shown in fig. 19, the apparatus includes: the acquisition module 21 is configured to acquire an original face key point and a face deflection angle in a face image; a determining module 22, configured to determine a plurality of continuous deformation regions according to the original face key points and the face deflection angles, where each deformation region includes an original face region and a non-original face region; and a deformation processing module 23, configured to perform deformation processing on the original face region and the non-original face region in each deformation region respectively.
Fig. 20 shows an exemplary block diagram of a processing apparatus for a face image according to an embodiment of the present disclosure. As shown in fig. 20:
in one possible implementation, the determining module 22 includes: the first determining submodule 221, configured to determine a deformation center in the face image according to the original face key point and the face deflection angle; a second determination submodule 222 for determining a plurality of successive deformation regions based on the deformation center.
In one possible implementation, the first determining submodule 221 includes: the first determining unit is used for determining the estimated position of a deformation center in the face image according to the key points of the original face; and the adjusting unit is used for adjusting the estimated position of the deformation center according to the deflection angle of the face to obtain the deformation center in the face image.
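The embodiment does not give formulas for the estimation and the adjustment. Purely as an illustration, one plausible reading is to estimate the center from the key points (for example, their centroid) and shift it horizontally in proportion to the yaw angle; the function below and its constant k are assumptions, not the patented method:

```python
import numpy as np

def estimate_deformation_center(keypoints, yaw_deg, face_width, k=0.5):
    """Hypothetical sketch only: centroid of the key points, shifted by yaw.

    keypoints:  (N, 2) array of original face key points.
    yaw_deg:    face deflection angle in degrees (sign convention assumed).
    face_width: horizontal extent of the face, used to scale the shift.
    k:          tuning constant; assumed, not specified by the patent.
    """
    center = np.asarray(keypoints, dtype=float).mean(axis=0)  # estimated position
    center[0] += k * face_width * np.sin(np.radians(yaw_deg))  # yaw adjustment
    return center
```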
In one possible implementation, the second determining submodule 222 includes: the acquisition unit is used for acquiring key points of the original face contour; a second determining unit, configured to determine, for the plurality of original face contour key points, reference points of a deformed region on an extension line of a line segment between a deformation center and each original face contour key point, respectively, to obtain reference points of the plurality of deformed regions, where a distance between a reference point of a deformed region and a corresponding original face contour key point is smaller than a distance between the reference point of the deformed region and the deformation center; and the third determining unit is used for determining a plurality of continuous deformation areas according to the deformation center and the reference points of the plurality of deformation areas.
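Concretely, such a reference point can be placed on the extension of the line from the deformation center A through a contour key point K, slightly beyond K. In the sketch below the extension factor s is an assumed parameter; any s > 0 satisfies the stated condition that the reference point is closer to K than to A:

```python
import numpy as np

def reference_point(a, k_pt, s=0.3):
    """Reference point on the extension of A -> K, beyond the key point K.

    a:    deformation center A, as a 2-element array (x, y).
    k_pt: original face contour key point K.
    s:    assumed extension factor. The result lies at (1 + s) * |AK| from A,
          so its distance to K (s * |AK|) is always less than its distance to A.
    """
    a = np.asarray(a, dtype=float)
    k_pt = np.asarray(k_pt, dtype=float)
    return k_pt + s * (k_pt - a)
```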
In one possible implementation, the obtaining unit includes: the curve fitting subunit is used for performing curve fitting on the contour key points in the original face key points to obtain an original face contour curve; and the acquisition subunit is used for sampling the original face contour curve and acquiring the key points of the original face contour.
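One common way to realize this fitting-and-sampling step is a parametric spline, for example via SciPy; this is an illustrative choice rather than the patent's prescribed method:

```python
import numpy as np
from scipy.interpolate import splev, splprep

def resample_contour(contour_kps, n_samples=32, smoothing=0.0):
    """Fit a parametric spline through contour key points and resample it.

    contour_kps: (N, 2) array of contour key points from the detector
                 (N must exceed the spline degree, i.e. at least 4 here).
    n_samples:   number of original face contour key points to return.
    """
    pts = np.asarray(contour_kps, dtype=float)
    tck, _ = splprep([pts[:, 0], pts[:, 1]], s=smoothing)  # fitted contour curve
    u = np.linspace(0.0, 1.0, n_samples)                   # uniform parameter samples
    xs, ys = splev(u, tck)
    return np.stack([xs, ys], axis=1)
```

Sampling more points yields narrower deformation regions and a smoother warp, at a proportionally higher per-frame cost.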
In one possible implementation manner, the obtaining unit is configured to: and selecting partial contour key points from a plurality of contour key points in the original face key points as original face contour key points.
In one possible implementation, the deformation processing module 23 includes: an obtaining submodule 231 for obtaining a target face contour curve; a third determining submodule 232, configured to determine an intersection point of the deformation region and the target face contour curve as a target reference point in the deformation region; a first deformation processing submodule 233, configured to perform deformation processing on the original face region according to the target reference point in the deformation region and the key point of the original face contour in the deformation region; the second deformation processing submodule 234 is configured to perform deformation processing on the non-original face region according to the target reference point in the deformation region, the key point of the original face contour in the deformation region, and the reference point of the deformation region.
In one possible implementation, the obtaining submodule 231 is configured to: and performing curve fitting on key points of the target face contour to obtain a target face contour curve.
In one possible implementation, the first deformation processing submodule 233 includes: a fourth determination unit configured to determine a line segment between the target reference points in the deformation region as the first line segment; a fifth determining unit, configured to determine a line segment between key points of the original face contour in the deformed region as a second line segment; a sixth determining unit, configured to determine, for each pixel in the original face region in the deformed region, a target pixel corresponding to each pixel according to an intersection of a connection line between the deformation center and each pixel and the first line segment and the second line segment; and the first updating unit is used for respectively updating the pixel value of each pixel to the pixel value of the corresponding target pixel.
In one possible implementation manner, the sixth determining unit includes: a first determining subunit, configured to determine, for a first pixel in the original face region in the deformed region, a first connection line between the deformation center and the first pixel; the second determining subunit is used for determining a first intersection point of the first connecting line and the first line segment and a second intersection point of the first connecting line and the second line segment; and the third determining subunit is used for determining the target pixel corresponding to the first pixel according to the deformation center, the first pixel, the positions of the first intersection point and the second intersection point.
In one possible implementation manner, the third determining subunit is configured to: acquiring a first distance between the deformation center and the first intersection point; acquiring a second distance between the deformation center and the second intersection point; acquiring a third distance between the deformation center and the first pixel; determining a fourth distance between the deformation center and a target pixel corresponding to the first pixel according to the first distance, the second distance and the third distance; and determining a target pixel corresponding to the first pixel according to the fourth distance and the position of the deformation center.
In one possible implementation, the third determining subunit is configured to: calculating a first ratio of the first distance to the second distance; and calculating the product of the first ratio and the third distance to obtain a fourth distance between the deformation center and the target pixel corresponding to the first pixel.
In one possible implementation, the second deformation processing sub-module 234 includes: a seventh determining unit configured to determine a line segment between the target reference points in the deformation region as the first line segment; an eighth determining unit, configured to determine a line segment between key points of the original face contour in the deformed region as a second line segment; a ninth determining unit for determining a line segment between the reference points of the deformed region as a third line segment; a tenth determining unit, configured to determine, for each pixel in the non-original face region in the deformed region, a target pixel corresponding to each pixel according to an intersection of a connection line between the deformation center and each pixel and the first line segment, the second line segment, and the third line segment; and the second updating unit is used for respectively updating the pixel value of each pixel to the pixel value of the corresponding target pixel.
In one possible implementation manner, the tenth determining unit includes: a fourth determining subunit, configured to determine, for a second pixel in a non-original face region in the deformed region, a second line connecting the deformation center and the second pixel; the fifth determining subunit is used for determining a third intersection point of the second connecting line and the first line segment, a fourth intersection point of the second connecting line and the second line segment, and a fifth intersection point of the second connecting line and the third line segment; and the sixth determining subunit is used for determining the target pixel corresponding to the second pixel according to the positions of the deformation center, the second pixel, the third intersection point, the fourth intersection point and the fifth intersection point.
In one possible implementation manner, the sixth determining subunit is configured to: acquiring a fifth distance between the deformation center and the third intersection point; acquiring a sixth distance between the deformation center and the fourth intersection point; acquiring a seventh distance between the deformation center and the fifth intersection point; acquiring an eighth distance between the deformation center and the second pixel; determining a ninth distance between the deformation center and a target pixel corresponding to the second pixel according to the fifth distance, the sixth distance, the seventh distance and the eighth distance; and determining a target pixel corresponding to the second pixel according to the ninth distance and the position of the deformation center.
In one possible implementation, the sixth determining subunit is configured to: calculating a first difference value between the seventh distance and the fifth distance; calculating a second difference between the seventh distance and the sixth distance; calculating a second ratio of the first difference to the second difference; and calculating the product of the second ratio and the eighth distance to obtain a ninth distance between the deformation center and the target pixel corresponding to the second pixel.
In one possible implementation, the apparatus is applied to one or more of face shape adjustment, chin stretching adjustment, forehead plumping adjustment, apple muscle adjustment, and hairline adjustment.
According to the embodiments of the present disclosure, original face key points and a face deflection angle are acquired from a face image, a plurality of continuous deformation regions are determined according to them, and the original face region and the non-original face region in each deformation region are deformed separately. This avoids poor selection of deformation regions caused by different face poses or angles, that is, the deformation regions can be accurately positioned. Moreover, deformation is applied not only to the original face region but also to a certain range outside it, so the deformation effect is more natural.
Embodiments of the present disclosure also provide a computer-readable storage medium, on which computer program instructions are stored, and when executed by a processor, the computer program instructions implement the above method. The computer readable storage medium may be a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 21 is a block diagram illustrating an electronic device 800 in accordance with an example embodiment. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and the like.
Referring to fig. 21, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communications component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect the open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor assembly 814 may also detect a change in the position of the electronic device 800 or of a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 22 is a block diagram illustrating an electronic device 1900 in accordance with an example embodiment. For example, the electronic device 1900 may be provided as a server. Referring to fig. 22, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the methods described above.
The electronic device 1900 may further include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
Computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), may execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, thereby implementing aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or technical improvements to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (32)

1. A method for processing a face image is characterized by comprising the following steps:
acquiring original face key points and face deflection angles in a face image;
determining a deformation center in the face image according to the original face key points and the face deflection angle;
determining a plurality of continuous deformation regions according to the deformation center, wherein each deformation region comprises an original face region and a non-original face region;
acquiring a target face contour curve;
for any one of the plurality of continuous deformation regions, determining an intersection point of the deformation region and the target face contour curve as a target reference point in the deformation region, performing deformation processing on the original face region according to the target reference point in the deformation region and original face contour key points in the deformation region, and performing deformation processing on the non-original face region according to the target reference point in the deformation region, the original face contour key points in the deformation region and reference points of the deformation region.
2. The method of claim 1, wherein determining a center of distortion in the face image based on the original face keypoints and the face deflection angle comprises:
determining the estimated position of a deformation center in the face image according to the original face key point;
and adjusting the estimated position of the deformation center according to the human face deflection angle to obtain the deformation center in the human face image.
3. The method according to claim 1 or 2, wherein determining a plurality of successive deformation regions from the deformation center comprises:
acquiring original face contour key points;
respectively determining reference points of a deformation region on an extension line of a line segment between the deformation center and each original face contour key point according to the original face contour key points to obtain the reference points of a plurality of deformation regions, wherein the distance between the reference point of the deformation region and the corresponding original face contour key point is less than the distance between the reference point of the deformation region and the deformation center;
and determining a plurality of continuous deformation areas according to the deformation center and the reference points of the plurality of deformation areas.
4. The method of claim 3, wherein obtaining original face contour keypoints comprises:
performing curve fitting on the contour key points in the original face key points to obtain an original face contour curve;
and sampling the original face contour curve to obtain key points of the original face contour.
5. The method of claim 3, wherein obtaining original face contour keypoints comprises:
and selecting partial contour key points from a plurality of contour key points in the original face key points as original face contour key points.
6. The method of claim 1, wherein obtaining a target face contour curve comprises:
and performing curve fitting on key points of the target face contour to obtain a target face contour curve.
7. The method according to claim 1 or 6, wherein the deforming the original face region according to the target reference points in the deformed region and the original face contour key points in the deformed region comprises:
determining a line segment between the target reference points in the deformation region as a first line segment;
determining line segments among key points of the original face contour in the deformation area as second line segments;
for each pixel in the original face area in the deformation area, determining a target pixel corresponding to each pixel according to an intersection point of a connecting line of the deformation center and each pixel and the first line segment and the second line segment;
and respectively updating the pixel value of each pixel to the pixel value of the corresponding target pixel.
8. The method according to claim 7, wherein for each pixel in the original face area in the deformed area, determining a target pixel corresponding to each pixel according to an intersection of a connecting line of the center of deformation and each pixel and the first line segment and the second line segment comprises:
for a first pixel in the original face region in the deformed region, determining a first connecting line of the deformation center and the first pixel;
determining a first intersection point of the first connecting line and the first line segment and a second intersection point of the first connecting line and the second line segment;
and determining a target pixel corresponding to the first pixel according to the deformation center, the first pixel, and the positions of the first intersection point and the second intersection point.
9. The method of claim 8, wherein determining a target pixel corresponding to the first pixel according to the locations of the deformation center, the first pixel, the first intersection, and the second intersection comprises:
acquiring a first distance between the deformation center and the first intersection point;
acquiring a second distance between the deformation center and the second intersection point;
acquiring a third distance between the center of deformation and the first pixel;
determining a fourth distance between the deformation center and a target pixel corresponding to the first pixel according to the first distance, the second distance and the third distance;
and determining a target pixel corresponding to the first pixel according to the fourth distance and the position of the deformation center.
10. The method of claim 9, wherein determining a fourth distance between the center of deformation and a target pixel corresponding to the first pixel from the first distance, the second distance, and the third distance comprises:
calculating a first ratio of the first distance to the second distance;
and calculating the product of the first ratio and the third distance to obtain a fourth distance between the deformation center and the target pixel corresponding to the first pixel.
11. The method according to claim 1 or 6, wherein the deformation processing of the non-original face region according to the target reference point in the deformation region, the original face contour key point in the deformation region and the reference point of the deformation region comprises:
determining a line segment between the target reference points in the deformation region as a first line segment;
determining line segments among key points of the original face contour in the deformation area as second line segments;
determining a line segment between the reference points of the deformation region as a third line segment;
for each pixel in the non-original face area in the deformation area, determining a target pixel corresponding to each pixel according to the intersection point of the connecting line of the deformation center and each pixel and the first line segment, the second line segment and the third line segment;
and updating the pixel value of each pixel to the pixel value of the corresponding target pixel.
12. The method according to claim 11, wherein for each pixel in the non-original face region in the deformed region, determining a target pixel corresponding to the each pixel according to an intersection of a connecting line of the center of deformation and the each pixel and the first line segment, the second line segment, and the third line segment comprises:
for a second pixel in the non-original face region in the deformed region, determining a second line connecting the deformation center and the second pixel;
determining a third intersection point of the second connecting line and the first line segment, a fourth intersection point of the second connecting line and the second line segment, and a fifth intersection point of the second connecting line and the third line segment;
and determining a target pixel corresponding to the second pixel according to the positions of the deformation center, the second pixel, the third intersection point, the fourth intersection point and the fifth intersection point.
13. The method of claim 12, wherein determining a target pixel corresponding to the second pixel according to the positions of the deformation center, the second pixel, the third intersection, the fourth intersection, and the fifth intersection comprises:
acquiring a fifth distance between the deformation center and the third intersection point;
acquiring a sixth distance between the deformation center and the fourth intersection point;
acquiring a seventh distance between the deformation center and the fifth intersection point;
acquiring an eighth distance between the center of deformation and the second pixel;
determining a ninth distance between the deformation center and a target pixel corresponding to the second pixel according to the fifth distance, the sixth distance, the seventh distance and the eighth distance;
and determining a target pixel corresponding to the second pixel according to the ninth distance and the position of the deformation center.
14. The method of claim 13, wherein determining a ninth distance between the center of deformation and a target pixel corresponding to the second pixel according to the fifth distance, the sixth distance, the seventh distance, and the eighth distance comprises:
calculating a first difference of the seventh distance and the fifth distance;
calculating a second difference of the seventh distance and the sixth distance;
calculating a second ratio of the first difference to the second difference;
and calculating the product of the second ratio and the eighth distance to obtain a ninth distance between the deformation center and the target pixel corresponding to the second pixel.
15. The method according to claim 1 or 2, wherein the method is applied to one or more of face shape adjustment, chin stretching adjustment, forehead plumping adjustment, apple muscle adjustment, and hairline adjustment.
16. An apparatus for processing a face image, comprising:
the acquisition module is used for acquiring original face key points and face deflection angles in the face image;
the determining module is used for determining a plurality of continuous deformation areas according to the original human face key points and the human face deflection angle, wherein each deformation area comprises an original face area and a non-original face area;
the deformation processing module is used for respectively carrying out deformation processing on the original face area and the non-original face area in each deformation area;
the determining module comprises:
the first determining submodule is used for determining a deformation center in the face image according to the original face key point and the face deflection angle;
the second determining submodule is used for determining a plurality of continuous deformation areas according to the deformation center;
the deformation processing module includes:
the acquisition submodule is used for acquiring a target face contour curve;
a third determining submodule, configured to determine an intersection point of the deformation region and the target face contour curve as a target reference point in the deformation region;
the first deformation processing submodule is used for carrying out deformation processing on the original face area according to a target reference point in the deformation area and an original face contour key point in the deformation area;
and the second deformation processing submodule is used for carrying out deformation processing on the non-original face region according to the target reference point in the deformation region, the original face contour key point in the deformation region and the reference point of the deformation region.
17. The apparatus of claim 16, wherein the first determining submodule comprises:
the first determining unit is used for determining the estimated position of a deformation center in the face image according to the original face key point;
and the adjusting unit is used for adjusting the estimated position of the deformation center according to the human face deflection angle to obtain the deformation center in the human face image.
18. The apparatus of claim 16 or 17, wherein the second determining submodule comprises:
the acquiring unit is used for acquiring key points of the original face contour;
a second determining unit, configured to determine, for a plurality of original face contour key points, reference points of a deformed region on an extension line of a line segment between the deformation center and each of the original face contour key points, respectively, to obtain reference points of a plurality of deformed regions, where a distance between a reference point of the deformed region and a corresponding original face contour key point is smaller than a distance between the reference point of the deformed region and the deformation center;
a third determining unit, configured to determine a plurality of continuous deformation regions according to the deformation center and the reference points of the plurality of deformation regions.
19. The apparatus of claim 18, wherein the obtaining unit comprises:
the curve fitting subunit is used for performing curve fitting on the contour key points in the original face key points to obtain an original face contour curve;
and the acquisition subunit is used for sampling the original face contour curve to acquire the key points of the original face contour.
20. The apparatus of claim 18, wherein the obtaining unit is configured to:
and selecting partial contour key points from a plurality of contour key points in the original face key points as original face contour key points.
21. The apparatus of claim 16, wherein the acquisition sub-module is configured to:
and performing curve fitting on key points of the target face contour to obtain a target face contour curve.
22. The apparatus of claim 16 or 21, wherein the first deformation processing submodule comprises:
a fourth determination unit configured to determine a line segment between the target reference points in the deformation region as a first line segment;
a fifth determining unit, configured to determine a line segment between key points of the original face contour in the deformation region as a second line segment;
a sixth determining unit, configured to determine, for each pixel in the original face region in the deformed region, a target pixel corresponding to each pixel according to an intersection of a connection line between the deformation center and each pixel and the first line segment and the second line segment;
and the first updating unit is used for respectively updating the pixel value of each pixel to the pixel value of the corresponding target pixel.
23. The apparatus of claim 22, wherein the sixth determining unit comprises:
a first determining subunit, configured to determine, for a first pixel in the original face region in the deformed region, a first connection line between the deformation center and the first pixel;
a second determining subunit, configured to determine a first intersection point of the first connection line and the first line segment, and a second intersection point of the first connection line and the second line segment;
and the third determining subunit is used for determining a target pixel corresponding to the first pixel according to the deformation center, the first pixel, and the positions of the first intersection point and the second intersection point.
24. The apparatus of claim 23, wherein the third determining subunit is configured to:
acquiring a first distance between the deformation center and the first intersection point;
acquiring a second distance between the deformation center and the second intersection point;
acquiring a third distance between the center of deformation and the first pixel;
determining a fourth distance between the deformation center and a target pixel corresponding to the first pixel according to the first distance, the second distance and the third distance;
and determining a target pixel corresponding to the first pixel according to the fourth distance and the position of the deformation center.
25. The apparatus of claim 24, wherein the third determining subunit is configured to:
calculating a first ratio of the first distance to the second distance;
and calculating the product of the first ratio and the third distance to obtain a fourth distance between the deformation center and the target pixel corresponding to the first pixel.
26. The apparatus of claim 16 or 21, wherein the second deformation processing submodule comprises:
a seventh determining unit configured to determine a line segment between the target reference points in the deformation region as a first line segment;
an eighth determining unit, configured to determine a line segment between key points of the original face contour in the deformation region as a second line segment;
a ninth determining unit configured to determine a line segment between the reference points of the deformation region as a third line segment;
a tenth determining unit, configured to determine, for each pixel in the non-original face region in the deformed region, a target pixel corresponding to each pixel according to an intersection of a connection line between the deformation center and each pixel and the first line segment, the second line segment, and the third line segment;
and a second updating unit, configured to update the pixel values of the pixels to the pixel values of the corresponding target pixels, respectively.
27. The apparatus of claim 26, wherein the tenth determining unit comprises:
a fourth determining subunit, configured to determine, for a second pixel in the non-original face region in the deformed region, a second line connecting the deformation center and the second pixel;
a fifth determining subunit, configured to determine a third intersection point of the second connection line and the first line segment, a fourth intersection point of the second connection line and the second line segment, and a fifth intersection point of the second connection line and the third line segment;
and a sixth determining subunit, configured to determine, according to the positions of the deformation center, the second pixel, the third intersection, the fourth intersection, and the fifth intersection, a target pixel corresponding to the second pixel.
28. The apparatus of claim 27, wherein the sixth determining subunit is configured to:
acquiring a fifth distance between the deformation center and the third intersection point;
acquiring a sixth distance between the deformation center and the fourth intersection point;
acquiring a seventh distance between the deformation center and the fifth intersection point;
acquiring an eighth distance between the deformation center and the second pixel;
determining a ninth distance between the deformation center and a target pixel corresponding to the second pixel according to the fifth distance, the sixth distance, the seventh distance and the eighth distance;
and determining a target pixel corresponding to the second pixel according to the ninth distance and the position of the deformation center.
29. The apparatus of claim 28, wherein the sixth determining subunit is configured to:
calculating a first difference of the seventh distance and the fifth distance;
calculating a second difference of the seventh distance and the sixth distance;
calculating a second ratio of the first difference to the second difference;
and calculating the product of the second ratio and the eighth distance to obtain a ninth distance between the deformation center and the target pixel corresponding to the second pixel.
30. The apparatus according to claim 16 or 17, wherein the apparatus is applied to one or more of face shape adjustment, chin stretching adjustment, forehead plumping adjustment, apple muscle adjustment, and hairline adjustment.
31. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to: perform the method of any one of claims 1 to 15.
32. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 15.
CN201811253127.9A 2018-10-25 2018-10-25 Face image processing method and device, electronic equipment and storage medium Active CN109377446B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811253127.9A CN109377446B (en) 2018-10-25 2018-10-25 Face image processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109377446A (en) 2019-02-22
CN109377446B (en) 2022-08-30

Family

ID=65401503

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811253127.9A Active CN109377446B (en) 2018-10-25 2018-10-25 Face image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109377446B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109949237A (en) * 2019-03-06 2019-06-28 北京市商汤科技开发有限公司 Image processing method and device, vision facilities and storage medium
CN109934766B (en) 2019-03-06 2021-11-30 北京市商汤科技开发有限公司 Image processing method and device
CN109993067B (en) * 2019-03-07 2022-01-28 北京旷视科技有限公司 Face key point extraction method and device, computer equipment and storage medium
CN112070681B (en) * 2019-05-24 2024-02-13 北京小米移动软件有限公司 Image processing method and device
CN112492211A (en) * 2020-12-01 2021-03-12 咪咕文化科技有限公司 Shooting method, electronic equipment and storage medium
CN116188290A (en) * 2022-12-09 2023-05-30 北京字跳网络技术有限公司 Image processing method, device, equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107958479A (en) * 2017-12-26 2018-04-24 南京开为网络科技有限公司 A kind of mobile terminal 3D faces augmented reality implementation method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2550283C1 (en) * 2013-11-21 2015-05-10 Александр Петрович Худяков Method for typing individual's psychics

Also Published As

Publication number Publication date
CN109377446A (en) 2019-02-22

Similar Documents

Publication Publication Date Title
CN109377446B (en) Face image processing method and device, electronic equipment and storage medium
CN109784255B (en) Neural network training method and device and recognition method and device
CN107692997B (en) Heart rate detection method and device
KR101694643B1 (en) Method, apparatus, device, program, and recording medium for image segmentation
CN109816764B (en) Image generation method and device, electronic equipment and storage medium
CN109446912B (en) Face image processing method and device, electronic equipment and storage medium
CN110288716B (en) Image processing method, device, electronic equipment and storage medium
CN107944367B (en) Face key point detection method and device
CN109840939B (en) Three-dimensional reconstruction method, three-dimensional reconstruction device, electronic equipment and storage medium
CN111243011A (en) Key point detection method and device, electronic equipment and storage medium
CN111241887B (en) Target object key point identification method and device, electronic equipment and storage medium
CN110933488A (en) Video editing method and device
CN111553864A (en) Image restoration method and device, electronic equipment and storage medium
CN109325908B (en) Image processing method and device, electronic equipment and storage medium
EP3113071A1 (en) Method and device for acquiring iris image
CN110928627A (en) Interface display method and device, electronic equipment and storage medium
CN110930336A (en) Image processing method and device, electronic equipment and storage medium
CN111563138A (en) Positioning method and device, electronic equipment and storage medium
CN112767288A (en) Image processing method and device, electronic equipment and storage medium
CN111860388A (en) Image processing method and device, electronic equipment and storage medium
CN113538310A (en) Image processing method and device, electronic equipment and storage medium
CN112541971A (en) Point cloud map construction method and device, electronic equipment and storage medium
CN110807769B (en) Image display control method and device
CN113012052A (en) Image processing method and device, electronic equipment and storage medium
CN112613447B (en) Key point detection method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant