WO2020177394A1 - An image processing method and device (一种图像处理方法及装置)

An image processing method and device (一种图像处理方法及装置)

Info

Publication number
WO2020177394A1
Authority
WO
WIPO (PCT)
Prior art keywords
area
key point
deformation
point information
face
Prior art date
Application number
PCT/CN2019/119534
Other languages
English (en)
French (fr)
Inventor
苏柳
杨瑞健
Original Assignee
北京市商汤科技开发有限公司 (Beijing SenseTime Technology Development Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 北京市商汤科技开发有限公司 (Beijing SenseTime Technology Development Co., Ltd.)
Priority to SG11202006345UA
Priority to KR1020207013711A (published as KR102442483B1)
Priority to JP2020536145A (published as JP7160925B2)
Priority to US16/920,972 (published as US11244449B2)
Publication of WO2020177394A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
        • G06T 3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
        • G06T 3/4007: Scaling based on interpolation, e.g. bilinear interpolation
        • G06T 7/33: Determination of transform parameters for the alignment of images (image registration) using feature-based methods
        • G06T 7/0012: Biomedical image inspection
        • G06T 7/13: Edge detection
        • G06T 7/60: Analysis of geometric attributes
        • G06T 11/60: Editing figures and text; combining figures or text
        • G06T 2207/30201: Subject of image: human being; person; face
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
        • G06V 40/161: Human faces: detection; localisation; normalisation
        • G06V 40/171: Local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
        • G06V 40/172: Classification, e.g. identification
        • G06V 10/247: Aligning, centring, orientation detection or correction of the image by affine transforms, e.g. correction due to perspective effects

Definitions

  • This application relates to image processing technology, and in particular to an image processing method and device.
  • The embodiments of the present application provide an image processing method and device.
  • An embodiment of the present application provides an image processing method. The method includes: obtaining a first image, identifying a face area in the first image, and determining key point information related to the face area, where the key point information includes key point information of the face area and outer edge key point information, and the area corresponding to the outer edge key point information includes the face area and is larger than the face area; determining multiple deformation areas based on the key point information; and performing image deformation processing on the face area based on at least a part of the multiple deformation areas to generate a second image.
  • The key point information of the face area includes key point information of the organs of the face area and key point information of the edge of the face area; the edge of the face area corresponds to the contour of the face area. The key point information of an organ includes the center key point information of the organ and/or the contour key point information of the organ.
  • Determining multiple deformation areas based on the key point information includes: determining the multiple deformation areas based on any three adjacent key points in the key point information.
  • Performing image deformation processing on the face area based on at least a part of the multiple deformation areas includes: determining a first target area to be processed in the face area; determining, based on the key point information corresponding to the first target area, the deformation areas corresponding to the first target area from the multiple deformation areas; and performing image deformation processing on the deformation areas corresponding to the first target area.
  • In some embodiments, the first target area is an eye area, including a left eye area and/or a right eye area. Determining, based on the key point information corresponding to the first target area, the deformation areas corresponding to the first target area from the multiple deformation areas includes: determining, based on key point information corresponding to the left eye area, a first group of deformation areas corresponding to the left eye area, and/or determining, based on key point information corresponding to the right eye area, a second group of deformation areas corresponding to the right eye area.
  • Performing image deformation processing on the deformation areas corresponding to the first target area includes: performing image deformation processing on the first group and/or the second group of deformation areas, where the image deformation direction of the first group is opposite to that of the second group, so that the distance between the left eye area and the right eye area is increased or decreased.
  • In some embodiments, the first target area is an eye corner area, including the corner area of the left eye and/or the corner area of the right eye. Determining the deformation areas corresponding to the first target area from the multiple deformation areas includes: determining, based on key point information corresponding to the left eye corner area, a third group of deformation areas, and/or determining, based on key point information corresponding to the right eye corner area, a fourth group of deformation areas.
  • Performing image deformation processing on the deformation areas corresponding to the first target area includes: stretching or compressing the third group and/or the fourth group of deformation areas in a first specific direction to adjust the position of the corner of the left eye area and/or the position of the corner of the right eye area.
  • In some embodiments, the first target area is an eye area, including a left eye area and/or a right eye area. Determining the deformation areas corresponding to the first target area includes: determining, based on key point information corresponding to the left eye area, a fifth group of deformation areas, and/or determining, based on key point information corresponding to the right eye area, a sixth group of deformation areas.
  • Performing image deformation processing on the deformation areas corresponding to the first target area includes: deforming the fifth group of deformation areas so that the contour key points of the left eye area rotate relative to the center key point of the left eye area by a first set angle, and/or deforming the sixth group of deformation areas so that the contour key points of the right eye area rotate relative to the center key point of the right eye area by a second set angle.
  • In some embodiments, the first target area is a nose area. Determining the deformation areas corresponding to the first target area includes: determining, based on key point information corresponding to the nose area, a seventh group of deformation areas from the multiple deformation areas. Performing image deformation processing on the deformation areas corresponding to the first target area includes: stretching or compressing the seventh group of deformation areas in a second specific direction to lengthen or shorten the nose area.
  • In some embodiments, the first target area is a nose wing (ala) area. Determining the deformation areas corresponding to the first target area includes: determining, based on key point information corresponding to the nose wing area, an eighth group of deformation areas from the multiple deformation areas. Performing image deformation processing includes: compressing or stretching the eighth group of deformation areas in a third specific direction to narrow or widen the nose wing area.
  • In some embodiments, the first target area is a chin area or a philtrum area. Determining the deformation areas corresponding to the first target area includes: determining, based on key point information corresponding to the chin or philtrum area, a ninth group of deformation areas from the multiple deformation areas. Performing image deformation processing includes: compressing or stretching the ninth group of deformation areas in a fourth specific direction to shorten or lengthen the chin or philtrum area.
  • In some embodiments, the first target area is a mouth area. Determining the deformation areas corresponding to the first target area includes: determining, based on key point information corresponding to the mouth area, a tenth group of deformation areas from the multiple deformation areas. Performing image deformation processing includes: compressing the tenth group of deformation areas in the direction from the edge of the mouth area toward its center, or stretching them in the direction from the center of the mouth area toward its edge.
  • In some embodiments, determining the deformation areas corresponding to the first target area based on the corresponding key point information includes: determining, based on the key point information of the edge of the face area, an eleventh group of deformation areas corresponding to the face area from the multiple deformation areas. Performing image deformation processing includes: compressing the eleventh group of deformation areas in the direction from the edge of the face area toward its midline, or stretching them in the direction from the midline toward the edge.
  • In some embodiments, the first target area is a forehead area. Determining the deformation areas corresponding to the first target area includes: determining, based on key point information of the forehead area, a twelfth group of deformation areas from the multiple deformation areas. Performing image deformation processing includes: stretching or compressing the twelfth group of deformation areas in a fifth specific direction to raise or lower the hairline of the face area; the fifth specific direction is the direction in which a key point of the forehead area points toward the center of the eyebrow closest to that key point, or the direction in which the key point moves away from that eyebrow center.
  • Determining the key point information of the forehead area includes: determining at least three key points of the forehead area, and determining the key point information of the forehead area based on the at least three key points and a first group of contour point information below the eyes in the face area.
  • A first key point among the at least three key points is located on the midline of the forehead area; a second key point and a third key point among the at least three key points are located on the two sides of the midline.
  • Determining the key point information of the forehead area based on the at least three key points and the first group of contour point information includes: performing curve fitting on the key points at the two ends of the first group of contour point information and the at least three key points to obtain curve-fitting key point information, and performing interpolation processing on the curve-fitting key point information based on a curve interpolation algorithm to obtain the key point information of the forehead area.
  • Determining the key point information related to the face area includes: detecting the face area through a facial key point detection algorithm to obtain key point information of the organs contained in the face area and key point information of the edge of the face area, and obtaining the outer edge key point information based on the key point information of the edge of the face area.
  • Obtaining the key point information of the edge of the face area includes: obtaining the first group of contour point information below the eyes in the face area, determining a second group of contour point information of the forehead area, and determining the key point information of the edge of the face area based on the first group and the second group of contour point information.
  • Obtaining the outer edge key point information based on the key point information of the edge of the face area includes: determining the relative positional relationship between each key point of the edge of the face area and the center point of the face area, where the relative positional relationship includes the distance between the edge key point and the center point of the face area and the direction of the edge key point relative to that center point; and, based on the relative positional relationship, extending a first edge key point outward from the face area by a preset distance to obtain the outer edge key point corresponding to it, where the first edge key point is any key point of the edge of the face area and the preset distance is related to the distance between the first edge key point and the center point of the face area.
  • The method further includes: determining a deflection parameter of the face area, and determining, based on the deflection parameter, a deformation parameter and a deformation direction corresponding to each deformation area in the at least partial deformation areas, so that each deformation area performs image deformation processing according to its corresponding deformation parameter and deformation direction.
  • Determining the deflection parameter of the face area includes: determining a left edge key point, a right edge key point, and a center key point of any area in the face area, where the area includes at least one of a face area, a nose area, and a mouth area; determining a first distance between the left edge key point and the center key point and a second distance between the right edge key point and the center key point; and determining the deflection parameter of the face area based on the first distance and the second distance.
  • The method further includes: identifying a second target area in the face area, performing feature processing on the second target area, and generating a third image; the second target area includes at least one of the following: an eye area, a nasolabial fold area, a tooth area, and an apple muscle area.
  • An embodiment of the present application also provides an image processing device. The device includes a first determining unit and a deformation processing unit. The first determining unit is configured to obtain a first image, identify the face area in the first image, and determine key point information related to the face area, where the key point information includes key point information of the face area and outer edge key point information, and the area corresponding to the outer edge key point information includes the face area and is larger than the face area; the first determining unit is further configured to determine multiple deformation areas based on the key point information.
  • The deformation processing unit is configured to perform image deformation processing on the face area based on at least a part of the multiple deformation areas to generate a second image.
  • The key point information of the face area includes key point information of the organs of the face area and key point information of the edge of the face area; the edge of the face area corresponds to the contour of the face area. The key point information of an organ includes the center key point information of the organ and/or the contour key point information of the organ.
  • The first determining unit is configured to determine the multiple deformation areas based on any three adjacent key points in the key point information.
  • The first determining unit is configured to determine a first target area to be processed in the face area, and to determine, based on the key point information corresponding to the first target area, the deformation areas corresponding to the first target area from the multiple deformation areas.
  • The deformation processing unit is configured to perform image deformation processing on the deformation areas corresponding to the first target area.
  • In some embodiments, the first target area is an eye area, including a left eye area and/or a right eye area. The first determining unit is configured to determine, based on the key point information corresponding to the left eye area, a first group of deformation areas corresponding to the left eye area from the multiple deformation areas, and/or to determine, based on the key point information corresponding to the right eye area, a second group of deformation areas corresponding to the right eye area.
  • The deformation processing unit is configured to perform image deformation processing on the first group and/or the second group of deformation areas, where the image deformation direction of the first group is opposite to that of the second group, so that the distance between the left eye area and the right eye area is increased or decreased.
  • In some embodiments, the first target area is an eye corner area, including the corner area of the left eye and/or the corner area of the right eye. The first determining unit is configured to determine, based on the key point information corresponding to the left eye corner area, a third group of deformation areas from the multiple deformation areas, and/or to determine, based on the key point information corresponding to the right eye corner area, a fourth group of deformation areas.
  • The deformation processing unit is configured to stretch or compress the third group and/or the fourth group of deformation areas in a first specific direction to adjust the position of the corner of the left eye area and/or the position of the corner of the right eye area.
  • In some embodiments, the first target area is an eye area, including a left eye area and/or a right eye area. The first determining unit is configured to determine, based on the key point information corresponding to the left eye area, a fifth group of deformation areas from the multiple deformation areas, and/or to determine, based on the key point information corresponding to the right eye area, a sixth group of deformation areas.
  • The deformation processing unit is configured to deform the fifth group of deformation areas so that the contour key points of the left eye area rotate relative to the center key point of the left eye area by a first set angle, and/or to deform the sixth group of deformation areas so that the contour key points of the right eye area rotate relative to the center key point of the right eye area by a second set angle.
  • In some embodiments, the first target area is a nose area. The first determining unit is configured to determine, based on key point information corresponding to the nose area, a seventh group of deformation areas corresponding to the nose area from the multiple deformation areas; the deformation processing unit is configured to stretch or compress the seventh group of deformation areas in a second specific direction to lengthen or shorten the nose area.
  • In some embodiments, the first target area is a nose wing area; the deformation processing unit is configured to compress or stretch an eighth group of deformation areas, determined based on the key point information corresponding to the nose wing area, in a third specific direction to narrow or widen the nose area.
  • In some embodiments, the first target area is a chin area or a philtrum area. The first determining unit is configured to determine, based on key point information corresponding to the chin or philtrum area, a ninth group of deformation areas from the multiple deformation areas; the deformation processing unit is configured to compress or stretch the ninth group of deformation areas in a fourth specific direction to shorten or lengthen the chin or philtrum area.
  • In some embodiments, the first target area is a mouth area. The first determining unit is configured to determine, based on key point information corresponding to the mouth area, a tenth group of deformation areas from the multiple deformation areas; the deformation processing unit is configured to compress the tenth group of deformation areas in the direction from the edge of the mouth area toward its center, or to stretch them in the direction from the center of the mouth area toward its edge.
  • The first determining unit is configured to determine, based on the key point information of the edge of the face area, an eleventh group of deformation areas corresponding to the face area from the multiple deformation areas; the deformation processing unit is configured to compress the eleventh group of deformation areas in the direction from the edge of the face area toward its center line, or to stretch them in the direction from the center line toward the edge.
  • In some embodiments, the first target area is a forehead area. The first determining unit is configured to determine, based on key point information corresponding to the forehead area, a twelfth group of deformation areas from the multiple deformation areas; the deformation processing unit is configured to stretch or compress the twelfth group of deformation areas in a fifth specific direction to raise or lower the hairline of the face area.
  • The fifth specific direction is the direction in which a key point of the forehead area points toward the center of the eyebrow closest to that key point, or the direction in which the key point moves away from that eyebrow center.
  • The first determining unit is configured to determine at least three key points of the forehead area, and to determine the key point information of the forehead area based on the at least three key points and the first group of contour point information below the eyes in the face area.
  • A first key point among the at least three key points is located on the midline of the forehead area; a second key point and a third key point among the at least three key points are located on the two sides of the midline.
  • The first determining unit is configured to perform curve fitting on the key points at the two ends of the first group of contour point information below the eyes in the face area and the at least three key points to obtain curve-fitting key point information, and to perform interpolation processing on the curve-fitting key point information based on a curve interpolation algorithm to obtain the key point information corresponding to the forehead area.
  • The first determining unit is configured to detect the face area through a facial key point detection algorithm to obtain key point information of the organs in the face area and key point information of the edge of the face area, and to obtain the outer edge key point information based on the key point information of the edge of the face area.
  • The first determining unit is configured to obtain the first group of contour point information below the eyes in the face area, determine the second group of contour point information corresponding to the forehead area, and determine the key point information of the edge of the face area based on the first group and the second group of contour point information.
  • The first determining unit is configured to determine the relative positional relationship between each key point of the edge of the face area and the center point of the face area, where the relative positional relationship includes the distance between the edge key point and the center point of the face area and the direction of the edge key point relative to that center point; and, based on the relative positional relationship, to extend a first edge key point outward from the face area by a preset distance to obtain the corresponding outer edge key point, where the first edge key point is any key point of the edge of the face area and the preset distance is related to the distance between the first edge key point and the center point of the face area.
  • The device further includes a second determining unit configured to determine a deflection parameter of the face area, and to determine, based on the deflection parameter, the deformation parameter and deformation direction corresponding to each deformation area in the at least partial deformation areas, so that each deformation area performs image deformation processing according to its corresponding deformation parameter and deformation direction.
  • The second determining unit is configured to determine a left edge key point, a right edge key point, and a center key point of any area in the face area, where the area includes at least one of a face area, a nose area, and a mouth area; to determine a first distance between the left edge key point and the center key point and a second distance between the right edge key point and the center key point; and to determine the deflection parameter of the face area based on the first distance and the second distance.
  • The device further includes an image processing unit configured to identify a second target area in the face area, perform feature processing on the second target area, and generate a third image; the second target area includes at least one of the following: an eye area, a nasolabial fold area, a tooth area, and an apple muscle area.
  • An embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of the method described in the embodiments of the present application are implemented.
  • An embodiment of the present application also provides an image processing device, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, the steps of the image processing method described in the embodiments of the present application are implemented.
  • In the technical solutions of the embodiments of the present application, the method includes: obtaining a first image, identifying the face area in the first image, and determining key point information related to the face area, where the key point information includes key point information of the face area and outer edge key point information, and the area corresponding to the outer edge key point information includes the face area and is larger than the face area; determining multiple deformation areas based on the key point information; and performing image deformation processing on the face area based on at least a part of the multiple deformation areas to generate a second image.
  • By determining deformation areas beyond the edge of the face area, the deformation of the face area can adaptively deform the region just outside it; this avoids holes or overlapping pixels in the image that would otherwise be caused by deforming only the face area, and improves the image processing effect.
  • FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the application;
  • FIG. 2 is a schematic diagram of deformation areas in an image processing method according to an embodiment of the application;
  • FIGs. 3a to 3c are schematic diagrams of facial key points in an image processing method according to an embodiment of the application;
  • FIG. 4 is another schematic flowchart of an image processing method according to an embodiment of the application;
  • FIG. 5 is a schematic flowchart of yet another image processing method according to an embodiment of the application;
  • FIG. 6 is a schematic diagram of an application of image processing according to an embodiment of the application;
  • FIG. 7 is a schematic diagram of a composition structure of an image processing device according to an embodiment of the application;
  • FIG. 8 is a schematic diagram of another composition structure of an image processing device according to an embodiment of the application;
  • FIG. 9 is a schematic diagram of yet another composition structure of an image processing device according to an embodiment of the application;
  • FIG. 10 is a schematic diagram of a hardware composition structure of an image processing device according to an embodiment of the application.
  • FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the application; as shown in FIG. 1, the method includes:
  • Step 101: Obtain a first image, identify the face area in the first image, and determine key point information related to the face area. The key point information includes key point information of the face area and outer edge key point information; the area corresponding to the outer edge key point information includes the face area and is larger than the face area.
  • Step 102: Determine multiple deformation areas based on the key point information, and perform image deformation processing on the face area based on at least part of the multiple deformation areas to generate a second image.
  • In this embodiment, the first image contains the face of a target object. The target object may be a real person in the image; in other embodiments, the target object may also be a virtual character, such as a cartoon figure.
  • The embodiments of the present application mainly perform image processing on human faces in images, but image processing may also be performed on the faces of other target objects.
  • In practice, a preset face recognition algorithm may be used to perform face recognition on the first image to identify the face area in the first image.
  • The key point information related to the face area includes the position information of the key points, which may be represented by coordinate information.
  • The key point information of the face area includes the key point information of the organs of the face area and the key point information of the edge of the face area; the edge of the face area corresponds to the contour of the face area, and the outer edge key point information is determined based on the key point information of the edge of the face area. The key point information of an organ includes the center key point information of the organ and/or the contour key point information of the organ.
  • The key points related to the face area therefore include: the key points of the organs contained in the face area, the key points of the edge of the face area, and the outer edge key points.
  • Determining the key point information related to the face area includes: detecting the face area through a facial key point detection algorithm to obtain the key point information of each organ in the face area and the key point information of the edge of the face area, and obtaining the outer edge key point information based on the key point information of the edge of the face area.
  • Obtaining the key point information of the edge of the face area includes: obtaining a first group of contour point information of the area below the eyes in the face area, determining a second group of contour point information of the forehead area, and determining the key point information of the edge of the face area based on the first group and the second group of contour point information.
  • Determining the second group of contour point information of the forehead area includes: determining at least three key points of the forehead area, and determining the key point information of the forehead area based on the at least three key points and the first group of contour point information. The first of the at least three key points is located on the midline of the forehead area; the second and third key points are located on the two sides of the midline.
  • Determining the key point information corresponding to the forehead area based on the at least three key points and the first group of contour point information includes: performing curve fitting on the key points located at the two ends of the first group of contour point information and the at least three key points of the forehead area to obtain curve-fitting key point information, and performing interpolation on the curve-fitting key point information based on a curve interpolation algorithm to obtain the key point information corresponding to the forehead area; a sketch of this step follows.
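  • Illustrative sketch (not part of the original disclosure): the curve fitting and interpolation above can be realized, for example, by fitting a polynomial through the two endpoint contour points and the preset forehead points and then sampling it densely. The quadratic model, point coordinates, and sample count below are assumptions.

```python
import numpy as np

def forehead_keypoints(end_pts, forehead_pts, num_samples=20):
    """Fit a curve through the below-eye contour endpoints and the preset
    forehead key points, then sample it to obtain dense forehead points."""
    pts = np.array(list(end_pts) + list(forehead_pts), dtype=np.float64)
    pts = pts[np.argsort(pts[:, 0])]                  # order by x for fitting
    coeffs = np.polyfit(pts[:, 0], pts[:, 1], deg=2)  # quadratic fit (assumed)
    xs = np.linspace(pts[0, 0], pts[-1, 0], num_samples)
    return np.stack([xs, np.polyval(coeffs, xs)], axis=1)

# Hypothetical coordinates: contour endpoints (key points 0 and 32) plus
# three preset forehead points (key points 1, 2, 3).
dense_forehead = forehead_keypoints([(40, 220), (280, 220)],
                                    [(160, 80), (100, 105), (220, 105)])
```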
  • FIG. 2 is a schematic diagram of the deformation areas in the image processing method of an embodiment of the application, and FIGs. 3a to 3c are schematic diagrams of the facial key points; the following description refers to FIG. 2 and FIGs. 3a to 3c.
  • The key points of the organs included in the face area are key points of at least one of the following organs: eyebrows, eyes, nose, and mouth. The key point information of an organ may include the center key point information of the organ and/or the contour key point information of the organ.
  • For example, taking the eye as the organ, its key point information may include the key point information of the center of the eye and the key point information of the contour of the eye; taking the eyebrow as the organ, its key point information may include the contour key point information of the eyebrow.
  • The key point information of each organ in the face area, and the first group of contour points below the eyes, are obtained through the facial key point detection algorithm. The first group of contour points is shown as key point 0 through key point 32 in FIG. 3a; in FIG. 3b, the solid dots indicate the key points of the first group of contours.
  • In practical applications, the facial key point detection algorithm may obtain only a small number M1 of contour points in the area below the eyes in the face area, for example 5 contour points; M2 further contour points are then obtained by curve interpolation over the M1 contour points, and the M1 and M2 contour points together serve as the first group of contour point information. A densification sketch follows.
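  • Illustrative sketch (not part of the original disclosure): the densification of the M1 detected contour points into M1 + M2 points can be done, for example, by chord-length parameterized linear interpolation; the patent does not name a specific curve interpolation algorithm, and the point values below are hypothetical.

```python
import numpy as np

def densify_contour(points, total):
    """Interpolate extra points between M1 sparse contour points so that
    M1 + M2 = `total` points describe the below-eye contour."""
    pts = np.asarray(points, dtype=np.float64)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)  # chord lengths
    t = np.concatenate([[0.0], np.cumsum(seg)])         # cumulative parameter
    t_new = np.linspace(0.0, t[-1], total)
    return np.stack([np.interp(t_new, t, pts[:, 0]),
                     np.interp(t_new, t, pts[:, 1])], axis=1)

# Five detected contour points densified to 33 (key points 0..32).
contour = densify_contour([(40, 220), (90, 260), (160, 275),
                           (230, 260), (280, 220)], total=33)
```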
  • The facial key point detection algorithm may use any face recognition algorithm. In a further aspect, the key point information of the forehead area is obtained.
  • At least three key points of the forehead area may be determined based on preset parameters. Taking three key points as an example: key point 1 is located on the midline of the forehead area and is marked as the first key point, while key point 2 and key point 3 are located on the two sides of key point 1. Curve fitting is performed based on key point 4 and key point 5 located at the two ends of the first group of contour point information (for example, key point 0 and key point 32 in FIG. 3a), together with key point 1, key point 2, and key point 3, to obtain curve-fitting key point information; interpolation is then performed on the curve-fitting key point information based on a curve interpolation algorithm to obtain the second group of contour point information matching the forehead area.
  • The first group of contour point information and the second group of contour point information are combined to form the key point information of the edge of the face area; the key points corresponding to this edge information are located at, and cover, all positions of the edge of the face area.
  • Obtaining the outer edge key point information based on the key point information of the edge of the face area includes: determining the relative positional relationship between each key point of the edge of the face area and the center point of the face area, where the relative positional relationship includes the distance between the edge key point and the center point of the face area and the direction of the edge key point relative to that center point. Based on the relative positional relationship, a first edge key point is extended outward from the face area by a preset distance to obtain the outer edge key point corresponding to it; the first edge key point is any key point of the edge of the face area, and the preset distance is related to the distance between the first edge key point and the center point of the face area: the greater that distance, the greater the extension distance.
  • In practice, a reference point other than the center point of the face area may also be selected; for example, the key point corresponding to the tip of the nose may be used, which is not limited in this embodiment (a sketch of the outward extension follows).
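  • Illustrative sketch (not part of the original disclosure): the outward extension of edge key points can be expressed as pushing each point away from the chosen reference point along its radial direction, with an extension proportional to its distance from the reference; the proportionality ratio below is an assumption.

```python
import numpy as np

def outer_edge_keypoints(edge_pts, ref_pt, ratio=0.1):
    """Extend each face-edge key point outward from the reference point
    (face center or, e.g., the nose tip). The extension distance grows
    with the point's distance from the reference, matching 'the greater
    the distance, the greater the preset extension distance'."""
    edge = np.asarray(edge_pts, dtype=np.float64)
    vec = edge - np.asarray(ref_pt, dtype=np.float64)  # outward direction
    return edge + vec * ratio      # extension = ratio * distance, along vec

outer = outer_edge_keypoints([(40, 220), (160, 275), (280, 220)],
                             ref_pt=(160, 180))
```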
  • The key points related to the face area obtained in this embodiment include not only the key points of the face area but also the outer edge key points; the outer edge key points are located outside the face area, and the area corresponding to the outer edge key points includes the face area and is larger than the face area.
  • The number of outer edge key points may be the same as the number of key points of the edge of the face area, that is, each outer edge key point is determined from a corresponding edge key point. In other embodiments, the number of outer edge key points may differ from the number of edge key points; for example, it may be greater: N1 outer edge key points are first determined, N2 further outer edge key points are then obtained by curve interpolation, and the information of the N1 and N2 outer edge key points together serves as the outer edge key point information of this embodiment.
  • The purpose of determining the outer edge key point information is to adaptively deform, during image deformation processing (in particular, processing based on the triangular deformation areas shown in FIG. 2), the triangular deformation areas formed by the outer edge key points and the edge key points of the face area. In other words, the transition region associated with the face area (the region between the outer edge key points and the edge key points of the face area) is adaptively deformed, so that a better image deformation effect is obtained and the facial fusion appears more natural.
  • When the number of outer edge key points is greater than the number of edge key points of the face area, the triangular deformation areas in the transition region are smaller, which improves the accuracy of the deformation processing and yields a better deformation effect.
  • In practice, facial key point recognition typically identifies only relatively sparse key points of the organs in the face; the embodiment of the present application adds key points by interpolation, for example a few key points in the eyebrow area. Moreover, existing facial key point recognition identifies only the key points below the eyes, whereas the key point recognition in this embodiment adds key points in the forehead area corresponding to the forehead or hairline, so that the forehead area or hairline can be adjusted based on these key points. In one example, the total number of key points obtained is 106.
  • In this embodiment, determining multiple deformation areas based on the key point information includes: determining the deformation areas based on any three adjacent key points in the key point information, as shown in FIG. 2; image deformation processing is then performed on the target area based on the determined triangular deformation areas. A triangulation sketch follows.
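  • Illustrative sketch (not part of the original disclosure): one common way to obtain triangles over a key point set is Delaunay triangulation, for example via OpenCV's Subdiv2D. The patent only specifies "any three adjacent key points", so treating the deformation areas as a Delaunay triangulation is an assumption.

```python
import cv2
import numpy as np

def triangulate(points, size):
    """Delaunay-triangulate the key points (face plus outer edge) into
    triangular deformation areas; returns index triples into `points`."""
    h, w = size
    subdiv = cv2.Subdiv2D((0, 0, w, h))
    for x, y in points:
        subdiv.insert((float(x), float(y)))
    pts = np.asarray(points, dtype=np.float32)
    triangles = []
    for x1, y1, x2, y2, x3, y3 in subdiv.getTriangleList():
        verts = ((x1, y1), (x2, y2), (x3, y3))
        if not all(0 <= x < w and 0 <= y < h for x, y in verts):
            continue  # skip triangles touching Subdiv2D's virtual corners
        triangles.append(tuple(
            int(np.argmin(np.linalg.norm(pts - v, axis=1))) for v in verts))
    return triangles
```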
  • The triangular deformation areas corresponding to the outer edge region can be determined based on the outer edge key points and the contour key points of the face area; that is, the deformation areas in this embodiment include the deformation areas of the transition region outside the face area shown in FIG. 2. Therefore, when deformation processing is performed on the deformation areas within the face area, adaptive deformation processing is also performed on the deformation areas outside it, which avoids holes appearing in the image due to compression of the face area, or overlapping pixels due to stretching of the face area.
  • In other words, by determining deformation areas beyond the edge of the face area, the transition region can be adjusted adaptively while the face area is deformed, avoiding holes or pixel overlap in the image and improving the image processing effect. A per-triangle warping sketch follows.
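  • Illustrative sketch (not part of the original disclosure): the deformation itself is commonly realized by warping each triangle from its source vertices to its displaced vertices with an affine transform; the cropping and masking strategy below is one possible implementation, shown for a single triangle.

```python
import cv2
import numpy as np

def warp_triangle(src_img, dst_img, tri_src, tri_dst):
    """Affine-warp one triangular deformation area from its original
    vertices (tri_src) to its displaced vertices (tri_dst) in dst_img."""
    r1 = cv2.boundingRect(np.float32([tri_src]))
    r2 = cv2.boundingRect(np.float32([tri_dst]))
    t1 = np.float32([(x - r1[0], y - r1[1]) for x, y in tri_src])
    t2 = np.float32([(x - r2[0], y - r2[1]) for x, y in tri_dst])
    m = cv2.getAffineTransform(t1, t2)
    patch = src_img[r1[1]:r1[1] + r1[3], r1[0]:r1[0] + r1[2]]
    warped = cv2.warpAffine(patch, m, (r2[2], r2[3]),
                            flags=cv2.INTER_LINEAR,
                            borderMode=cv2.BORDER_REFLECT_101)
    mask = np.zeros((r2[3], r2[2]), dtype=np.uint8)
    cv2.fillConvexPoly(mask, np.int32(t2), 255)
    roi = dst_img[r2[1]:r2[1] + r2[3], r2[0]:r2[0] + r2[2]]
    roi[mask > 0] = warped[mask > 0]   # paste only the triangle's pixels
```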
  • In this embodiment, performing image deformation processing on the face area based on at least a part of the multiple deformation areas includes: determining a first target area to be processed in the face area; determining, based on the key point information corresponding to the first target area, the deformation areas corresponding to the first target area from the multiple deformation areas; and performing image deformation processing on those deformation areas.
  • In practice, the target area to be deformed in the face area is determined first; the target area includes at least one of the following: an eye area, a nose area, a mouth area, a chin area, a philtrum area, a forehead area, a face area, and so on. The deformation areas corresponding to each target area are determined, the deformation of the target area is realized by deforming those areas, and the second image is generated.
  • Determining the deformation areas corresponding to a given target area includes: determining the key point information corresponding to the target area, and selecting from the multiple deformation areas all deformation areas that contain those key points. For example, if the target area is the eyebrow area, all key points corresponding to the eyebrow area are determined, and every deformation area including any of those key points is taken as a deformation area to be deformed (see the filter sketch below).
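  • Illustrative sketch (not part of the original disclosure): selecting all deformation areas that contain the key points of a target area is a simple filter over the triangle index triples; the eyebrow index range below is a hypothetical face-model convention.

```python
def deformation_areas_for(target_kpt_indices, triangles):
    """Return every triangular deformation area (index triple) that uses
    at least one key point of the target area."""
    target = set(target_kpt_indices)
    return [tri for tri in triangles if target & set(tri)]

# Hypothetical example: if the eyebrow key points occupy indices 33..42,
# with `triangles` as returned by the triangulation sketch above:
# eyebrow_areas = deformation_areas_for(range(33, 43), triangles)
```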
  • In some embodiments, the first target area is an eye area, including a left eye area and/or a right eye area. Determining the deformation areas corresponding to the first target area based on the corresponding key point information includes: determining, based on the key point information corresponding to the left eye area, a first group of deformation areas from the multiple deformation areas, and/or determining, based on the key point information corresponding to the right eye area, a second group of deformation areas. Performing image deformation processing includes: deforming the first group and/or the second group of deformation areas, where the image deformation direction of the first group is opposite to that of the second group, so that the distance between the left eye area and the right eye area is increased or decreased.
  • The first group and the second group consist of all deformation areas that include the key points of the respective eye areas. This embodiment adjusts the position of the eye areas in the face area: if the face area includes both the left eye area and the right eye area, this amounts to adjusting the distance between the left eye and the right eye; if the face area includes only one eye area, for example in a side-face scene, it amounts to adjusting the position of that eye area within the face area.
  • In practice, the first group and the second group of deformation areas are deformed in opposite image deformation directions. For example, the line between the center point of the left eye and the center point of the right eye is determined, together with its midpoint; moving the first group and the second group of deformation areas toward the midpoint of the line decreases the distance between the left eye area and the right eye area, while moving them away from the midpoint increases it (a sketch follows).
  • In some embodiments, the first target area is an eye corner area, including the corner area of the left eye and/or the corner area of the right eye. Determining the deformation areas corresponding to the first target area from the multiple deformation areas includes: determining, based on the key point information corresponding to the left eye corner area, a third group of deformation areas, and/or determining, based on the key point information corresponding to the right eye corner area, a fourth group of deformation areas. Performing image deformation processing includes: stretching or compressing the third group and/or the fourth group of deformation areas in a first specific direction to adjust the position of the corner of the left eye area and/or the position of the corner of the right eye area.
  • The third group consists of all deformation areas that include the key points of the left eye corner, and the fourth group of all deformation areas that include the key points of the right eye corner. The eye corner may be the inner corner and/or the outer corner of an eye area; inner and outer are relative concepts. Taking the line between the center point of the left eye and the center point of the right eye as reference, the inner corner is the corner close to the midpoint of that line, and the outer corner is the corner far from it.
  • This embodiment adjusts the position of the eye corners in the face area, which can also be understood as adjusting the size of the eye areas. In practice, the key points of the inner or outer corner to be adjusted are determined, the deformation areas containing those key points are selected, and the deformation areas are moved toward or away from the midpoint of the above line; that is, the first specific direction is the direction toward the midpoint of the line, or the direction away from it.
  • In some embodiments, the first target area is an eye area, including a left eye area and/or a right eye area. Determining the deformation areas corresponding to the first target area includes: determining, based on the key point information corresponding to the left eye area, a fifth group of deformation areas from the multiple deformation areas, and/or determining, based on the key point information corresponding to the right eye area, a sixth group of deformation areas. Performing image deformation processing includes: deforming the fifth group of deformation areas so that the contour key points of the left eye area rotate relative to the center key point of the left eye area by a first set angle, and/or deforming the sixth group of deformation areas so that the contour key points of the right eye area rotate relative to the center key point of the right eye area by a second set angle.
  • The fifth group consists of all deformation areas that include the key points of the left eye area, and the sixth group of all deformation areas that include the key points of the right eye area. This embodiment adjusts the angle of the eye areas, that is, the relative angle between the eyes and the other facial organs, such as between the eyes and the nose. The adjustment takes the center point of the eye as the center of rotation and rotates a specific angle clockwise or counterclockwise.
  • In practice, the deformation areas corresponding to an eye area may be deformed through a preset rotation matrix, so that the contour key points of the eye area rotate relative to its center key point (see the sketch below). The rotation angle of the left eye contour key points relative to the left eye center key point satisfies the first set angle, and the rotation angle of the right eye contour key points relative to the right eye center key point satisfies the second set angle; the rotation directions of the left and right eye areas may be opposite, and the values of the first and second set angles may be the same or different.
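  • Illustrative sketch (not part of the original disclosure): the rotation of the eye contour key points about the eye center key point corresponds to applying a standard 2D rotation matrix; the angle values in the comment are examples.

```python
import numpy as np

def rotate_eye_contour(kpts, contour_idx, center_idx, angle_deg):
    """Rotate the eye contour key points about the eye center key point by
    the set angle (positive = counterclockwise in a y-up frame)."""
    pts = np.asarray(kpts, dtype=np.float64).copy()
    a = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(a), -np.sin(a)],
                    [np.sin(a),  np.cos(a)]])   # preset rotation matrix
    c = pts[center_idx]
    idx = list(contour_idx)
    pts[idx] = (pts[idx] - c) @ rot.T + c       # rotate about the center
    return pts

# E.g. rotate the left eye contour by +5 degrees and the right eye contour
# by -5 degrees to realize opposite rotation directions.
```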
  • The first target area is the nose area. Determining, based on the key point information corresponding to the first target area, the deformation areas corresponding to the first target area from the plurality of deformation areas includes: determining a seventh group of deformation areas corresponding to the nose area from the plurality of deformation areas based on the key point information of the nose area. Performing image deformation processing on the deformation areas corresponding to the first target area includes: stretching or compressing the seventh group of deformation areas along a second specific direction to lengthen or shorten the nose area.
  • the seventh group of deformation regions is all deformation regions including key points of the nose.
  • This embodiment is used to adjust the length or height of the nose region, which can be understood as adjusting the length of the nose region or adjusting the height of the nose.
  • the seventh group of deformed regions can be stretched or compressed toward the second specific direction to lengthen or shorten the nose area.
  • As some implementations, the second specific direction is along the length direction of the face area. For example, the straight line formed by the midpoint of the line connecting the two eyebrow centers, the center point of the nose, and the center point of the lips can be taken as the length direction of the face area.
  • Stretching the seventh group of deformation areas along the length direction from the center of the nose area toward the outside of the nose area lengthens the nose area; compressing the seventh group of deformation areas along the length direction from the outside of the nose area toward its center shortens the nose area.
  • As another implementation, the second specific direction may also be the direction perpendicular to the face area and pointing away from it, in which case the height of the nose area is adjusted along the second specific direction.
  • In practice, this implementation is suitable for the scene where the face in the image is a side face: the deflection parameter of the face area is determined, and the second specific direction is determined based on that deflection parameter, that is, the direction corresponding to the height of the nose is determined based on the deflection of the face; the seventh group of deformation areas corresponding to the nose area is then deformed along the second specific direction to increase or reduce the height of the nose.
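  • The following sketch shows one way to stretch or compress nose key points along the face's length direction, taking that direction as the unit vector from the midpoint between the eyebrow centers through the lip center, as described above; the helper name and the scale parameter are illustrative assumptions.

```python
import numpy as np

def stretch_nose_keypoints(nose_points, nose_center, brow_midpoint, lip_center, scale=1.1):
    """Rescale nose key points along the face's length direction:
    scale > 1 lengthens the nose area, scale < 1 shortens it."""
    axis = np.asarray(lip_center, float) - np.asarray(brow_midpoint, float)
    axis /= np.linalg.norm(axis)                            # unit vector of the length direction
    pts = np.asarray(nose_points, float)
    along = (pts - np.asarray(nose_center, float)) @ axis   # signed components along the axis
    # Only the component along the length direction is rescaled.
    return pts + np.outer(along * (scale - 1.0), axis)

stretched = stretch_nose_keypoints([[150.0, 210.0], [150.0, 250.0]], (150, 230),
                                   brow_midpoint=(150, 180), lip_center=(150, 290), scale=1.1)
```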
  • The first target area is the nasal alar (nose wing) area. Determining the deformation areas corresponding to the first target area from the plurality of deformation areas based on the corresponding key point information includes: determining an eighth group of deformation areas corresponding to the nasal alar area from the plurality of deformation areas based on the key point information corresponding to the nasal alar area. Performing image deformation processing on the deformation areas corresponding to the first target area includes: compressing or stretching the eighth group of deformation areas along a third specific direction to narrow or widen the nasal alar area.
  • the eighth group of deformed regions is all the deformed regions that include the key points corresponding to the nasal alar region.
  • The nasal alar region refers to the areas on both sides of the nose tip.
  • This embodiment is used to adjust the width of the nasal alar region, which can be understood as adjusting the width of the nose wings.
  • In practice, the key points corresponding to the nasal alar area can be determined, the deformation areas containing those key points can be determined, and the deformation areas can be compressed or stretched along the third specific direction to narrow or widen the nasal alar area; the third specific direction is the width direction of the face area, which is perpendicular to the length direction of the face area.
  • The first target area is the chin area or the philtrum area. Determining the deformation areas corresponding to the first target area from the plurality of deformation areas based on the corresponding key point information includes: determining a ninth group of deformation areas corresponding to the chin area or the philtrum area from the plurality of deformation areas based on the key point information corresponding to the chin area or the philtrum area. Performing image deformation processing on the deformation areas corresponding to the first target area includes: compressing or stretching the ninth group of deformation areas along a fourth specific direction to shorten or lengthen the chin area or the philtrum area.
  • The ninth group of deformation areas consists of all deformation areas that include key points of the chin or the philtrum.
  • This embodiment is used to adjust the length of the chin area or the philtrum area. The chin area refers to the lower jaw area; the philtrum area refers to the area between the nose and the mouth.
  • In practice, the ninth group of deformation areas can be compressed or stretched along the fourth specific direction to shorten or lengthen the chin area or the philtrum area, where the fourth specific direction is along the length direction of the face area.
  • The first target area is the mouth area. Determining the deformation areas corresponding to the first target area from the plurality of deformation areas based on the corresponding key point information includes: determining a tenth group of deformation areas corresponding to the mouth area from the plurality of deformation areas based on the key point information corresponding to the mouth area. Performing image deformation processing on the deformation areas corresponding to the first target area includes: compressing the tenth group of deformation areas in the direction from the edge of the mouth area toward its center, or stretching the tenth group of deformation areas in the direction from the center of the mouth area toward its edge.
  • the tenth group of deformed regions is all deformed regions that include key points of the mouth.
  • This embodiment is used to adjust the size of the mouth area, which can be understood as enlarging or reducing the mouth area.
  • In practice, the key points corresponding to the mouth area can be determined, and all deformation areas containing those key points can be taken as the tenth group of deformation areas.
  • The tenth group of deformation areas is then compressed in the direction from the edge of the mouth area toward its center, or stretched in the direction from the center of the mouth area toward its edge.
  • Determining the deformation areas corresponding to the first target area from the plurality of deformation areas based on the corresponding key point information includes: determining an eleventh group of deformation areas corresponding to the face area from the plurality of deformation areas based on the key point information of the edge of the face area. Performing image deformation processing on the deformation areas corresponding to the first target area includes: compressing the eleventh group of deformation areas in the direction from the edge of the face area toward the center line of the face area, or stretching the eleventh group of deformation areas in the direction from the center line of the face area toward its edge.
  • the eleventh group of deformed areas are all deformed areas including the key points of the edge of the face area.
  • The key points of the edge of the face area refer to at least some of the first group of contour key points and/or the second group of contour key points shown in Fig. 3b.
  • this embodiment is used to adjust the width of the face area, which can be understood as "face thinning" or "fat face” processing.
  • In practice, the eleventh group of deformation areas can be compressed in the direction from the edge of the face area toward the center line of the face area, or stretched in the direction from the center line of the face area toward its edge.
  • Illustratively, the center line of the face area includes the midpoint of the face area (the key point corresponding to the nose tip), so the eleventh group of deformation areas can be compressed in the direction from the edge of the face area toward the midpoint of the face area, or stretched in the direction from the midpoint of the face area toward its edge.
  • the deformation ratios of the deformation areas corresponding to the key points at different positions are different.
  • Illustratively, the deformation ratios of the deformation areas corresponding to the key points contained in the cheek area are the largest, and the deformation ratios of the deformation areas corresponding to other areas may decrease gradually.
  • For example, the deformation ratio of the deformation areas corresponding to key points near key point 0, key point 16, and key point 32 is the smallest, and the deformation ratio of the deformation areas corresponding to key points near key point 8 and key point 24 is the largest, so that the deformation effect (such as a face-thinning or face-fattening effect) is more natural.
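  • A minimal sketch of such a gradual falloff, assuming 33 contour key points (0 to 32) and using a sine profile as one illustrative way to make the ratio peak at key points 8 and 24 and bottom out at key points 0, 16, and 32:

```python
import numpy as np

def contour_deformation_ratios(num_points=33, max_ratio=1.0, min_ratio=0.1):
    """Assign a deformation ratio to each face-contour key point (0..32):
    smallest near points 0, 16, and 32, largest near points 8 and 24."""
    idx = np.arange(num_points)
    profile = np.abs(np.sin(np.pi * idx / 16.0))  # 0 at i = 0, 16, 32; 1 at i = 8, 24
    return min_ratio + (max_ratio - min_ratio) * profile

ratios = contour_deformation_ratios()
# ratios[8] and ratios[24] are the largest; ratios[0], ratios[16], ratios[32] the smallest.
```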
  • The first target area is the forehead area. Determining the deformation areas corresponding to the first target area from the plurality of deformation areas based on the corresponding key point information includes: determining a twelfth group of deformation areas corresponding to the forehead area from the plurality of deformation areas based on the key point information of the forehead area. Performing image deformation processing on the deformation areas corresponding to the first target area includes: stretching or compressing the twelfth group of deformation areas along a fifth specific direction to raise or lower the hairline of the face area. The fifth specific direction is the direction in which a key point of the forehead area points toward the eyebrow center closest to that key point, or the direction in which the key point points away from that closest eyebrow center.
  • the twelfth group of deformed areas is all deformed areas including the key points of the forehead area, and the method for determining the key points of the forehead area can be referred to the foregoing, which will not be repeated here.
  • This embodiment is used to adjust the width of the forehead area, which can be understood as adjusting the relative height of the hairline in the face area.
  • In practice, the key points of the forehead area can be determined, and all deformation areas containing those key points can be taken as the twelfth group of deformation areas; for example, the triangular deformation areas corresponding to the forehead area shown in Fig. 2 and the triangular deformation areas corresponding to the outer edge area beyond the forehead serve as the twelfth group of deformation areas in this embodiment. The twelfth group of deformation areas is then stretched or compressed along the fifth specific direction to raise or lower the hairline of the face area.
  • If the face in the image includes two eyebrows, then for a given key point of the forehead area, the eyebrow center closest to that key point is determined first, the direction from the key point toward that eyebrow center is determined, and that direction is taken as the fifth specific direction.
  • For the three key points included in a deformation area, the fifth specific direction corresponding to each key point is determined separately, and the deformation area is deformed accordingly; specifically, each of the three key points of the deformation area is moved along its own fifth specific direction.
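  • A sketch of computing the per-key-point fifth specific direction, assuming two known eyebrow-center coordinates; the raise_hairline flag and the choice to flip the vector when raising the hairline are illustrative assumptions.

```python
import numpy as np

def hairline_directions(forehead_points, left_brow_center, right_brow_center, raise_hairline=True):
    """For each forehead key point, return the unit vector of its fifth specific
    direction: toward the closest eyebrow center, or away from it."""
    brows = np.array([left_brow_center, right_brow_center], dtype=float)
    dirs = []
    for p in np.asarray(forehead_points, float):
        nearest = brows[np.argmin(np.linalg.norm(brows - p, axis=1))]  # closest eyebrow center
        v = (nearest - p) / np.linalg.norm(nearest - p)                # unit vector toward it
        dirs.append(-v if raise_hairline else v)                       # flip to move away
    return np.array(dirs)

directions = hairline_directions([[130, 150], [170, 150]], (135, 185), (165, 185))
```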
  • The image processing method of this embodiment can achieve the following:
  • 1. Hairline adjustment: the position of the hairline can be adjusted to raise or lower the hairline.
  • 2. Nose length adjustment: the length of the nose can be adjusted to lengthen or shorten the nose.
  • 3. Nasal alar adjustment: the width of the nose wings can be adjusted.
  • 4. Philtrum adjustment: the length of the philtrum area can be adjusted to lengthen or shorten it.
  • 5. Mouth shape adjustment: the size of the mouth can be adjusted.
  • 6. Chin adjustment: the length of the chin area can be adjusted to lengthen or shorten it.
  • 7. Face shape adjustment: the facial contour can be narrowed or widened, e.g., "face thinning".
  • 8. Eye distance adjustment: the distance between the left eye and the right eye can be adjusted.
  • 9. Eye angle adjustment: the relative angle of the eyes can be adjusted.
  • 10. Eye corner adjustment: the position of the corners of the eyes can be adjusted to "open the eye corners" and enlarge the eyes.
  • 11. Nose height adjustment in the side-face scene: "rhinoplasty" of the profile can be realized.
  • Fig. 4 is a schematic flowchart of another image processing method according to an embodiment of the application; as shown in Fig. 4, the method includes:
  • Step 201: Obtain a first image, identify the face area in the first image, and determine key point information related to the face area; the key point information includes key point information of the face area and outer edge key point information, and the area corresponding to the outer edge key point information includes the face area and is larger than the face area;
  • Step 202: Determine multiple deformation areas based on the key point information;
  • Step 203: Determine the deflection parameter of the face area, and determine the deformation parameter and deformation direction corresponding to each deformation area in at least part of the deformation areas based on the deflection parameter;
  • Step 204: Perform image deformation processing on the face area based on the at least part of the deformation areas and the deformation parameter and deformation direction corresponding to each deformation area, to generate a second image.
  • For step 201 and step 202 in this embodiment, reference may be made to the description of step 101 and step 102 in the foregoing embodiment, which will not be repeated here.
  • The foregoing embodiments mainly address the case where the face area is not deflected. For the case where the face area is deflected, i.e., the side-face scene, it is necessary to first determine the deflection parameter of the face area, then determine the deformation parameter and deformation direction corresponding to each deformation area to be deformed according to the deflection parameter, and perform the deformation according to the determined deformation parameters and deformation directions.
  • In some embodiments, determining the deflection parameter of the face area includes: determining the left edge key point, the right edge key point, and the center key point of any area in the face area, where the area includes at least one of the following: the face area, the nose area, the mouth area; determining a first distance between the left edge key point and the center key point, and a second distance between the right edge key point and the center key point; and determining the deflection parameter of the face area based on the first distance and the second distance.
  • Taking the nose area as an example: determine the center point of the nose (such as the nose tip), the leftmost key point of the nose wing, and the rightmost key point of the nose wing; calculate the first distance between the leftmost key point of the nose wing and the center point of the nose, and the second distance between the rightmost key point of the nose wing and the center point of the nose; and determine the deflection parameter of the face area based on the first distance and the second distance.
  • the deformation direction of the first target area in the foregoing embodiment is further adjusted based on the deflection parameter.
  • The deformation parameters of the left and right areas of the nose are different: if the first distance is greater than the second distance, the deformation parameter of the left area of the nose is greater than that of the right area. As an example, the movement ratio of the leftmost key point of the nose wing can be the first distance divided by the distance between the rightmost key point of the nose wing and the center point of the nose, limited to between 0 and 1; the movement ratio of the rightmost key point of the nose wing can be the second distance divided by the distance between the leftmost key point of the nose wing and the center point of the nose, likewise limited to between 0 and 1. In this way, the movement distance of the key points on both sides of the nose changes with the deflection of the face area.
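  • A sketch of this deflection estimate, assuming the two nose-wing key points and the nose center point are known; the exact ratio definitions and the (d1 - d2)/(d1 + d2) deflection formula are illustrative assumptions, not the patent's prescribed expressions.

```python
import numpy as np

def nose_deflection_ratios(left_alar, right_alar, nose_center):
    """Estimate a signed deflection (yaw) parameter from the nose-wing key
    points and derive per-side movement ratios, each clipped to [0, 1]."""
    center = np.asarray(nose_center, float)
    d1 = np.linalg.norm(np.asarray(left_alar, float) - center)   # first distance
    d2 = np.linalg.norm(np.asarray(right_alar, float) - center)  # second distance
    deflection = (d1 - d2) / (d1 + d2)   # 0 for a frontal face, signed for a side face
    left_ratio = float(np.clip(d1 / d2, 0.0, 1.0))
    right_ratio = float(np.clip(d2 / d1, 0.0, 1.0))
    return deflection, left_ratio, right_ratio

deflection, left_ratio, right_ratio = nose_deflection_ratios((140, 235), (162, 235), (150, 230))
```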
  • With the technical solution of this embodiment, the deformation areas at the outer edge of the face area are determined, so that during the deformation of the face area the outer edge is deformed adaptively; this avoids holes or pixel overlap in the image caused by the deformation of the face area and improves the image processing effect.
  • In addition, deformation processing of the forehead area of the face area is realized, and the height of the nose can be adjusted in the side-face scene.
  • FIG. 5 is a schematic flowchart of another image processing method according to an embodiment of the application; as shown in FIG. 5, the method includes:
  • Step 301: Obtain a first image, identify the face area in the first image, and determine key point information related to the face area; the key point information includes key point information of the face area and outer edge key point information, and the area corresponding to the outer edge key point information includes the face area and is larger than the face area;
  • Step 302: Determine multiple deformation areas based on the key point information, and perform image deformation processing on the face area based on at least some of the multiple deformation areas to generate a second image;
  • Step 303: Identify a second target area in the face area, and perform feature processing on the second target area to generate a third image; the second target area includes at least one of the following: the eye-periphery area, the nasolabial fold area, the tooth area, the eye area, and the apple muscle area.
  • For step 301 and step 302 in this embodiment, reference may be made to the description of step 101 and step 102 in the foregoing embodiment; for brevity, it will not be repeated here.
  • In addition to performing image deformation processing on the face area based on the deformation areas, this embodiment can also perform feature processing on the image.
  • the feature processing of the image may be processing the pixels in the image.
  • the processing method may include at least one of the following: noise reduction processing, Gaussian blur processing, high and low frequency processing, mask processing, and so on.
  • When the second target area is the eye-periphery area, the processing of the second target area may specifically be removing dark circles; when the second target area is the nasolabial fold area, it may specifically be removing nasolabial folds; when the second target area is the tooth area, it may specifically be whitening the teeth; when the second target area is the eye area, it may specifically be enhancing the brightness of the eye area; and when the second target area is the apple muscle area, it may be enlarging or reducing the apple muscle area and/or adjusting its brightness, and so on.
  • For example, Gaussian blur processing can be performed on the second target area, which is equivalent to performing skin-smoothing processing on the second target area.
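  • A minimal skin-smoothing sketch using OpenCV, assuming the second target area is given as a float mask in [0, 1]; the kernel size and sigma are illustrative choices.

```python
import cv2
import numpy as np

def smooth_region(image, mask, ksize=(15, 15), sigma=5.0):
    """Blend a Gaussian-blurred copy of the image back into the original
    inside the masked second target area."""
    blurred = cv2.GaussianBlur(image, ksize, sigma)
    mask3 = np.dstack([mask] * 3).astype(np.float32)  # broadcast the mask to 3 channels
    out = image.astype(np.float32) * (1.0 - mask3) + blurred.astype(np.float32) * mask3
    return np.clip(out, 0, 255).astype(np.uint8)
```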
  • In the mask processing method, the second target area is covered with a mask matching the second target area, as shown in Fig. 6, which shows an example of processing the second target area.
  • Taking the second target area being the eye-periphery area as an example, the eye area is determined first, and the eye-periphery area, i.e., the second target area, is determined based on the determined eye area.
  • A mask corresponding to the eye-periphery area can be preset, and the mask is then overlaid on the eye-periphery area to generate the third image.
  • The processing of the nasolabial fold area is similar: the nasolabial fold area is determined first, and a preset mask corresponding to the nasolabial fold area is overlaid on it to generate the third image.
  • For the tooth area, the target parameter characterizing the replacement color is determined through a preset color look-up table; the tooth area is determined, and the color parameter corresponding to the tooth area is adjusted to the target parameter, thereby adjusting the tooth color.
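  • A sketch of look-up-table-based tooth-color adjustment; the default gamma-style curve below is an illustrative stand-in for the patent's preset color look-up table, and the tooth mask is assumed to be given.

```python
import cv2
import numpy as np

def whiten_teeth(image, tooth_mask, lut=None):
    """Apply a 256-entry color look-up table inside the tooth area only."""
    if lut is None:
        x = np.arange(256, dtype=np.float32) / 255.0
        lut = np.clip(np.power(x, 0.7) * 255.0, 0, 255).astype(np.uint8)  # brighten midtones
    mapped = cv2.LUT(image, lut)                  # map every pixel through the table
    out = image.copy()
    out[tooth_mask > 0] = mapped[tooth_mask > 0]  # keep the change inside the tooth area
    return out
```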
  • As for the processing of the eye area, it may specifically be increasing the brightness of the eye area.
  • Fig. 7 is a schematic diagram of the composition structure of the image processing device of the embodiment of the application; as shown in Fig. 7, the device includes a first determining unit 41 and a deformation processing unit 42, wherein:
  • The first determining unit 41 is configured to obtain a first image, identify the face area in the first image, and determine key point information related to the face area, the key point information including key point information of the face area and outer edge key point information, where the area corresponding to the outer edge key point information includes the face area and is larger than the face area; it is further configured to determine multiple deformation areas based on the key point information;
  • the deformation processing unit 42 is configured to perform image deformation processing on the face region based on at least a part of the deformation regions among the plurality of deformation regions, and generate a second image.
  • The key point information of the face area includes key point information of the organs of the face area and key point information of the edge of the face area; the edge of the face area corresponds to the contour of the face area; the key point information of an organ includes the center key point information of the organ and/or the contour key point information of the organ.
  • the first determining unit 41 is configured to determine multiple deformation regions based on any three adjacent key points in the key point information.
  • the first determining unit 41 is configured to determine the first target area to be processed in the face area; determine from multiple deformed areas based on key point information corresponding to the first target area A deformed area corresponding to the first target area;
  • the deformation processing unit 42 is configured to perform image deformation processing on the deformation area corresponding to the first target area.
  • the first target area is an eye area; the eye area includes a left eye area and/or a right eye area;
  • the first determining unit 41 is configured to determine a first group of deformed regions corresponding to the left-eye region from the plurality of deformed regions based on the key point information corresponding to the left-eye region, and/or based on the key point information corresponding to the right-eye region , Determine the second group of deformed regions corresponding to the right eye region from the plurality of deformed regions;
  • The deformation processing unit 42 is configured to perform image deformation processing on the first group of deformation areas and/or the second group of deformation areas, where the image deformation direction of the first group is opposite to that of the second group, so as to increase or decrease the distance between the left eye area and the right eye area.
  • the first target area is an eye corner area;
  • the eye corner area includes the left eye corner area and/or the right eye corner area;
  • The first determining unit 41 is configured to determine a third group of deformation areas corresponding to the corner area of the left eye from the plurality of deformation areas based on the key point information corresponding to the corner area of the left eye, and/or determine a fourth group of deformation areas corresponding to the corner area of the right eye from the plurality of deformation areas based on the key point information corresponding to the corner area of the right eye;
  • The deformation processing unit 42 is configured to stretch or compress the third group of deformation areas and/or the fourth group of deformation areas along a first specific direction, to adjust the position of the corner of the left eye area and/or the position of the corner of the right eye area.
  • the first target area is an eye area; the eye area includes a left eye area and/or a right eye area;
  • the first determining unit 41 is configured to determine a fifth group of deformed regions corresponding to the left-eye region from the plurality of deformed regions based on the key point information corresponding to the left-eye region, and/or, based on the key point information corresponding to the right-eye region , Determine the sixth group of deformed regions corresponding to the right eye region from the multiple deformed regions;
  • the deformation processing unit 42 is configured to perform deformation processing on the fifth group of deformed regions, so that the contour key point of the left eye area is rotated relative to the center key point of the left eye area, and the rotation angle satisfies the first set angle, and/ Or, performing a deformation process on the sixth group of deformed regions, so that the contour key point of the right eye area is rotated relative to the center key point of the right eye area, and the rotation angle meets the second set angle.
  • the first target area is the nose area
  • the first determining unit 41 is configured to determine a seventh group of deformed regions corresponding to the nose region from the plurality of deformed regions based on key point information corresponding to the nose region;
  • the deformation processing unit 42 is configured to stretch or compress the seventh group of deformed regions in a second specific direction to lengthen or shorten the nose region.
  • the first target area is the nose area
  • the first determining unit 41 is configured to determine an eighth group of deformation regions corresponding to the nose region from the plurality of deformation regions based on key point information corresponding to the nose region;
  • the deformation processing unit 42 is configured to compress or stretch the eighth group of deformation regions according to a third specific direction, so as to narrow or widen the nose wing region.
  • The first target area is the chin area or the philtrum area;
  • The first determining unit 41 is configured to determine a ninth group of deformation areas corresponding to the chin area or the philtrum area from the plurality of deformation areas based on the key point information corresponding to the chin area or the philtrum area;
  • The deformation processing unit 42 is configured to compress or stretch the ninth group of deformation areas along a fourth specific direction to shorten or lengthen the chin area or the philtrum area.
  • the first target area is the mouth area
  • the first determining unit 41 is configured to determine a tenth group of deformed regions corresponding to the mouth region from a plurality of deformed regions based on key point information corresponding to the mouth region;
  • The deformation processing unit 42 is configured to compress the tenth group of deformation areas in the direction from the edge of the mouth area toward its center, or to stretch the tenth group of deformation areas in the direction from the center of the mouth area toward its edge.
  • the first determining unit 41 is configured to determine an eleventh group of deformed regions corresponding to the facial region from the plurality of deformed regions based on key point information of the edge of the facial region;
  • The deformation processing unit 42 is configured to compress the eleventh group of deformation areas in the direction from the edge of the face area toward the midpoint of the face area, or to stretch the eleventh group of deformation areas in the direction from the midpoint of the face area toward the edge of the face area.
  • the first target area is the forehead area
  • the first determining unit 41 is configured to determine the twelfth group of deformed regions corresponding to the forehead region from the plurality of deformed regions based on the key point information corresponding to the forehead region;
  • The deformation processing unit 42 is configured to stretch or compress the twelfth group of deformation areas along the fifth specific direction to raise or lower the hairline of the face area;
  • The fifth specific direction is the direction in which a key point of the forehead area points toward the eyebrow center closest to that key point, or the direction in which the key point points away from that closest eyebrow center.
  • the first determining unit 41 is configured to determine at least three key points of the forehead area; determine the key point information of the forehead area based on the at least three key points and the first set of contour point information below the eyes in the face area.
  • the first key point of the at least three key points is located on the midline of the forehead area; the second key point and the third key point of the at least three key points are located on both sides of the midline.
  • The first determining unit 41 is configured to perform curve fitting based on the key points at both ends of the first group of contour point information below the eyes in the face area together with the above-mentioned at least three key points, to obtain curve-fitting key point information; and to interpolate the curve-fitting key point information based on a curve interpolation algorithm, to obtain the key point information of the forehead area.
  • the first determining unit 41 is configured to detect the face area by a face key point detection algorithm, and obtain key point information of the organs of the face area and key point information of the edges of the face area; based on the face The key point information of the edge of the area obtains the key point information of the outer edge.
  • the first determining unit 41 is configured to obtain a first group of contour point information below the eyes in the face area; determine the second group of contour point information corresponding to the forehead area, based on the first group The contour point information and the second set of contour point information determine the key point information of the edge of the face area.
  • The first determining unit 41 is configured to determine the relative positional relationship between the key points of the edge of the face area and the center point of the face area, the relative positional relationship including the distance between an edge key point and the center point of the face area and the direction of the edge key point relative to the center point; and, based on the relative positional relationship, to extend a first edge key point by a preset distance in the direction toward the outside of the face area, to obtain the outer edge key point corresponding to the first edge key point, where the first edge key point is any key point of the edge of the face area, and the preset distance is related to the distance between the first edge key point and the center point of the face area.
  • The above-mentioned device further includes a second determining unit 43, configured to determine the deflection parameter of the face area and, based on the deflection parameter, determine the deformation parameter and deformation direction corresponding to each deformation area in at least part of the deformation areas, so that each deformation area performs image deformation processing according to its corresponding deformation parameter and deformation direction.
  • The second determining unit 43 is configured to determine the left edge key point, the right edge key point, and the center key point of any area in the face area, where the area includes at least one of the following: the face area, the nose area, the mouth area; determine a first distance between the left edge key point and the center key point and a second distance between the right edge key point and the center key point; and determine the deflection parameter of the face area based on the first distance and the second distance.
  • The device further includes an image processing unit 44, configured to identify a second target area in the face area and perform feature processing on the second target area to generate a third image; the second target area includes at least one of the following: the eye-periphery area, the nasolabial fold area, the tooth area, the eye area, and the apple muscle area.
  • In practical applications, the first determining unit 41, the deformation processing unit 42, the second determining unit 43, and the image processing unit 44 in the device can all be implemented by a central processing unit (CPU), a digital signal processor (DSP), a microcontroller unit (MCU), or a field-programmable gate array (FPGA).
  • When the image processing apparatus provided in the above embodiment performs image processing, the division into the above program modules is used only as an example for illustration; in practical applications, the above processing can be allocated to different program modules as needed, that is, the internal structure of the device can be divided into different program modules to complete all or part of the processing described above.
  • the image processing device provided in the foregoing embodiment and the image processing method embodiment belong to the same concept, and the specific implementation process is detailed in the method embodiment, and will not be repeated here.
  • Fig. 10 is a schematic diagram of the hardware composition of the image processing device according to the embodiment of the application. As shown in Fig. 10, the image processing device includes a memory 52, a processor 51, a computer program stored on the memory 52 and executable on the processor 51, and a bus system 53. The various components in the image processing device are coupled together through the bus system 53; it can be understood that the bus system 53 is used to implement connection and communication between these components. In addition to a data bus, the bus system 53 also includes a power bus, a control bus, and a status signal bus; however, for clarity of description, the various buses are all marked as the bus system 53 in Fig. 10.
  • the memory 52 may be a volatile memory or a non-volatile memory, and may also include both volatile and non-volatile memory.
  • The non-volatile memory can be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a ferromagnetic random access memory (FRAM), a flash memory, a magnetic surface memory, an optical disc, or a compact disc read-only memory (CD-ROM); the magnetic surface memory can be a magnetic disk memory or a magnetic tape memory.
  • The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of illustrative rather than restrictive description, many forms of RAM are available, such as static random access memory (SRAM), synchronous static random access memory (SSRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDRSDRAM), enhanced synchronous dynamic random access memory (ESDRAM), SyncLink dynamic random access memory (SLDRAM), and direct Rambus random access memory (DRRAM).
  • the memory 52 described in the embodiment of the present application is intended to include, but is not limited to, these and any other suitable types of memory.
  • the method disclosed in the foregoing embodiment of the present application may be applied to the processor 51 or implemented by the processor 51.
  • the processor 51 may be an integrated circuit chip with signal processing capability. In the implementation process, the steps of the foregoing method may be completed by an integrated logic circuit of hardware in the processor 51 or instructions in the form of software.
  • the aforementioned processor 51 may be a general-purpose processor, a DSP, or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components, and the like.
  • the processor 51 may implement or execute various methods, steps, and logical block diagrams disclosed in the embodiments of the present application.
  • the general-purpose processor may be a microprocessor or any conventional processor.
  • the steps of the method disclosed in the embodiments of the present application can be directly embodied as being executed and completed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
  • the software module may be located in a storage medium, and the storage medium is located in the memory 52.
  • the processor 51 reads the information in the memory 52 and completes the steps of the foregoing method in combination with its hardware.
  • In an exemplary embodiment, the image processing device may be implemented by one or more application-specific integrated circuits (ASICs), DSPs, programmable logic devices (PLDs), complex programmable logic devices (CPLDs), FPGAs, general-purpose processors, controllers, MCUs, microprocessors, or other electronic components, to execute the foregoing method.
  • the embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the steps of the above method of the embodiment of the present application are implemented.
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of the above-mentioned units is only a logical function division.
  • The coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
  • the units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, that is, they may be located in one place or distributed on multiple network units; Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • the functional units in the embodiments of the present application can all be integrated into one processing unit, or each unit can be individually used as a unit, or two or more units can be integrated into one unit;
  • the unit can be implemented in the form of hardware, or in the form of hardware plus software functional units.
  • The foregoing program can be stored in a computer-readable storage medium; when the program is executed, the steps of the foregoing method embodiment are performed. The foregoing storage medium includes various media that can store program code, such as a removable storage device, a ROM, a RAM, a magnetic disk, or an optical disc.
  • Alternatively, when the above-mentioned integrated unit of this application is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
  • Based on this understanding, the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the methods of the various embodiments of this application.
  • the aforementioned storage media include: removable storage devices, ROM, RAM, magnetic disks, or optical disks and other media that can store program codes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An image processing method and device. The method includes: obtaining a first image, identifying a face area in the first image, and determining key point information related to the face area (101), the key point information including key point information of the face area and outer edge key point information, where the area corresponding to the outer edge key point information includes the face area and is larger than the face area; and determining multiple deformation areas based on the key point information, and performing image deformation processing on the face area based on at least some of the multiple deformation areas to generate a second image (102).

Description

Image processing method and device
Cross-reference to related applications
This application is filed on the basis of, and claims priority to, Chinese patent application No. 201910169503.4 filed on March 06, 2019, the entire contents of which are incorporated into this application by reference.
Technical field
This application relates to image processing technology, and in particular to an image processing method and device.
Background
With the continuous development of image processing technology, more and more image processing methods have emerged for processing images of human faces. If only the face area is compressed, holes will appear in the image; if the face area is stretched, pixel overlap will appear in the image.
Summary
The embodiments of this application provide an image processing method and device.
To this end, the technical solutions of the embodiments of this application are implemented as follows:
An embodiment of this application provides an image processing method, the method including: obtaining a first image, identifying a face area in the first image, and determining key point information related to the face area, the key point information including key point information of the face area and outer edge key point information, where the area corresponding to the outer edge key point information includes the face area and is larger than the face area; and determining multiple deformation areas based on the key point information, and performing image deformation processing on the face area based on at least some of the multiple deformation areas to generate a second image.
Brief description of the drawings
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of this application;
Fig. 2 is a schematic diagram of deformation areas in the image processing method according to an embodiment of this application;
Figs. 3a to 3c are schematic diagrams of face key points in the image processing method according to an embodiment of this application;
Fig. 4 is another schematic flowchart of the image processing method according to an embodiment of this application;
Fig. 5 is yet another schematic flowchart of the image processing method according to an embodiment of this application;
Fig. 6 is a schematic diagram of an application of the image processing according to an embodiment of this application;
Fig. 7 is a schematic diagram of a composition structure of the image processing device according to an embodiment of this application;
Fig. 8 is a schematic diagram of another composition structure of the image processing device according to an embodiment of this application;
Fig. 9 is a schematic diagram of yet another composition structure of the image processing device according to an embodiment of this application;
Fig. 10 is a schematic diagram of the hardware composition structure of the image processing device according to an embodiment of this application.
Detailed description
This application is described in further detail below with reference to the drawings and specific embodiments.
An embodiment of this application provides an image processing method. Fig. 1 is a schematic flowchart of the image processing method of the embodiment of this application; as shown in Fig. 1, the method includes:
Step 101: Obtain a first image, identify the face area in the first image, and determine key point information related to the face area; the key point information includes key point information of the face area and outer edge key point information, and the area corresponding to the outer edge key point information includes the face area and is larger than the face area;
Step 102: Determine multiple deformation areas based on the key point information, and perform image deformation processing on the face area based on at least some of the multiple deformation areas to generate a second image.
In this embodiment, the first image contains the face of a target object. The target object may be a real person in the image; in other implementations, it may also be a virtual character, such as a cartoon figure. It can be understood that the first image includes a human face; the embodiments of this application mainly perform image processing on the human face in the image, although they can, of course, also process the faces of other target objects. In practice, face recognition can be performed on the first image through a preset face recognition algorithm to identify the face area in the first image.
In this embodiment, the key point information related to the face area includes the position information of the key points; illustratively, the position information of a key point can be represented by its coordinate information. The key point information of the face area includes key point information of the organs of the face area and key point information of the edge of the face area; the edge of the face area corresponds to the contour of the face area, and the outer edge key point information is determined based on the key point information of the edge of the face area. The key point information of an organ includes the center key point information of the organ and/or the contour key point information of the organ.
It can be understood that the key points related to the face area include: the key points of the organs contained in the face area, the key points of the edge of the face area, and the outer edge key points.
In some optional embodiments of this application, for step 101, determining the key point information related to the face area includes: detecting the face area through a face key point detection algorithm to obtain the key point information of the organs of the face area and the key point information of the edge of the face area, and obtaining the outer edge key point information based on the key point information of the edge of the face area.
In some embodiments, obtaining the key point information of the edge of the face area includes: obtaining a first group of contour point information of the area below the eyes in the face area; and determining a second group of contour point information of the forehead area, and determining the key point information of the edge of the face area based on the first group of contour point information and the second group of contour point information.
Determining the second group of contour point information of the forehead area includes: determining at least three key points of the forehead area, and determining the key point information of the forehead area based on the at least three key points and the first group of contour point information. The first key point of the at least three key points is located on the midline of the forehead area, and the second key point and the third key point of the at least three key points are located on the two sides of the midline.
In some embodiments, determining the key point information corresponding to the forehead area based on the at least three key points and the first group of contour point information includes: performing curve fitting based on the key points at both ends of the first group of contour point information below the eyes in the face area together with the above at least three key points of the forehead area, to obtain curve-fitting key point information; and interpolating the curve-fitting key point information based on a curve interpolation algorithm, to obtain the key point information corresponding to the forehead area.
Fig. 2 is a schematic diagram of the deformation areas in the image processing method of the embodiment of this application; Figs. 3a to 3c are schematic diagrams of the face key points in the image processing method of the embodiment of this application. With reference to Fig. 2 and Figs. 3a to 3c: in the first aspect, the key points of the organs contained in the face area are specifically the key points of at least one of the following organs of the face area: the eyebrows, the eyes, the nose, and the mouth. In some implementations, the key point information of an organ may include the center key point information of the organ and/or the contour key point information of the organ. Taking the eyes as an example, the key point information of an eye may include the center key point information of the eye and the contour key point information of the eye; taking the eyebrows as an example, the key point information of an eyebrow may include the contour key point information of the eyebrow. In this embodiment, the key point information of each organ of the face area is first obtained through a face key point detection algorithm.
In the second aspect, the first group of contour point information below the eyes in the face area is obtained through the face key point detection algorithm; the first group of contour points is, for example, key point 0 to key point 32 in Fig. 3a, or the first group of contour key points represented by the solid dots "·" in Fig. 3b. In some embodiments, a smaller number M1 of contour points of the area below the eyes (for example, 5 contour points) can be obtained through the face key point detection algorithm; M2 further contour points are then obtained from the M1 contour points by curve interpolation, and the M1 contour points together with the M2 contour points serve as the first group of contour point information.
Any face recognition algorithm can be used as the face key point detection algorithm.
In the third aspect, the key point information of the forehead area is obtained. As an example, at least three key points in the forehead area of the face area can be determined based on preset parameters. Taking three key points as an example, key point 1 corresponds to a key point located on the midline of the forehead area, denoted as the first key point, while key point 2 and key point 3 are located on the two sides of key point 1. Curve fitting is performed based on key point 4 and key point 5 located at the two ends of the first group of contour point information (for example, key point 0 and key point 32 in Fig. 3a), together with key point 1, key point 2, and key point 3, to obtain curve-fitting key point information; the curve-fitting key point information is then interpolated based on a curve interpolation algorithm to obtain the second group of contour point information matching the forehead area.
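As a rough sketch of this step, the following Python code fits a curve through the two end contour points and the three coarse forehead key points and then densifies it by sampling; the quadratic least-squares model and the sample count are illustrative assumptions, since no particular curve model is fixed here.

```python
import numpy as np

def fit_forehead_contour(end_points, forehead_points, num_samples=20):
    """Fit a curve through the end contour points (e.g., key points 0 and 32)
    and the coarse forehead key points, then densify it by interpolation."""
    pts = np.array(list(end_points) + list(forehead_points), dtype=float)
    order = np.argsort(pts[:, 0])
    x, y = pts[order, 0], pts[order, 1]
    coeffs = np.polyfit(x, y, deg=2)                 # curve fitting
    xs = np.linspace(x.min(), x.max(), num_samples)  # interpolation positions
    ys = np.polyval(coeffs, xs)
    return np.stack([xs, ys], axis=1)

contour = fit_forehead_contour(end_points=[(100, 220), (200, 220)],
                               forehead_points=[(150, 140), (125, 150), (175, 150)])
```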
At this point, the first group of contour point information and the second group of contour point information are combined into the key point information of the edge of the face area. As shown in Fig. 2, the key points corresponding to the edge key point information lie at all positions along the edge of the face area, that is, they cover the entire edge of the face area.
In some embodiments, obtaining the outer edge key point information based on the key point information of the edge of the face area includes: determining the relative positional relationship between the key points of the edge of the face area and the center point of the face area, the relative positional relationship including the distance between an edge key point and the center point of the face area and the direction of the edge key point relative to the center point; and, based on the relative positional relationship, extending a first edge key point by a preset distance in the direction toward the outside of the face area, to obtain the outer edge key point corresponding to the first edge key point. The first edge key point is any key point of the edge of the face area, and the preset distance is related to the distance between the first edge key point and the center point of the face area: the larger that distance, the larger the extended preset distance; conversely, the smaller that distance, the smaller the extended preset distance. Of course, in other implementations, other key points can be selected instead of the center point of the face area, for example the key point corresponding to the nose tip, which is not limited in this embodiment.
As shown in Fig. 3c, the key points related to the face area obtained in this embodiment include, in addition to the key points located in the face area, the outer edge key points; the outer edge key points are located outside the face area, and it can be understood that the area corresponding to the outer edge key points contains the face area and is larger than the face area. In some embodiments, the number of outer edge key points can be the same as the number of edge key points of the face area, i.e., the outer edge key point information can be determined based on the edge key point information. In other embodiments, the number of outer edge key points can differ from the number of edge key points, for example it can be larger. In practice, after outer edge key points matching the number of edge key points are obtained in the above way, for example N1 outer edge key points, N2 further outer edge key points can be obtained from the N1 outer edge key points by curve interpolation, and the information of the N1 and N2 outer edge key points together serves as the outer edge key point information of this embodiment.
The purpose of determining the outer edge key point information is that, during the deformation of the image, especially when the image is deformed using the triangular deformation areas shown in Fig. 2, the triangular deformation areas formed by the outer edge key points and the edge key points of the face area can be deformed adaptively; that is, the transition area associated with the face area (the area between the outer edge key points and the edge key points of the face area) is deformed adaptively, so that a better deformation effect can be obtained and the face blends more naturally. Making the number of outer edge key points larger than the number of edge key points reduces the area of each triangular deformation area in the transition area, which improves the deformation precision and yields a better deformation effect.
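A minimal sketch of the outward extension, assuming the preset distance is simply proportional to the key point's distance from the face center (the proportionality factor is an illustrative assumption):

```python
import numpy as np

def outer_edge_keypoints(edge_points, center_point, extend_factor=0.2):
    """Extend each face-edge key point outward, away from the face center,
    to obtain its outer edge key point; farther points are extended farther."""
    center = np.asarray(center_point, float)
    pts = np.asarray(edge_points, float)
    offsets = pts - center               # encodes both direction and distance from the center
    return pts + offsets * extend_factor

outer = outer_edge_keypoints([[100.0, 220.0], [150.0, 260.0]], center_point=(150, 230))
```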
In the related art, on the one hand, face key point recognition can only identify relatively sparse key points of the organs of the face; on this basis, the embodiments of this application add key points by interpolation, for example adding several key points in the eyebrow-center area. On the other hand, existing face key point recognition can only identify some key points below the eyes, as shown in Fig. 3a; the face key point recognition of this embodiment therefore adds multiple key points in the forehead area, corresponding to the positions of the forehead or the hairline, so that the forehead area or the hairline can be adjusted based on the forehead key points.
As an example, as shown in Fig. 2, the number of key points corresponding to the obtained key point information can be 106.
In some optional embodiments of this application, for step 102, determining multiple deformation areas based on the key point information includes: determining the multiple deformation areas based on any three adjacent key points in the key point information, as shown in Fig. 2. In this embodiment, image deformation processing is performed on the target area based on the determined triangular deformation areas.
Since the key point information related to the face area in this embodiment includes the outer edge key point information, the triangular deformation areas corresponding to the outer edge area can be determined based on the outer edge key points and the contour key points of the face area; that is, the deformation areas of this embodiment include the deformation areas corresponding to the transition area outside the face area shown in Fig. 2. Therefore, when deformation is performed based on the deformation areas inside the face area, the deformation areas outside the face area are deformed adaptively, avoiding holes in the image caused by compression of the face area and pixel overlap caused by stretching of the face area.
With the technical solution of the embodiments of this application, by determining the key points of the outer edge of the face area, the deformation areas of the outer edge of the face area are determined, so that during the deformation of the face area the outer edge is deformed adaptively; this avoids holes or pixel overlap in the image caused by the deformation of the face area and improves the image processing effect.
In some optional embodiments of this application, performing image deformation processing on the face area based on at least some of the multiple deformation areas includes: determining a first target area to be processed in the face area; determining, based on the key point information corresponding to the first target area, the deformation areas corresponding to the first target area from the multiple deformation areas; and performing image deformation processing on the deformation areas corresponding to the first target area.
In this embodiment, the target area of the face area to be deformed is determined; the target area includes at least one of the following: the eye area, the nose area, the mouth area, the chin area, the philtrum area, the forehead area, the face area, and so on. For each target area, the corresponding deformation areas are determined, the deformation of the target area is realized through the deformation of those deformation areas, and the second image is generated. Determining the deformation areas corresponding to a target area includes: determining the key point information corresponding to the target area, and determining, from the multiple deformation areas, all the deformation areas containing that key point information. For example, if the target area is the eyebrow area, all the key points corresponding to the eyebrow area are determined, and the deformation areas containing those key points serve as the deformation areas to be deformed.
As a first implementation, the first target area is the eye area; the eye area includes the left eye area and/or the right eye area. Determining the deformation areas corresponding to the first target area includes: determining a first group of deformation areas corresponding to the left eye area from the multiple deformation areas based on the key point information corresponding to the left eye area, and/or determining a second group of deformation areas corresponding to the right eye area based on the key point information corresponding to the right eye area. Performing image deformation processing on the deformation areas corresponding to the first target area includes: performing image deformation processing on the first group of deformation areas and/or the second group of deformation areas, where the image deformation direction of the first group is opposite to that of the second group, so that the distance between the left eye area and the right eye area is increased or reduced.
In this embodiment, the first group and the second group of deformation areas consist of all deformation areas containing key points of the eye areas. This embodiment adjusts the position of the eye areas in the face area: if the face area includes two eye areas, i.e., the left eye area and the right eye area, this can be understood as adjusting the distance between the left and right eyes; if the face area includes only one eye area, for example in a side-face scene, this can be understood as adjusting the position of that eye area within the face area. In practice, the first group and the second group of deformation areas can be deformed in opposite image deformation directions: for example, the line connecting the center point of the left eye and the center point of the right eye is determined, and its midpoint is determined; moving the first and second groups of deformation areas toward the midpoint of that line reduces the distance between the left and right eye areas, while moving them away from the midpoint increases it.
In a second implementation, the first target area is the eye-corner area; the eye-corner area includes the corner area of the left eye and/or the corner area of the right eye. Determining the deformation region corresponding to the first target area includes: determining, based on the key point information corresponding to the corner area of the left eye, a third group of deformation regions corresponding to the corner area of the left eye from the multiple deformation regions, and/or determining, based on the key point information corresponding to the corner area of the right eye, a fourth group of deformation regions corresponding to the corner area of the right eye. Performing image deformation on the deformation region corresponding to the first target area includes: stretching or compressing the third group of deformation regions and/or the fourth group of deformation regions in a first specific direction, to adjust the position of the corner of the left-eye area and/or the position of the corner of the right-eye area.
In this embodiment, the third group of deformation regions are all deformation regions containing the key points of the corner area of the left eye, and the fourth group are those containing the key points of the corner area of the right eye. The eye corner may be the inner and/or outer corner of the eye area; inner and outer are relative concepts: taking the midpoint of the line connecting the centers of the left and right eyes as reference, the inner corner is the corner near that midpoint, and the outer corner is the corner far from it. This embodiment adjusts the position of the eye corners within the face region, which can also be understood as adjusting the size of the eye-corner area. In practice, the key points of the inner or outer corner to be adjusted are determined, the deformation regions containing those key points are determined, and those regions are moved toward or away from the midpoint of the above connecting line. Illustratively, the first specific direction is the direction toward, or away from, the midpoint of that connecting line.
In a third implementation, the first target area is the eye area; the eye area includes a left-eye area and/or a right-eye area. Determining the deformation region corresponding to the first target area includes: determining, based on the key point information corresponding to the left-eye area, a fifth group of deformation regions corresponding to the left-eye area from the multiple deformation regions, and/or determining, based on the key point information corresponding to the right-eye area, a sixth group of deformation regions corresponding to the right-eye area. Performing image deformation includes: deforming the fifth group of deformation regions so that the contour key points of the left-eye area rotate relative to the center key point of the left-eye area by an angle satisfying a first set angle, and/or deforming the sixth group of deformation regions so that the contour key points of the right-eye area rotate relative to the center key point of the right-eye area by an angle satisfying a second set angle.
In this embodiment, the fifth group of deformation regions are all deformation regions containing key points of the left-eye area, and the sixth group are those containing key points of the right-eye area. This embodiment adjusts the angle of the eye areas, which can be understood as adjusting the relative angle between the eyes and the other facial organs, for example between the eyes and the nose. In practice this is achieved by rotating clockwise or counter-clockwise by a specific angle around the center point of the eye. As an example, the deformation regions corresponding to the eye area may be deformed via a preset rotation matrix so that the contour key points of the eye area rotate relative to the center key point of the eye area. Here the rotation angle of the left-eye contour key points satisfies the first set angle and that of the right-eye contour key points satisfies the second set angle; the rotation directions of the two eye areas may be opposite, and the values of the first and second set angles may be the same or different.
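A sketch of the rotation step, assuming the preset rotation matrix is the standard 2-D rotation matrix applied about the eye's center key point:

```python
import numpy as np

def rotate_eye_contour(contour_pts, center_pt, angle_deg):
    """Rotate the eye contour key points about the eye center key point by
    a set angle (positive = counter-clockwise in standard x/y coordinates;
    the sense is flipped in image coordinates where y grows downward)."""
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    center = np.asarray(center_pt, float)
    return (np.asarray(contour_pts, float) - center) @ rot.T + center
```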
In a fourth implementation, the first target area is the nose area. Determining the deformation region corresponding to the first target area includes: determining, based on the key point information corresponding to the nose area, a seventh group of deformation regions corresponding to the nose area from the multiple deformation regions. Performing image deformation includes: stretching or compressing the seventh group of deformation regions in a second specific direction, to lengthen or shorten the nose area.
In this embodiment, the seventh group of deformation regions are all deformation regions containing nose key points. This embodiment adjusts the length or the height of the nose area. In practice, the seventh group of deformation regions may be stretched or compressed along the second specific direction. In some implementations, the second specific direction is the length direction of the face region; for example, the straight line formed by the midpoint of the line connecting the two brow centers, the nose center point, and the lip center point may serve as the length direction of the face region. Stretching the seventh group along this direction outward from the center of the nose area lengthens the nose area; compressing it along this direction from the outside of the nose area toward its center shortens the nose area.
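The lengthening or shortening along the face-length direction can be sketched as scaling each key point's component along that axis while keeping the anchor (e.g., the nose center) fixed; the helper below is illustrative, not the patent's algorithm.

```python
import numpy as np

def scale_along_direction(pts, anchor, direction, factor):
    """Stretch (factor > 1) or compress (factor < 1) key points along a
    given axis, leaving the component perpendicular to the axis and the
    anchor point itself unchanged."""
    pts = np.asarray(pts, float)
    d = np.asarray(direction, float)
    d /= np.linalg.norm(d)                 # unit axis, e.g. face length
    rel = pts - np.asarray(anchor, float)
    along = rel @ d                        # signed distance along the axis
    return pts + np.outer(along * (factor - 1.0), d)

# Lengthening the nose: scale its key points along the face-length axis
# (the line through the brow midpoint, nose center and lip center).
```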
In another implementation, the second specific direction may also be the direction perpendicular to the face region and pointing away from it, in which case the height of the nose area is adjusted along the second specific direction. In practice, this implementation applies to scenes where the face in the image is a profile face: the deflection parameter of the face region is determined, the second specific direction is determined based on that deflection parameter (i.e., the direction corresponding to the nose height is determined from the deflection of the face), and the seventh group of deformation regions corresponding to the nose area is then deformed along the second specific direction to increase or decrease the nose height.
In a fifth implementation, the first target area is the nose-wing area. Determining the deformation region corresponding to the first target area includes: determining, based on the key point information corresponding to the nose-wing area, an eighth group of deformation regions corresponding to the nose-wing area from the multiple deformation regions. Performing image deformation includes: compressing or stretching the eighth group of deformation regions in a third specific direction, to narrow or widen the nose-wing area.
In this embodiment, the eighth group of deformation regions are all deformation regions containing key points of the nose-wing area, the nose-wing area being the areas on both sides of the nose tip. This embodiment adjusts the width of the nose wings. In practice, the key points corresponding to the nose-wing area are determined, the deformation regions containing those key points are determined, and those regions are compressed or stretched in the third specific direction, so that the nose-wing area becomes narrower or wider. Here the third specific direction is the width direction of the face region, which is perpendicular to the aforementioned length direction of the face region.
In a sixth implementation, the first target area is the chin area or the philtrum area. Determining the deformation region corresponding to the first target area includes: determining, based on the key point information corresponding to the chin area or the philtrum area, a ninth group of deformation regions corresponding to the chin or philtrum area from the multiple deformation regions. Performing image deformation includes: compressing or stretching the ninth group of deformation regions in a fourth specific direction, to shorten or lengthen the chin or philtrum area.
In this embodiment, the ninth group of deformation regions are all deformation regions containing chin key points or philtrum key points. This embodiment adjusts the length of the chin area or the philtrum area. Here the chin area refers to the lower-jaw area, and the philtrum area refers to the area between the nose and the mouth. In practice, the ninth group of deformation regions may be compressed or stretched in the fourth specific direction, the fourth specific direction being the length direction of the face region.
In a seventh implementation, the first target area is the mouth area. Determining the deformation region corresponding to the first target area includes: determining, based on the key point information corresponding to the mouth area, a tenth group of deformation regions corresponding to the mouth area from the multiple deformation regions. Performing image deformation includes: compressing the tenth group of deformation regions in the direction from the edge of the mouth area toward its center, or stretching the tenth group in the direction from the center of the mouth area toward its edge.
In this embodiment, the tenth group of deformation regions are all deformation regions containing mouth key points. This embodiment adjusts the size of the mouth area, i.e., enlarges or reduces the mouth area. In practice, the key points corresponding to the mouth area are determined, all deformation regions containing those key points are taken as the tenth group, and the tenth group is compressed from the edge of the mouth area toward its center or stretched from its center toward its edge.
In an eighth implementation, determining the deformation region corresponding to the first target area includes: determining, based on the key point information of the edge of the face region, an eleventh group of deformation regions corresponding to the face region from the multiple deformation regions. Performing image deformation includes: compressing the eleventh group of deformation regions in the direction from the edge of the face region toward its midline, or stretching the eleventh group in the direction from the midline of the face region toward its edge.
In this embodiment, the eleventh group of deformation regions are all deformation regions containing edge key points of the face region; the edge key points may be at least some of the first group and/or second group of contour key points shown in Fig. 3b. This embodiment adjusts the width of the face region, which can be understood as "face slimming" or "face fattening" processing. In practice, the eleventh group of deformation regions is compressed from the edge of the face region toward its midline or stretched from the midline toward the edge. Illustratively, the midline of the face region includes the center point of the face region (the key point corresponding to the nose tip), in which case the eleventh group may be compressed toward, or stretched away from, that center point.
In some embodiments, the deformation ratio differs for deformation regions corresponding to key points at different positions. Illustratively, the deformation regions corresponding to key points in the cheek area have the largest deformation ratio, and the ratio decreases gradually for the other areas. For example, as shown in Fig. 3a, the deformation regions corresponding to key points near key points 0, 16 and 32 have the smallest deformation ratio, while those near key points 8 and 24 have the largest, making the deformation effect (e.g., the face-slimming or face-fattening effect) more natural.
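A sketch of such a position-dependent deformation ratio for the 33 lower-contour key points (0 to 32): the sinusoidal falloff below, largest at key points 8 and 24 and vanishing at 0, 16 and 32, is an illustrative choice consistent with the description, not a formula from the patent.

```python
import numpy as np

def contour_deform_ratio(index, max_ratio=1.0):
    """Deformation ratio for lower-contour key points 0..32: peaks near the
    cheeks (key points 8 and 24), goes to zero near key points 0, 16, 32."""
    return max_ratio * abs(np.sin(np.pi * index / 16.0))

ratios = [contour_deform_ratio(i) for i in range(33)]
```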
In a ninth implementation, the first target area is the forehead area. Determining the deformation region corresponding to the first target area includes: determining, based on the key point information of the forehead area, a twelfth group of deformation regions corresponding to the forehead area from the multiple deformation regions. Performing image deformation includes: stretching or compressing the twelfth group of deformation regions in a fifth specific direction, to raise or lower the hairline of the face region; the fifth specific direction is the direction from a forehead key point toward the brow center nearest to that key point, or the direction from a forehead key point away from the brow center nearest to that key point.
In this embodiment, the twelfth group of deformation regions are all deformation regions containing forehead key points; the forehead key points are determined as described above and are not repeated here. This embodiment adjusts the extent of the forehead area, which can be understood as adjusting the relative height of the hairline within the face region. In practice, the forehead key points are determined, and all deformation regions containing them are taken as the twelfth group, for example the triangular deformation regions corresponding to the forehead area in Fig. 2 together with the triangular deformation regions of the outer-edge area beyond the forehead; the twelfth group is then stretched or compressed in the fifth specific direction to raise or lower the hairline of the face region. If the face in the image includes two eyebrows, then for a given forehead key point the brow center nearest to it is first determined, and the direction from that key point to that brow center is taken as its fifth specific direction. For the three key points contained in a deformation region, the fifth specific direction of each key point is determined separately, and the deformation region is deformed accordingly, i.e., each of its three key points is moved along its own fifth specific direction.
It can thus be seen that the image processing method of this embodiment can achieve: (1) hairline adjustment, i.e., raising or lowering the position of the hairline; (2) nose-length adjustment, i.e., lengthening or shortening the nose; (3) nose-wing adjustment, i.e., narrowing or widening the nose wings; (4) philtrum adjustment, i.e., lengthening or shortening the philtrum area; (5) mouth-shape adjustment, i.e., adjusting the size of the mouth; (6) chin adjustment, i.e., lengthening or shortening the chin area; (7) face-shape adjustment, i.e., narrowing or widening the facial contour, e.g., "face slimming"; (8) eye-distance adjustment, i.e., adjusting the distance between the left eye and the right eye; (9) eye-angle adjustment, i.e., adjusting the relative angle of the eyes; (10) eye-corner adjustment, i.e., adjusting the position of the eye corners to "open" the corners and enlarge the eyes; and (11) nose-height adjustment in profile-face scenes, i.e., "nose augmentation" for a profile face.
An embodiment of the present application further provides an image processing method. Fig. 4 is another schematic flowchart of the image processing method of an embodiment of the present application; as shown in Fig. 4, the method includes:
Step 201: obtaining a first image, identifying a face region in the first image, and determining key point information related to the face region, the key point information including key point information of the face region and outer-edge key point information, where the area corresponding to the outer-edge key point information contains and is larger than the face region;
Step 202: determining multiple deformation regions based on the key point information;
Step 203: determining a deflection parameter of the face region, and determining, based on the deflection parameter, a deformation parameter and a deformation direction for each of at least some of the deformation regions;
Step 204: performing image deformation on the face region based on the at least some deformation regions and the deformation parameter and deformation direction of each of them, generating a second image.
For steps 201 and 202 of this embodiment, reference may be made to the description of steps 101 and 102 in the foregoing embodiment, which is not repeated here.
It can be understood that the foregoing embodiment mainly addresses the case where the face region is not deflected. For a deflected face region, i.e., a profile-face scene, the deflection parameter of the face region is determined first; the deformation parameter and deformation direction of each deformation region to be deformed are then determined from the deflection parameter, and the deformation regions are deformed according to the determined parameters and directions.
In some optional embodiments of the present application, determining the deflection parameter of the face region includes: determining a left edge key point, a right edge key point and a center key point of any area of the face region, the area including at least one of the face area, the nose area and the mouth area; determining a first distance between the left edge key point and the center key point, and a second distance between the right edge key point and the center key point; and determining the deflection parameter of the face region based on the first distance and the second distance.
As an example, taking the nose area: the center point of the nose (e.g., the nose tip), the leftmost nose-wing key point and the rightmost nose-wing key point are determined; the first distance between the leftmost nose-wing key point and the nose center point and the second distance between the rightmost nose-wing key point and the nose center point are calculated; the deflection parameter of the face region is then determined from the first and second distances. The deformation direction of the first target area in the foregoing embodiments is further adjusted based on this deflection parameter.
Taking the deformation of the nose-wing area as the first target area as an example: the deflection of the face region makes the deformation parameters of the left and right nose-wing areas differ; if the first distance is greater than the second distance, the deformation parameter of the left nose-wing area is greater than that of the right nose-wing area. As an example, the movement ratio of the leftmost nose-wing key point may be the first distance divided by the distance between the leftmost nose-wing key point and the nose-wing center point, limited to between 0 and 1; similarly, the movement ratio of the rightmost nose-wing key point may be the second distance divided by the distance between the rightmost nose-wing key point and the nose-wing center point, limited to between 0 and 1. In this way, the movement distances of the key points on the two sides of the nose wings vary with the deflection of the face region.
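A sketch of the deflection computation, assuming Euclidean distances between 2-D key points. The signed yaw formula and the normalization of the movement ratios by the left-right span are illustrative choices (the patent divides each side's distance by that side key point's distance to the region center):

```python
import numpy as np

def deflection_and_ratios(left_pt, center_pt, right_pt):
    """Deflection indicator of the face from one region's left/right edge
    key points and center key point, plus per-side movement ratios clipped
    to [0, 1]. yaw is 0 for a frontal face and approaches +/-1 as the face
    turns and one side's distance dominates."""
    left, center, right = (np.asarray(p, float)
                           for p in (left_pt, center_pt, right_pt))
    d1 = np.linalg.norm(left - center)     # first distance
    d2 = np.linalg.norm(right - center)    # second distance
    span = max(np.linalg.norm(left - right), 1e-6)
    yaw = (d1 - d2) / max(d1 + d2, 1e-6)
    return yaw, np.clip(d1 / span, 0.0, 1.0), np.clip(d2 / span, 0.0, 1.0)

# Unequal ratios make the two sides of the nose wing move by different
# amounts during deformation, tracking the deflection of the face.
```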
With the technical solution of the embodiments of the present application, on the one hand, by determining the key points on the outer edge of the face region, the deformation regions of the outer edge are determined, so that during deformation of the face region the outer edge is deformed adaptively, avoiding holes or overlapping pixels in the image caused by the deformation and improving the image processing effect. On the other hand, by forming closed key point information of the contour of the face region, deformation of the forehead area of the face region is achieved. In yet another aspect, by detecting the deflection of the face region, adjustment of the nose height in profile-face scenes is achieved.
An embodiment of the present application further provides an image processing method. Fig. 5 is yet another schematic flowchart of the image processing method of an embodiment of the present application; as shown in Fig. 5, the method includes:
Step 301: obtaining a first image, identifying a face region in the first image, and determining key point information related to the face region, the key point information including key point information of the face region and outer-edge key point information, where the area corresponding to the outer-edge key point information contains and is larger than the face region;
Step 302: determining multiple deformation regions based on the key point information, and performing image deformation on the face region based on at least some of the deformation regions, generating a second image;
Step 303: identifying a second target area in the face region and performing feature processing on the second target area, generating a third image; the second target area includes at least one of: the eye-periphery area, the nasolabial-fold area, the tooth area, the eye area, and the apple-cheek area.
For steps 301 and 302 of this embodiment, reference may be made to the description of steps 101 and 102 in the foregoing embodiment, which is not repeated here for brevity.
In this embodiment, in addition to the image deformation of the face region based on the deformation regions, feature processing may also be performed on the image. In some implementations, feature processing operates on the pixels of the image, and may include at least one of: noise reduction, Gaussian blurring, high/low-frequency processing, mask processing, and so on. When the second target area is the eye-periphery area, the processing may specifically be removal of dark circles; when it is the nasolabial-fold area, removal of nasolabial folds; when it is the tooth area, tooth whitening; when it is the eye area, brightness enhancement of the eye area; when it is the apple-cheek area, enlargement or reduction of the apple-cheek area and/or brightness processing of it, and so on.
For the Gaussian processing mode, Gaussian blurring may be applied to the second target area, which amounts to skin smoothing of that area.
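A minimal sketch of region-restricted Gaussian blurring with OpenCV, assuming an 8-bit BGR image and a binary mask of the second target area; the helper name is hypothetical.

```python
import cv2
import numpy as np

def smooth_region(image, mask, ksize=9):
    """Apply Gaussian blur ('skin smoothing') only inside the second target
    region. `mask` is a uint8 array, nonzero inside the region; `ksize`
    must be odd."""
    blurred = cv2.GaussianBlur(image, (ksize, ksize), 0)
    inside = (mask > 0)[..., None]         # broadcast over color channels
    return np.where(inside, blurred, image)
```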
For the mask processing mode, a mask matching the second target area is overlaid on the second target area, as shown in Fig. 6, which gives an example of processing a second target area. Illustratively, taking the eye-periphery area as the second target area: the eye area is first determined, and the eye-periphery area is determined from it; since dark circles generally lie below the eyes, the area below the eye area may specifically be determined as the second target area (the eye-periphery area). In practice, a mask corresponding to the eye-periphery area may be preset and overlaid on the eye-periphery area to generate the third image. The processing of the nasolabial-fold area is similar: the nasolabial-fold area is determined first, and the preset mask corresponding to it is overlaid on that area to generate the third image.
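A sketch of the mask-overlay step, assuming the preset mask is stored as a BGRA patch whose alpha channel controls the blending, and that the patch placement is derived from the eye key points; bounds checking is omitted for brevity.

```python
import numpy as np

def apply_region_mask(image, overlay_bgra, top_left):
    """Alpha-blend a pre-made BGRA patch (e.g. an under-eye mask) onto the
    image at the given (row, col) position. The patch must fit inside the
    image; both arrays are uint8."""
    h, w = overlay_bgra.shape[:2]
    y, x = top_left
    roi = image[y:y + h, x:x + w].astype(float)
    alpha = overlay_bgra[..., 3:4].astype(float) / 255.0
    blended = alpha * overlay_bgra[..., :3] + (1.0 - alpha) * roi
    image[y:y + h, x:x + w] = blended.astype(np.uint8)
    return image
```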
For the processing of the tooth area, a target parameter representing the color to be substituted is determined from a preset color lookup table; the tooth area is determined, and the parameters corresponding to the tooth area are adjusted to the target parameter, thereby adjusting the tooth color.
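A sketch of the tooth-color substitution, in which a simple brightening lookup table stands in for the patent's preset color lookup table; the gain value and helper name are illustrative.

```python
import cv2
import numpy as np

def whiten_teeth(image, teeth_mask, gain=1.25):
    """Re-map pixel values in the tooth region through a lookup table.
    `teeth_mask` is a uint8 array, nonzero over the teeth."""
    lut = np.clip(np.arange(256) * gain, 0, 255).astype(np.uint8)
    whitened = cv2.LUT(image, lut)         # same LUT applied to all channels
    inside = (teeth_mask > 0)[..., None]
    return np.where(inside, whitened, image)
```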
The processing of the eye area may specifically be increasing the brightness of the eye area.
An embodiment of the present application further provides an image processing apparatus. Fig. 7 is a schematic diagram of a composition structure of the image processing apparatus of an embodiment of the present application; as shown in Fig. 7, the apparatus includes a first determining unit 41 and a deformation processing unit 42, wherein:
the first determining unit 41 is configured to obtain a first image, identify a face region in the first image, and determine key point information related to the face region, the key point information including key point information of the face region and outer-edge key point information, where the area corresponding to the outer-edge key point information contains and is larger than the face region; and is further configured to determine multiple deformation regions based on the key point information;
the deformation processing unit 42 is configured to perform image deformation on the face region based on at least some of the multiple deformation regions, generating a second image.
In some optional embodiments of the present application, the key point information of the face region includes key point information of the organs of the face region and key point information of the edge of the face region; the edge of the face region corresponds to the contour of the face region; the organ key point information includes organ center key point information and/or organ contour key point information.
In some optional embodiments of the present application, the first determining unit 41 is configured to determine the multiple deformation regions based on any three adjacent key points in the key point information.
In some optional embodiments of the present application, the first determining unit 41 is configured to determine a first target area to be processed in the face region, and to determine, based on the key point information corresponding to the first target area, the deformation region corresponding to the first target area from the multiple deformation regions;
the deformation processing unit 42 is configured to perform image deformation on the deformation region corresponding to the first target area.
In a first implementation, the first target area is the eye area; the eye area includes a left-eye area and/or a right-eye area;
the first determining unit 41 is configured to determine, based on the key point information corresponding to the left-eye area, a first group of deformation regions corresponding to the left-eye area from the multiple deformation regions, and/or to determine, based on the key point information corresponding to the right-eye area, a second group of deformation regions corresponding to the right-eye area;
the deformation processing unit 42 is configured to perform image deformation on the first group of deformation regions and/or the second group of deformation regions, wherein the image deformation direction of the first group is opposite to that of the second group, so that the distance between the left-eye area and the right-eye area is increased or decreased.
In a second implementation, the first target area is the eye-corner area; the eye-corner area includes the corner area of the left eye and/or the corner area of the right eye;
the first determining unit 41 is configured to determine, based on the key point information corresponding to the corner area of the left eye, a third group of deformation regions corresponding to the corner area of the left eye from the multiple deformation regions, and/or to determine, based on the key point information corresponding to the corner area of the right eye, a fourth group of deformation regions corresponding to the corner area of the right eye;
the deformation processing unit 42 is configured to stretch or compress the third group of deformation regions and/or the fourth group of deformation regions in a first specific direction, to adjust the position of the corner of the left-eye area and/or the position of the corner of the right-eye area.
In a third implementation, the first target area is the eye area; the eye area includes a left-eye area and/or a right-eye area;
the first determining unit 41 is configured to determine, based on the key point information corresponding to the left-eye area, a fifth group of deformation regions corresponding to the left-eye area from the multiple deformation regions, and/or to determine, based on the key point information corresponding to the right-eye area, a sixth group of deformation regions corresponding to the right-eye area;
the deformation processing unit 42 is configured to deform the fifth group of deformation regions so that the contour key points of the left-eye area rotate relative to the center key point of the left-eye area by an angle satisfying a first set angle, and/or to deform the sixth group of deformation regions so that the contour key points of the right-eye area rotate relative to the center key point of the right-eye area by an angle satisfying a second set angle.
In a fourth implementation, the first target area is the nose area;
the first determining unit 41 is configured to determine, based on the key point information corresponding to the nose area, a seventh group of deformation regions corresponding to the nose area from the multiple deformation regions;
the deformation processing unit 42 is configured to stretch or compress the seventh group of deformation regions in a second specific direction, to lengthen or shorten the nose area.
In a fifth implementation, the first target area is the nose-wing area;
the first determining unit 41 is configured to determine, based on the key point information corresponding to the nose-wing area, an eighth group of deformation regions corresponding to the nose-wing area from the multiple deformation regions;
the deformation processing unit 42 is configured to compress or stretch the eighth group of deformation regions in a third specific direction, to narrow or widen the nose-wing area.
In a sixth implementation, the first target area is the chin area or the philtrum area;
the first determining unit 41 is configured to determine, based on the key point information corresponding to the chin area or the philtrum area, a ninth group of deformation regions corresponding to the chin or philtrum area from the multiple deformation regions;
the deformation processing unit 42 is configured to compress or stretch the ninth group of deformation regions in a fourth specific direction, to shorten or lengthen the chin or philtrum area.
In a seventh implementation, the first target area is the mouth area;
the first determining unit 41 is configured to determine, based on the key point information corresponding to the mouth area, a tenth group of deformation regions corresponding to the mouth area from the multiple deformation regions;
the deformation processing unit 42 is configured to compress the tenth group of deformation regions in the direction from the edge of the mouth area toward its center, or to stretch the tenth group in the direction from the center of the mouth area toward its edge.
In an eighth implementation, the first determining unit 41 is configured to determine, based on the key point information of the edge of the face region, an eleventh group of deformation regions corresponding to the face region from the multiple deformation regions;
the deformation processing unit 42 is configured to compress the eleventh group of deformation regions in the direction from the edge of the face region toward the center point of the face region, or to stretch the eleventh group in the direction from the center point of the face region toward its edge.
In a ninth implementation, the first target area is the forehead area;
the first determining unit 41 is configured to determine, based on the key point information corresponding to the forehead area, a twelfth group of deformation regions corresponding to the forehead area from the multiple deformation regions;
the deformation processing unit 42 is configured to stretch or compress the twelfth group of deformation regions in a fifth specific direction, to raise or lower the hairline of the face region; the fifth specific direction is the direction from a forehead key point toward the brow center nearest to that key point, or the direction from a forehead key point away from the brow center nearest to that key point.
Optionally, the first determining unit 41 is configured to determine at least three key points of the forehead area, and to determine the key point information of the forehead area based on the at least three key points and the first group of contour point information below the eyes in the face region.
In some implementations, a first key point of the at least three key points is located on the midline of the forehead area; a second key point and a third key point of the at least three key points are located on the two sides of the midline.
In some implementations, the first determining unit 41 is configured to perform curve fitting based on the key points at the two ends of the first group of contour point information below the eyes in the face region together with the above at least three key points, obtaining curve-fitting key point information; and to interpolate the curve-fitting key point information by a curve interpolation algorithm, obtaining the key point information of the forehead area.
In some optional embodiments of the present application, the first determining unit 41 is configured to detect the face region by a facial key point detection algorithm, obtaining the key point information of the organs of the face region and the key point information of the edge of the face region; and to obtain the outer-edge key point information based on the key point information of the edge of the face region.
In some optional embodiments of the present application, the first determining unit 41 is configured to obtain the first group of contour point information below the eyes in the face region; and to determine the second group of contour point information corresponding to the forehead area, and determine the key point information of the edge of the face region based on the first group of contour point information and the second group of contour point information.
In some optional embodiments of the present application, the first determining unit 41 is configured to determine a relative positional relationship between the key point information of the edge of the face region and the center point of the face region, the relative positional relationship including the distance between an edge key point of the face region and the center point of the face region and the direction of the edge key point relative to the center point; and to extend a first edge key point outward from the face region by a preset distance based on the relative positional relationship, obtaining the outer-edge key point corresponding to the first edge key point; where the first edge key point is any one of the edge key points of the face region, and the preset distance is related to the distance between the first edge key point and the center point of the face region.
In some optional embodiments of the present application, as shown in Fig. 8, the apparatus further includes a second determining unit 43 configured to determine a deflection parameter of the face region and to determine, based on the deflection parameter, a deformation parameter and a deformation direction for each of at least some of the deformation regions, so that each deformation region is deformed according to its corresponding deformation parameter and deformation direction.
In some implementations, the second determining unit 43 is configured to determine a left edge key point, a right edge key point and a center key point of any area of the face region, the area including at least one of the face area, the nose area and the mouth area; to determine a first distance between the left edge key point and the center key point, and a second distance between the right edge key point and the center key point; and to determine the deflection parameter of the face region based on the first distance and the second distance.
In some optional embodiments of the present application, as shown in Fig. 9, the apparatus further includes an image processing unit 44 configured to identify a second target area in the face region and perform feature processing on the second target area, generating a third image; the second target area includes at least one of: the eye-periphery area, the nasolabial-fold area, the tooth area, the eye area, and the apple-cheek area.
In the embodiments of the present application, the first determining unit 41, the deformation processing unit 42, the second determining unit 43 and the image processing unit 44 of the apparatus may each be implemented in practice by a Central Processing Unit (CPU), a Digital Signal Processor (DSP), a Microcontroller Unit (MCU) or a Field-Programmable Gate Array (FPGA).
It should be noted that when the image processing apparatus provided by the above embodiment performs image processing, the division into the above program modules is merely illustrative; in practical applications, the above processing may be assigned to different program modules as needed, i.e., the internal structure of the apparatus may be divided into different program modules to complete all or part of the processing described above. In addition, the image processing apparatus provided by the above embodiment belongs to the same concept as the image processing method embodiments; for its specific implementation, see the method embodiments, which are not repeated here.
An embodiment of the present application further provides an image processing apparatus. Fig. 10 is a schematic diagram of the hardware composition of the image processing apparatus of an embodiment of the present application. As shown in Fig. 10, the image processing apparatus includes a memory 52, a processor 51 and a computer program stored in the memory 52 and executable on the processor 51; the processor 51, when executing the program, implements the steps of the methods of the embodiments of the present application.
It can be understood that the components of the image processing apparatus may be coupled together through a bus system 53, the bus system 53 serving to realize connection and communication between these components. In addition to a data bus, the bus system 53 also includes a power bus, a control bus and a status signal bus; however, for clarity of description, the various buses are all labeled as the bus system 53 in Fig. 10.
It can be understood that the memory 52 may be a volatile memory or a non-volatile memory, or may include both. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a ferromagnetic random access memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be a magnetic disk memory or a magnetic tape memory. The volatile memory may be a Random Access Memory (RAM), which serves as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDRSDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memory 52 described in the embodiments of the present application is intended to include, without being limited to, these and any other suitable types of memory.
The methods disclosed in the above embodiments of the present application may be applied in, or implemented by, the processor 51. The processor 51 may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above methods may be completed by integrated logic circuits of hardware in the processor 51 or by instructions in the form of software. The processor 51 may be a general-purpose processor, a DSP, another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The processor 51 may implement or execute the methods, steps and logic block diagrams disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. A software module may be located in a storage medium, the storage medium being located in the memory 52; the processor 51 reads the information in the memory 52 and completes the steps of the foregoing methods in combination with its hardware.
In an exemplary embodiment, the image processing apparatus may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), FPGAs, general-purpose processors, controllers, MCUs, microprocessors, or other electronic elements, for executing the foregoing methods.
An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the steps of the above methods of the embodiments of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of the units is merely a logical functional division, and there may be other divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described above as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may all be integrated into one processing unit, or each unit may separately serve as one unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
Those of ordinary skill in the art will understand that all or part of the steps implementing the above method embodiments may be completed by hardware related to program instructions; the aforementioned program may be stored in a computer-readable storage medium, and when executed, the program executes the steps of the above method embodiments; the aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a ROM, a RAM, a magnetic disk or an optical disc.
Alternatively, if the above integrated unit of the present application is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the above methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a ROM, a RAM, a magnetic disk or an optical disc.
The above are merely specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any changes or substitutions readily conceivable by those skilled in the art within the technical scope disclosed by the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (46)

  1. An image processing method, the method comprising:
    obtaining a first image, identifying a face region in the first image, and determining key point information related to the face region, the key point information comprising: key point information of the face region and outer-edge key point information; an area corresponding to the outer-edge key point information contains and is larger than the face region;
    determining multiple deformation regions based on the key point information, and performing image deformation processing on the face region based on at least some of the multiple deformation regions to generate a second image.
  2. The method according to claim 1, wherein the key point information of the face region comprises key point information of organs of the face region and key point information of an edge of the face region; the edge of the face region corresponds to a contour of the face region;
    the organ key point information comprises organ center key point information and/or organ contour key point information.
  3. The method according to claim 1 or 2, wherein the determining multiple deformation regions based on the key point information comprises:
    determining the multiple deformation regions based on any three adjacent key points in the key point information.
  4. The method according to any one of claims 1 to 3, wherein the performing image deformation processing on the face region based on at least some of the multiple deformation regions comprises:
    determining a first target area to be processed in the face region;
    determining, based on key point information corresponding to the first target area, a deformation region corresponding to the first target area from the multiple deformation regions;
    performing image deformation processing on the deformation region corresponding to the first target area.
  5. The method according to claim 4, wherein the first target area is an eye area; the eye area comprises a left-eye area and/or a right-eye area;
    the determining, based on the key point information corresponding to the first target area, the deformation region corresponding to the first target area from the multiple deformation regions comprises:
    determining, based on key point information corresponding to the left-eye area, a first group of deformation regions corresponding to the left-eye area from the multiple deformation regions, and/or determining, based on key point information corresponding to the right-eye area, a second group of deformation regions corresponding to the right-eye area from the multiple deformation regions;
    the performing image deformation processing on the deformation region corresponding to the first target area comprises:
    performing image deformation processing on the first group of deformation regions and/or the second group of deformation regions;
    wherein an image deformation direction of the first group of deformation regions is opposite to an image deformation direction of the second group of deformation regions, so that a distance between the left-eye area and the right-eye area is increased or decreased.
  6. The method according to claim 4, wherein the first target area is an eye-corner area; the eye-corner area comprises a corner area of a left eye and/or a corner area of a right eye;
    the determining, based on the key point information corresponding to the first target area, the deformation region corresponding to the first target area from the multiple deformation regions comprises:
    determining, based on key point information corresponding to the corner area of the left eye, a third group of deformation regions corresponding to the corner area of the left eye from the multiple deformation regions, and/or determining, based on key point information corresponding to the corner area of the right eye, a fourth group of deformation regions corresponding to the corner area of the right eye from the multiple deformation regions;
    the performing image deformation processing on the deformation region corresponding to the first target area comprises:
    stretching or compressing the third group of deformation regions and/or the fourth group of deformation regions in a first specific direction, to adjust a position of the corner of the left-eye area and/or a position of the corner of the right-eye area.
  7. The method according to claim 4, wherein the first target area is an eye area; the eye area comprises a left-eye area and/or a right-eye area;
    the determining, based on the key point information corresponding to the first target area, the deformation region corresponding to the first target area from the multiple deformation regions comprises:
    determining, based on key point information corresponding to the left-eye area, a fifth group of deformation regions corresponding to the left-eye area from the multiple deformation regions, and/or determining, based on key point information corresponding to the right-eye area, a sixth group of deformation regions corresponding to the right-eye area from the multiple deformation regions;
    the performing image deformation processing on the deformation region corresponding to the first target area comprises:
    performing deformation processing on the fifth group of deformation regions so that contour key points of the left-eye area rotate relative to a center key point of the left-eye area by an angle satisfying a first set angle, and/or performing deformation processing on the sixth group of deformation regions so that contour key points of the right-eye area rotate relative to a center key point of the right-eye area by an angle satisfying a second set angle.
  8. The method according to claim 4, wherein the first target area is a nose area;
    the determining, based on the key point information corresponding to the first target area, the deformation region corresponding to the first target area from the multiple deformation regions comprises:
    determining, based on key point information corresponding to the nose area, a seventh group of deformation regions corresponding to the nose area from the multiple deformation regions;
    the performing image deformation processing on the deformation region corresponding to the first target area comprises:
    stretching or compressing the seventh group of deformation regions in a second specific direction, to lengthen or shorten the nose area.
  9. The method according to claim 4, wherein the first target area is a nose-wing area; the determining, based on the key point information corresponding to the first target area, the deformation region corresponding to the first target area from the multiple deformation regions comprises:
    determining, based on key point information corresponding to the nose-wing area, an eighth group of deformation regions corresponding to the nose-wing area from the multiple deformation regions;
    the performing image deformation processing on the deformation region corresponding to the first target area comprises:
    compressing or stretching the eighth group of deformation regions in a third specific direction, so that the nose-wing area is narrowed or widened.
  10. The method according to claim 4, wherein the first target area is a chin area or a philtrum area; the determining, based on the key point information corresponding to the first target area, the deformation region corresponding to the first target area from the multiple deformation regions comprises:
    determining, based on key point information corresponding to the chin area or the philtrum area, a ninth group of deformation regions corresponding to the chin area or the philtrum area from the multiple deformation regions;
    the performing image deformation processing on the deformation region corresponding to the first target area comprises:
    compressing or stretching the ninth group of deformation regions in a fourth specific direction, to shorten or lengthen the chin area or the philtrum area.
  11. The method according to claim 4, wherein the first target area is a mouth area; the determining, based on the key point information corresponding to the first target area, the deformation region corresponding to the first target area from the multiple deformation regions comprises:
    determining, based on key point information corresponding to the mouth area, a tenth group of deformation regions corresponding to the mouth area from the multiple deformation regions;
    the performing image deformation processing on the deformation region corresponding to the first target area comprises:
    compressing the tenth group of deformation regions in a direction from an edge of the mouth area toward a center of the mouth area, or stretching the tenth group of deformation regions in a direction from the center of the mouth area toward the edge of the mouth area.
  12. The method according to claim 4, wherein the determining, based on the key point information corresponding to the first target area, the deformation region corresponding to the first target area from the multiple deformation regions comprises:
    determining, based on the key point information of the edge of the face region, an eleventh group of deformation regions corresponding to the face region from the multiple deformation regions;
    the performing image deformation processing on the deformation region corresponding to the first target area comprises:
    compressing the eleventh group of deformation regions in a direction from the edge of the face region toward a midline of the face region, or stretching the eleventh group of deformation regions in a direction from the midline of the face region toward the edge of the face region.
  13. The method according to claim 4, wherein the first target area is a forehead area; the determining, based on the key point information corresponding to the first target area, the deformation region corresponding to the first target area from the multiple deformation regions comprises:
    determining, based on key point information of the forehead area, a twelfth group of deformation regions corresponding to the forehead area from the multiple deformation regions;
    the performing image deformation processing on the deformation region corresponding to the first target area comprises:
    stretching or compressing the twelfth group of deformation regions in a fifth specific direction, to raise or lower a hairline of the face region; the fifth specific direction is a direction from a key point of the forehead area toward a brow center nearest to the key point, or the fifth specific direction is a direction from a key point of the forehead area away from the brow center nearest to the key point.
  14. The method according to claim 13, wherein the key point information of the forehead area is determined by:
    determining at least three key points of the forehead area;
    determining the key point information of the forehead area based on the at least three key points and a first group of contour point information below eyes in the face region.
  15. The method according to claim 14, wherein a first key point of the at least three key points is located on a midline of the forehead area; a second key point and a third key point of the at least three key points are located on two sides of the midline.
  16. The method according to claim 14 or 15, wherein the determining the key point information of the forehead area based on the at least three key points and the first group of contour point information below the eyes in the face region comprises:
    performing curve fitting based on key points located at two ends of the first group of contour point information below the eyes in the face region and the at least three key points, obtaining curve-fitting key point information;
    performing interpolation processing on the curve-fitting key point information based on a curve interpolation algorithm, obtaining the key point information of the forehead area.
  17. The method according to any one of claims 1 to 16, wherein the determining the key point information related to the face region comprises:
    detecting the face region by a facial key point detection algorithm, obtaining the key point information of the organs of the face region and the key point information of the edge of the face region; obtaining the outer-edge key point information based on the key point information of the edge of the face region.
  18. The method according to claim 17, wherein obtaining the key point information of the edge of the face region comprises: obtaining a first group of contour point information below the eyes in the face region;
    determining a second group of contour point information of the forehead area, and determining the key point information of the edge of the face region based on the first group of contour point information and the second group of contour point information.
  19. The method according to claim 17, wherein the obtaining the outer-edge key point information based on the key point information of the edge of the face region comprises:
    determining a relative positional relationship between the key point information of the edge of the face region and a center point of the face region, the relative positional relationship comprising a distance between an edge key point of the face region and the center point of the face region and a direction of the edge key point relative to the center point of the face region;
    extending a first edge key point outward from the face region by a preset distance based on the relative positional relationship, obtaining an outer-edge key point corresponding to the first edge key point; wherein the first edge key point is any one of the edge key points of the face region, and the preset distance is related to the distance between the first edge key point and the center point of the face region.
  20. The method according to any one of claims 1 to 19, wherein the method further comprises:
    determining a deflection parameter of the face region, and determining, based on the deflection parameter, a deformation parameter and a deformation direction for each of the at least some deformation regions, so that each deformation region undergoes image deformation processing according to its corresponding deformation parameter and deformation direction.
  21. The method according to claim 20, wherein the determining the deflection parameter of the face region comprises:
    determining a left edge key point, a right edge key point and a center key point of any area of the face region; the area comprises at least one of: a face area, a nose area, a mouth area;
    determining a first distance between the left edge key point and the center key point, and determining a second distance between the right edge key point and the center key point;
    determining the deflection parameter of the face region based on the first distance and the second distance.
  22. The method according to any one of claims 1 to 21, wherein the method further comprises:
    identifying a second target area in the face region, and performing feature processing on the second target area to generate a third image;
    the second target area comprises at least one of: an eye-periphery area, a nasolabial-fold area, a tooth area, an eye area, an apple-cheek area.
  23. An image processing apparatus, the apparatus comprising: a first determining unit and a deformation processing unit; wherein
    the first determining unit is configured to obtain a first image, identify a face region in the first image, and determine key point information related to the face region, the key point information comprising: key point information of the face region and outer-edge key point information; an area corresponding to the outer-edge key point information contains and is larger than the face region; and is further configured to determine multiple deformation regions based on the key point information;
    the deformation processing unit is configured to perform image deformation processing on the face region based on at least some of the multiple deformation regions to generate a second image.
  24. The apparatus according to claim 23, wherein the key point information of the face region comprises key point information of organs of the face region and key point information of an edge of the face region; the edge of the face region corresponds to a contour of the face region;
    the organ key point information comprises organ center key point information and/or organ contour key point information.
  25. The apparatus according to claim 23 or 24, wherein the first determining unit is configured to determine the multiple deformation regions based on any three adjacent key points in the key point information.
  26. The apparatus according to any one of claims 23 to 25, wherein the first determining unit is configured to determine a first target area to be processed in the face region, and to determine, based on key point information corresponding to the first target area, a deformation region corresponding to the first target area from the multiple deformation regions;
    the deformation processing unit is configured to perform image deformation processing on the deformation region corresponding to the first target area.
  27. The apparatus according to claim 26, wherein the first target area is an eye area; the eye area comprises a left-eye area and/or a right-eye area;
    the first determining unit is configured to determine, based on key point information corresponding to the left-eye area, a first group of deformation regions corresponding to the left-eye area from the multiple deformation regions, and/or to determine, based on key point information corresponding to the right-eye area, a second group of deformation regions corresponding to the right-eye area from the multiple deformation regions;
    the deformation processing unit is configured to perform image deformation processing on the first group of deformation regions and/or the second group of deformation regions, wherein an image deformation direction of the first group of deformation regions is opposite to an image deformation direction of the second group of deformation regions, so that a distance between the left-eye area and the right-eye area is increased or decreased.
  28. The apparatus according to claim 26, wherein the first target area is an eye-corner area; the eye-corner area comprises a corner area of a left eye and/or a corner area of a right eye;
    the first determining unit is configured to determine, based on key point information corresponding to the corner area of the left eye, a third group of deformation regions corresponding to the corner area of the left eye from the multiple deformation regions, and/or to determine, based on key point information corresponding to the corner area of the right eye, a fourth group of deformation regions corresponding to the corner area of the right eye from the multiple deformation regions;
    the deformation processing unit is configured to stretch or compress the third group of deformation regions and/or the fourth group of deformation regions in a first specific direction, to adjust a position of the corner of the left-eye area and/or a position of the corner of the right-eye area.
  29. The apparatus according to claim 26, wherein the first target area is an eye area; the eye area comprises a left-eye area and/or a right-eye area;
    the first determining unit is configured to determine, based on key point information corresponding to the left-eye area, a fifth group of deformation regions corresponding to the left-eye area from the multiple deformation regions, and/or to determine, based on key point information corresponding to the right-eye area, a sixth group of deformation regions corresponding to the right-eye area from the multiple deformation regions;
    the deformation processing unit is configured to perform deformation processing on the fifth group of deformation regions so that contour key points of the left-eye area rotate relative to a center key point of the left-eye area by an angle satisfying a first set angle, and/or to perform deformation processing on the sixth group of deformation regions so that contour key points of the right-eye area rotate relative to a center key point of the right-eye area by an angle satisfying a second set angle.
  30. The apparatus according to claim 26, wherein the first target area is a nose area;
    the first determining unit is configured to determine, based on key point information corresponding to the nose area, a seventh group of deformation regions corresponding to the nose area from the multiple deformation regions;
    the deformation processing unit is configured to stretch or compress the seventh group of deformation regions in a second specific direction, to lengthen or shorten the nose area.
  31. The apparatus according to claim 26, wherein the first target area is a nose-wing area;
    the first determining unit is configured to determine, based on key point information corresponding to the nose-wing area, an eighth group of deformation regions corresponding to the nose-wing area from the multiple deformation regions;
    the deformation processing unit is configured to compress or stretch the eighth group of deformation regions in a third specific direction, so that the nose-wing area is narrowed or widened.
  32. The apparatus according to claim 26, wherein the first target area is a chin area or a philtrum area;
    the first determining unit is configured to determine, based on key point information corresponding to the chin area or the philtrum area, a ninth group of deformation regions corresponding to the chin area or the philtrum area from the multiple deformation regions;
    the deformation processing unit is configured to compress or stretch the ninth group of deformation regions in a fourth specific direction, to shorten or lengthen the chin area or the philtrum area.
  33. The apparatus according to claim 26, wherein the first target area is a mouth area;
    the first determining unit is configured to determine, based on key point information corresponding to the mouth area, a tenth group of deformation regions corresponding to the mouth area from the multiple deformation regions;
    the deformation processing unit is configured to compress the tenth group of deformation regions in a direction from an edge of the mouth area toward a center of the mouth area, or to stretch the tenth group of deformation regions in a direction from the center of the mouth area toward the edge of the mouth area.
  34. The apparatus according to claim 26, wherein the first determining unit is configured to determine, based on the key point information of the edge of the face region, an eleventh group of deformation regions corresponding to the face region from the multiple deformation regions;
    the deformation processing unit is configured to compress the eleventh group of deformation regions in a direction from the edge of the face region toward a midline of the face region, or to stretch the eleventh group of deformation regions in a direction from the midline of the face region toward the edge of the face region.
  35. The apparatus according to claim 26, wherein the first target area is a forehead area;
    the first determining unit is configured to determine, based on key point information corresponding to the forehead area, a twelfth group of deformation regions corresponding to the forehead area from the multiple deformation regions;
    the deformation processing unit is configured to stretch or compress the twelfth group of deformation regions in a fifth specific direction, to raise or lower a hairline of the face region; the fifth specific direction is a direction from a key point of the forehead area toward a brow center nearest to the key point, or the fifth specific direction is a direction from a key point of the forehead area away from the brow center nearest to the key point.
  36. The apparatus according to claim 35, wherein the first determining unit is configured to determine at least three key points of the forehead area, and to determine key point information of the forehead area based on the at least three key points and a first group of contour point information below eyes in the face region.
  37. The apparatus according to claim 36, wherein a first key point of the at least three key points is located on a midline of the forehead area; a second key point and a third key point of the at least three key points are located on two sides of the midline.
  38. The apparatus according to claim 36 or 37, wherein the first determining unit is configured to perform curve fitting based on key points located at two ends of the first group of contour point information below the eyes in the face region and the at least three key points, obtaining curve-fitting key point information; and to perform interpolation processing on the curve-fitting key point information based on a curve interpolation algorithm, obtaining the key point information of the forehead area.
  39. The apparatus according to any one of claims 23 to 38, wherein the first determining unit is configured to detect the face region by a facial key point detection algorithm, obtaining the key point information of the organs of the face region and the key point information of the edge of the face region; and to obtain the outer-edge key point information based on the key point information of the edge of the face region.
  40. The apparatus according to claim 39, wherein the first determining unit is configured to obtain a first group of contour point information below the eyes in the face region; and to determine a second group of contour point information of the forehead area, and determine the key point information of the edge of the face region based on the first group of contour point information and the second group of contour point information.
  41. The apparatus according to claim 39, wherein the first determining unit is configured to determine a relative positional relationship between the key point information of the edge of the face region and a center point of the face region, the relative positional relationship comprising a distance between an edge key point of the face region and the center point of the face region and a direction of the edge key point relative to the center point of the face region; and to extend a first edge key point outward from the face region by a preset distance based on the relative positional relationship, obtaining an outer-edge key point corresponding to the first edge key point; wherein the first edge key point is any one of the edge key points of the face region, and the preset distance is related to the distance between the first edge key point and the center point of the face region.
  42. The apparatus according to any one of claims 23 to 41, wherein the apparatus further comprises a second determining unit configured to determine a deflection parameter of the face region, and to determine, based on the deflection parameter, a deformation parameter and a deformation direction for each of the at least some deformation regions, so that each deformation region undergoes image deformation processing according to its corresponding deformation parameter and deformation direction.
  43. The apparatus according to claim 42, wherein the second determining unit is configured to determine a left edge key point, a right edge key point and a center key point of any area of the face region, the area comprising at least one of: a face area, a nose area, a mouth area; to determine a first distance between the left edge key point and the center key point, and a second distance between the right edge key point and the center key point; and to determine the deflection parameter of the face region based on the first distance and the second distance.
  44. The apparatus according to any one of claims 23 to 43, wherein the apparatus further comprises an image processing unit configured to identify a second target area in the face region and perform feature processing on the second target area, generating a third image; the second target area comprises at least one of: an eye-periphery area, a nasolabial-fold area, a tooth area, an eye area, an apple-cheek area.
  45. A computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the steps of the method according to any one of claims 1 to 22.
  46. An image processing apparatus, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor, when executing the program, implementing the steps of the method according to any one of claims 1 to 22.
PCT/CN2019/119534 2019-03-06 2019-11-19 Image processing method and apparatus WO2020177394A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
SG11202006345UA SG11202006345UA (en) 2019-03-06 2019-11-19 Image processing methods and apparatuses
KR1020207013711A KR102442483B1 (ko) 2019-03-06 2019-11-19 이미지 처리 방법 및 장치
JP2020536145A JP7160925B2 (ja) 2019-03-06 2019-11-19 画像処理方法及び装置
US16/920,972 US11244449B2 (en) 2019-03-06 2020-07-06 Image processing methods and apparatuses

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910169503.4A CN109934766B (zh) 2019-03-06 2019-03-06 Image processing method and apparatus
CN201910169503.4 2019-03-06

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/920,972 Continuation US11244449B2 (en) 2019-03-06 2020-07-06 Image processing methods and apparatuses

Publications (1)

Publication Number Publication Date
WO2020177394A1 true WO2020177394A1 (zh) 2020-09-10

Family

ID=66986598

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/119534 WO2020177394A1 (zh) 2019-03-06 2019-11-19 一种图像处理方法及装置

Country Status (7)

Country Link
US (1) US11244449B2 (zh)
JP (1) JP7160925B2 (zh)
KR (1) KR102442483B1 (zh)
CN (1) CN109934766B (zh)
SG (1) SG11202006345UA (zh)
TW (1) TW202034280A (zh)
WO (1) WO2020177394A1 (zh)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109087239B (zh) * 2018-07-25 2023-03-21 腾讯科技(深圳)有限公司 Face image processing method and apparatus, and storage medium
CN109934766B (zh) * 2019-03-06 2021-11-30 北京市商汤科技开发有限公司 Image processing method and apparatus
CN110728620A (zh) * 2019-09-30 2020-01-24 北京市商汤科技开发有限公司 Image processing method and apparatus, and electronic device
EP3971820A4 (en) * 2019-09-30 2022-08-10 Beijing Sensetime Technology Development Co., Ltd. IMAGE PROCESSING METHOD, DEVICE AND ELECTRONIC DEVICE
CN111104846B (zh) * 2019-10-16 2022-08-30 平安科技(深圳)有限公司 Data detection method and apparatus, computer device and storage medium
CN111031305A (zh) * 2019-11-21 2020-04-17 北京市商汤科技开发有限公司 Image processing method and apparatus, image device and storage medium
JP2022512262A (ja) 2019-11-21 2022-02-03 ベイジン センスタイム テクノロジー デベロップメント カンパニー, リミテッド Image processing method and apparatus, image processing device and storage medium
CN111145110B (zh) * 2019-12-13 2021-02-19 北京达佳互联信息技术有限公司 Image processing method and apparatus, electronic device and storage medium
CN111179156B (zh) * 2019-12-23 2023-09-19 北京中广上洋科技股份有限公司 Video beautification method based on face detection
CN111753685B (zh) * 2020-06-12 2024-01-12 北京字节跳动网络技术有限公司 Method and apparatus for adjusting the facial hairline in an image, and electronic device
CN111723803B (zh) * 2020-06-30 2023-09-26 广州繁星互娱信息科技有限公司 Image processing method, apparatus, device and storage medium
CN113034349B (zh) * 2021-03-24 2023-11-14 北京达佳互联信息技术有限公司 Image processing method and apparatus, electronic device and storage medium
CN113344878B (zh) * 2021-06-09 2022-03-18 北京容联易通信息技术有限公司 Image processing method and system
CN116109479B (zh) * 2023-04-17 2023-07-18 广州趣丸网络科技有限公司 Face adjustment method and apparatus for a virtual avatar, computer device and storage medium
CN117274432B (zh) * 2023-09-20 2024-05-14 书行科技(北京)有限公司 Method, apparatus, device and readable storage medium for generating image outline special effects

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120113106A1 (en) * 2010-11-04 2012-05-10 Electronics And Telecommunications Research Institute Method and apparatus for generating face avatar
CN104992402A (zh) * 2015-07-02 2015-10-21 广东欧珀移动通信有限公司 一种美颜处理方法及装置
CN107330868A (zh) * 2017-06-26 2017-11-07 北京小米移动软件有限公司 图片处理方法及装置
CN107341777A (zh) * 2017-06-26 2017-11-10 北京小米移动软件有限公司 图片处理方法及装置
CN109934766A (zh) * 2019-03-06 2019-06-25 北京市商汤科技开发有限公司 一种图像处理方法及装置

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4678603B2 (ja) * 2007-04-20 2011-04-27 富士フイルム株式会社 Imaging apparatus and imaging method
JP2011053942A (ja) * 2009-09-02 2011-03-17 Seiko Epson Corp Image processing apparatus, image processing method and image processing program
KR101558202B1 (ko) * 2011-05-23 2015-10-12 한국전자통신연구원 Apparatus and method for generating animation using an avatar
KR101165017B1 (ko) * 2011-10-31 2012-07-13 (주) 어펙트로닉스 System and method for generating a three-dimensional avatar
CN103337085A (zh) 2013-06-17 2013-10-02 大连理工大学 Efficient portrait face deformation method
CN104268591B (zh) * 2014-09-19 2017-11-28 海信集团有限公司 Facial key point detection method and apparatus
JP6506053B2 (ja) * 2015-03-09 2019-04-24 学校法人立命館 Image processing apparatus, image processing method, and computer program
CN105205779B (zh) 2015-09-15 2018-10-19 厦门美图之家科技有限公司 Eye image processing method and system based on image deformation, and photographing terminal
US9978119B2 (en) * 2015-10-22 2018-05-22 Korea Institute Of Science And Technology Method for automatic facial impression transformation, recording medium and device for performing the method
CN107103271A (zh) 2016-02-23 2017-08-29 芋头科技(杭州)有限公司 Face detection method
CN105931178A (zh) * 2016-04-15 2016-09-07 乐视控股(北京)有限公司 Image processing method and apparatus
CN105975935B (zh) * 2016-05-04 2019-06-25 腾讯科技(深圳)有限公司 Face image processing method and apparatus
CN108229279B (zh) * 2017-04-14 2020-06-02 深圳市商汤科技有限公司 Face image processing method and apparatus, and electronic device
US10210648B2 (en) * 2017-05-16 2019-02-19 Apple Inc. Emojicon puppeting
CN108876704B (zh) * 2017-07-10 2022-03-04 北京旷视科技有限公司 Method and apparatus for face image deformation, and computer storage medium
CN107506732B (zh) * 2017-08-25 2021-03-30 奇酷互联网络科技(深圳)有限公司 Mapping method, device, mobile terminal and computer storage medium
CN107680033B (zh) * 2017-09-08 2021-02-19 北京小米移动软件有限公司 Picture processing method and apparatus
CN107705248A (zh) * 2017-10-31 2018-02-16 广东欧珀移动通信有限公司 Image processing method and apparatus, electronic device and computer-readable storage medium
CN108765274A (zh) * 2018-05-31 2018-11-06 北京市商汤科技开发有限公司 Image processing method and apparatus, and computer storage medium
CN108830783B (zh) * 2018-05-31 2021-07-02 北京市商汤科技开发有限公司 Image processing method and apparatus, and computer storage medium
CN109087238B (zh) * 2018-07-04 2021-04-23 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device, and computer-readable storage medium
CN108985241B (zh) * 2018-07-23 2023-05-02 腾讯科技(深圳)有限公司 Image processing method and apparatus, computer device and storage medium
CN109087239B (zh) * 2018-07-25 2023-03-21 腾讯科技(深圳)有限公司 Face image processing method and apparatus, and storage medium
CN109147012B (zh) * 2018-09-20 2023-04-14 麒麟合盛网络技术股份有限公司 Image processing method and apparatus
CN109377446B (zh) * 2018-10-25 2022-08-30 北京市商汤科技开发有限公司 Face image processing method and apparatus, electronic device and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120113106A1 (en) * 2010-11-04 2012-05-10 Electronics And Telecommunications Research Institute Method and apparatus for generating face avatar
CN104992402A (zh) * 2015-07-02 2015-10-21 广东欧珀移动通信有限公司 Beautification processing method and apparatus
CN107330868A (zh) * 2017-06-26 2017-11-07 北京小米移动软件有限公司 Picture processing method and apparatus
CN107341777A (zh) * 2017-06-26 2017-11-10 北京小米移动软件有限公司 Picture processing method and apparatus
CN109934766A (zh) * 2019-03-06 2019-06-25 北京市商汤科技开发有限公司 Image processing method and apparatus

Also Published As

Publication number Publication date
US20200334812A1 (en) 2020-10-22
CN109934766A (zh) 2019-06-25
TW202034280A (zh) 2020-09-16
JP2021517999A (ja) 2021-07-29
KR102442483B1 (ko) 2022-09-13
CN109934766B (zh) 2021-11-30
SG11202006345UA (en) 2020-10-29
JP7160925B2 (ja) 2022-10-25
KR20200107930A (ko) 2020-09-16
US11244449B2 (en) 2022-02-08

Similar Documents

Publication Publication Date Title
WO2020177394A1 (zh) 一种图像处理方法及装置
US11055906B2 (en) Method, device and computing device of face image fusion
US11043011B2 (en) Image processing method, apparatus, terminal, and storage medium for fusing images of two objects
JP6636154B2 (ja) 顔画像処理方法および装置、ならびに記憶媒体
US11288796B2 (en) Image processing method, terminal device, and computer storage medium
US11238569B2 (en) Image processing method and apparatus, image device, and storage medium
CN110049351B (zh) 视频流中人脸变形的方法和装置、电子设备、计算机可读介质
WO2021062998A1 (zh) 一种图像处理方法、装置和电子设备
WO2020220679A1 (zh) 一种图像处理方法、装置和计算机存储介质
WO2020057667A1 (zh) 一种图像处理方法、装置和计算机存储介质
CN109242760B (zh) 人脸图像的处理方法、装置和电子设备
CN113592988A (zh) 三维虚拟角色图像生成方法及装置
KR20200133778A (ko) 이미지 처리 방법, 장치 및 컴퓨터 저장 매체
JP7102554B2 (ja) 画像処理方法、装置及び電子機器
KR20160139657A (ko) 메시 워핑을 이용한 가상 성형수술의 방법 및 시스템
US11734953B2 (en) Image parsing method and apparatus
JP6905588B2 (ja) 画像処理装置、撮像装置、画像印刷装置、画像処理装置の制御方法、および画像処理プログラム
Chou et al. Simulation of face/hairstyle swapping in photographs with skin texture synthesis
CN113421197B (zh) 一种美颜图像的处理方法及其处理系统
US20220374649A1 (en) Face swapping with neural network-based geometry refining
CN110766603A (zh) 一种图像处理方法、装置和计算机存储介质
CN116343276A (zh) 人脸处理方法、装置、电子设备、芯片及存储介质

Legal Events

Date Code Title Description
ENP Entry into the national phase: Ref document number: 2020536145; Country of ref document: JP; Kind code of ref document: A
121 Ep: the epo has been informed by wipo that ep was designated in this application: Ref document number: 19918168; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase: Ref country code: DE
122 Ep: pct application non-entry in european phase: Ref document number: 19918168; Country of ref document: EP; Kind code of ref document: A1