WO2022161009A1 - Image processing method and apparatus, storage medium, and terminal - Google Patents

Image processing method and apparatus, storage medium, and terminal

Info

Publication number
WO2022161009A1
WO2022161009A1, PCT/CN2021/139036, CN2021139036W
Authority
WO
WIPO (PCT)
Prior art keywords
face
image
skin color
eye
region
Prior art date
Application number
PCT/CN2021/139036
Other languages
English (en)
French (fr)
Inventor
谢富名
Original Assignee
展讯通信(上海)有限公司
Application filed by 展讯通信(上海)有限公司
Publication of WO2022161009A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face

Definitions

  • Embodiments of the present invention relate to the field of image processing, and in particular, to an image processing method and device, a storage medium, and a terminal.
  • Face beautification is a basic camera function of portable mobile devices such as smartphones: with one tap, users can obtain high-quality, high-definition portrait pictures without complex post-processing such as manual retouching.
  • The technical problem solved by the embodiments of the present invention is that existing beautification processing of portrait images yields poor results.
  • An embodiment of the present invention provides an image processing method, which includes: performing face recognition on an image to be processed to obtain at least one face region; performing face attribute detection on each face region to obtain face attribute information of each face region; and, based on the face attribute information of each face region, performing beautification processing on the corresponding face region to obtain a beautified image corresponding to the image to be processed.
  • Optionally, performing beautification processing on the corresponding face regions based on the face attribute information of each face region includes at least one of the following: performing skin-smoothing ("microdermabrasion") processing on the corresponding face region based on the face attribute information of each face region; and performing whitening processing on the corresponding face region based on the face attribute information of each face region.
  • Optionally, the face attribute information includes face gender and age, and performing skin-smoothing processing on the corresponding face region based on the face attribute information of each face region includes: determining the skin-smoothing level of each face region based on its face gender and age; and smoothing each corresponding face region at its skin-smoothing level.
  • Optionally, the face attribute information includes ethnic skin color information, and performing whitening processing on the corresponding face regions based on the face attribute information of each face region includes: calculating the skin color brightness of each face region; determining the whitening intensity of each face region according to its skin color brightness and ethnic skin color information, where the ethnic skin color information characterizes the ethnicity of the subject; and whitening each corresponding face region at its whitening intensity.
  • Optionally, the image processing method further includes: performing facial skin color detection on the image to be processed to obtain a face skin color template for each face region, where the face skin color template characterizes the probability that each pixel is facial skin; and performing beautification processing on the corresponding face region based on the face attribute information and the face skin color template of each face region.
  • Optionally, performing beautification processing on the corresponding face region based on the face attribute information and the face skin color template of each face region includes at least one of the following: performing skin-smoothing processing on the corresponding face region based on the face attribute information and face skin color template of each face region; and performing whitening processing on the corresponding face region based on the face attribute information and face skin color template of each face region.
  • Optionally, performing skin-smoothing processing on the corresponding face region based on the face attribute information and the face skin color template of each face region includes: determining a target skin-smoothing level of each face region based on its face attribute information; and smoothing the corresponding face region based on the target skin-smoothing level and the face skin color template of each face region to obtain the beautified image.
  • Optionally, smoothing the corresponding face region based on the target skin-smoothing level and face skin color template of each face region to obtain the beautified image includes: smoothing the corresponding face region at its target skin-smoothing level to obtain a first image; obtaining a first fusion coefficient based on the face skin color template, the target skin-smoothing level, and the maximum skin-smoothing level; and fusing the image to be processed with the first image using the first fusion coefficient to obtain the beautified image.
  • Optionally, performing whitening processing on the corresponding face region based on the face attribute information and the face skin color template of each face region includes: calculating the skin color brightness of each face region; determining the whitening intensity of each face region according to its skin color brightness and ethnic skin color information, where the ethnic skin color information characterizes the ethnicity of the subject and the face attribute information includes the ethnic skin color information; determining the facial skin area within each face region according to its face skin color template; and whitening the facial skin area within each face region at the determined whitening intensity.
  • Optionally, performing facial skin color detection on the image to be processed to obtain a face skin color template for each face region includes: when in video mode, performing face recognition on the image to be processed and aligning face key points for each face region to determine the positions of the face key points on the image to be processed; selecting face contour key points from the face key points; and triangulating and rendering the face contour key points to obtain the face skin color template.
  • Optionally, the image processing method further includes: obtaining a first intermediate template after triangulating and rendering the face contour key points; and filtering the first intermediate template to obtain the face skin color template.
  • Optionally, filtering the first intermediate template to obtain the face skin color template includes: calculating, from the positions of the face key points, the interpupillary distance and the relative distance between the center point of the interpupillary distance and the center of the mouth; determining the filter radius from the interpupillary distance and the relative distance; and filtering the first intermediate template based on the filter radius to obtain the face skin color template.
  • Optionally, performing facial skin color detection on the image to be processed to obtain the face skin color template of each face includes: when in photographing mode, performing facial skin color segmentation on the image to be processed; and obtaining each face skin color template from the facial skin color segmentation result.
  • Optionally, the image processing method further includes: obtaining a second intermediate template after performing facial skin color segmentation on the image to be processed; and filtering the second intermediate template to obtain the face skin color template.
  • Optionally, filtering the second intermediate template includes: filtering it with adaptive fast guided filtering, whose filter parameters include a threshold delimiting smooth regions from edge regions, a filter radius, and a downsampling ratio, where the filter radius is related to the interpupillary distance and the relative distance between the center point of the interpupillary distance and the center of the mouth, and the downsampling ratio is related to the sizes of the image to be processed and of the face skin color template.
  • Optionally, the image processing method further includes: aligning eye key points for each face region in the image to be processed; calculating the eye size coefficient of each face region from the positions of the aligned eye key points on the image to be processed; determining the eye type of each face region from its eye size coefficient; determining an eye magnification factor matching the eye type of each face region; and enlarging the corresponding eye region using the determined eye magnification factor.
  • Optionally, calculating the eye size coefficient of each face region from the positions of the aligned eye key points on the image to be processed includes: calculating the height and width of each eye, where the height of an eye is the distance between the centers of the upper and lower eyelids and the width is the distance between the two eye corners; obtaining the size coefficient of each eye from the ratio of its height to its width; and determining the eye size coefficient from the size coefficients of the individual eyes.
  • Optionally, the image processing method further includes: when the eye size coefficient is greater than a preset first threshold, calculating the ratio of the eye width to the face width; and determining the eye magnification factor from that ratio.
  • Optionally, the image processing method further includes: calculating the face shape of each face region; determining a face-slimming coefficient adapted to each face shape; and slimming each corresponding face region using the coefficient adapted to its face shape.
  • Optionally, calculating the face shape of each face region includes: aligning chin key points for each face region in the image to be processed; calculating the chin angle from the key point at the bottom of the chin and the key points on the two sides of the chin, the chin angle being the angle between the lines connecting the bottom chin key point to the two side key points; and determining the face shape from the chin angle.
  • An embodiment of the present invention further provides an image processing apparatus, including: a face recognition unit, configured to perform face recognition on the image to be processed to obtain at least one face region; a face attribute detection unit, configured to perform face attribute detection on each face region to obtain the face attribute information of each face region; and a processing unit, configured to perform beautification processing on the corresponding face region based on the face attribute information of each face region, to obtain a beautified image corresponding to the image to be processed.
  • An embodiment of the present invention further provides a storage medium, which is a non-volatile or non-transitory storage medium storing a computer program; when the computer program is run by a processor, the steps of any of the above image processing methods are executed.
  • An embodiment of the present invention further provides a terminal, including a memory and a processor, the memory storing a computer program that can run on the processor; when the processor runs the computer program, the steps of any of the above image processing methods are executed.
  • Face recognition is performed on the image to be processed to obtain at least one face region; face attribute detection is performed on each face region to obtain its face attribute information; and each corresponding face region is beautified according to its face attribute information, yielding the beautified image corresponding to the image to be processed. Because the beautification of each face region is based on that region's own face attribute information, the characteristics of the corresponding face are taken into account during processing, so the beautification effect of the resulting image is improved.
  • FIG. 1 is a flowchart of an image processing method in an embodiment of the present invention
  • FIG. 2 is a schematic diagram of a key point of a human face in an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of an image processing apparatus in an embodiment of the present invention.
  • The face images obtained by prior-art beautification methods are prone to excessive or insufficient beautification, resulting in poor beautification effects that are difficult to reconcile with user experience and needs.
  • To solve this, in embodiments of the present invention, face recognition is performed on the image to be processed to obtain at least one face region; face attribute detection is performed on each face region to obtain its face attribute information; and each corresponding face region is beautified according to its own face attribute information, yielding the beautified image. Because each region's beautification is based on its own attribute information, the characteristics of the corresponding face can be considered, so the beautification effect of the resulting image is improved.
  • An embodiment of the present invention provides an image processing method. The method may be executed by a terminal, by a chip for image processing within the terminal, or by another device or module with image processing functions.
  • Step S11: perform face recognition on the image to be processed to obtain at least one face region.
  • Face recognition can be performed on the image to be processed in various ways: for example, using artificial intelligence (AI) recognition, or using a traditional face recognition method. It can be understood that face recognition may also be performed in other ways, which are not limited here.
  • By performing face recognition on the image to be processed, one or more face regions are obtained from the recognition result; each face region corresponds to the face of one subject.
  • Step S12: perform face attribute detection on each face region to obtain the face attribute information of each face region.
  • The face attribute information may include at least one of the following: face gender, age, and ethnic skin color information. The face attribute information characterizes the features of each subject, and the ethnic skin color information characterizes the subject's ethnicity.
  • Ethnicities may, for example, be divided into Black, Indian, Asian, and White. It is understandable that other classifications exist and can be configured according to need, which is not limited here.
  • In some embodiments, face attribute detection may be performed on each recognized face region using deep-learning-based face attribute analysis to obtain the face attribute information.
  • In other embodiments, face attribute detection may be performed on each face region using an AI recognition method to obtain the face attribute information of each face region.
  • Step S13: based on the face attribute information of each face region, perform beautification processing on the corresponding face region to obtain a beautified image corresponding to the image to be processed.
  • In specific implementations, after the attribute information of each face region is obtained, beautification processing is performed on each corresponding region on that basis, yielding the beautified image corresponding to the image to be processed.
  • As can be seen from the above, because the beautification of each face region is driven by that region's own face attribute information, the characteristics of the corresponding face can be taken into account, so the beautification effect of the resulting image is improved.
  • The beautification processing in the embodiments of the present invention refers to beautifying or retouching the image, including but not limited to skin-smoothing and whitening of the face regions in the image to be processed.
  • Skin smoothing mainly refers to removing noise from the face region: it removes spots, blemishes, and uneven color from the subject's skin so that the skin appears more delicate after processing. In a specific embodiment, the face regions in the image to be processed are smoothed mainly by filtering the image.
  • a filtering algorithm with edge-preserving filtering effect can be used to filter the face area.
  • Algorithms with edge-preserving filtering effects may include local mean square error filtering, guided filtering, and the like.
  • the whitening process refers to the adjustment of the brightness of the pixels in the face area.
  • the skin tone brightness of the beautified object can be enhanced by the whitening process.
  • In specific implementations, skin-smoothing processing may be performed on the corresponding face regions based on the face attribute information of each face region, and whitening processing may likewise be performed on that basis. According to actual needs, a face region may be only smoothed, only whitened, or both smoothed and whitened.
  • Different skin-smoothing levels produce different smoothing results. Research shows that, to obtain a smoothing effect suited to the subject, the level can be determined from gender and age. To this end, in embodiments of the present invention the face attribute information may include face gender and age; the skin-smoothing level of each face region is determined from its face gender and age, and each corresponding region is smoothed at its own level.
  • For example, if face region A has a smoothing level of one and face region B a level of three, region A is smoothed at level one and region B at level three.
  • In some embodiments, the skin-smoothing level may be tied to the filter parameters used during filtering; different levels are realized by adjusting those parameters.
  • For example, female or older subjects can use a higher smoothing level, while male or younger subjects can use a weaker one. The higher the level, the stronger the smoothing effect and the more delicate the subject's skin appears after processing.
  • In some embodiments, a corresponding smoothing level range may be configured per gender, with the range for male faces differing from that for female faces; for example, female faces may be assigned higher smoothing levels than male faces.
  • In some embodiments, smoothing level ranges may be configured for different age groups, or a mapping from age to smoothing level may be configured, so that a suitable level can be determined from the age in the face attribute information. For example, when the age indicates an infant, the corresponding level is lower than when it indicates an adult; when the age indicates an elderly subject, the level is higher than when it indicates a young adult.
  • When actually determining the smoothing level, one or both of the above gender-to-level and age-to-level correspondences may be considered. In this way a smoothing level suited to the actual situation of each face region is obtained, and each region can be smoothed in a targeted manner, avoiding both the loss of detail caused by too high a level and the poor results caused by too low a level. A minimal sketch of such a mapping follows.
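  • The sketch below shows one way such a gender/age-to-level mapping could be written. The level ranges, age breakpoints, and function name are illustrative assumptions; the patent does not specify concrete values.

```python
def smoothing_level(gender: str, age: int, max_level: int = 10) -> int:
    """Pick a skin-smoothing level from face attributes (illustrative).

    Female faces get a higher base range than male faces, and the
    level grows with age: low for infants, highest for the elderly.
    """
    lo, hi = (4, max_level) if gender == "female" else (2, max_level - 3)
    if age < 3:      # infant: minimal smoothing to preserve detail
        return lo
    if age < 30:     # youth: lower part of the range
        return lo + (hi - lo) // 3
    if age < 55:     # middle-aged
        return lo + 2 * (hi - lo) // 3
    return hi        # elderly: strongest smoothing

print(smoothing_level("female", 40))  # -> 8 with the defaults above
```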
  • In specific implementations, the face attribute information may include ethnic skin color information. Whitening the corresponding face region may specifically include: calculating the skin color brightness of each face region; determining the whitening intensity of each face region from its skin color brightness and ethnic skin color information, where the ethnic skin color information characterizes the subject's ethnicity; and whitening each corresponding face region at its whitening intensity.
  • The higher the whitening intensity, the brighter the skin after whitening and the stronger the whitening effect. A brightness adjustment coefficient can be used to represent the whitening intensity, with skin brightness adjusted through this coefficient: different intensities correspond to different coefficients, and the greater the intensity, the larger the coefficient. If the adjusted skin brightness exceeds the pre-adjustment brightness, whitening has been achieved.
  • It should be noted that whitening processing can also reduce image brightness, for example lowering the skin brightness in the face region to correct overexposure of the image to be processed.
  • The skin color brightness is inversely correlated with the whitening intensity: the lower the brightness, the greater the intensity, and the higher the brightness, the lower the intensity.
  • Corresponding whitening intensity ranges may be configured for different ethnicities. After the ethnicity of a face region is determined, a whitening intensity suited to its skin brightness is selected within that ethnicity's range, as sketched below.
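  • The following sketch shows one such selection rule. The per-ethnicity ranges and the linear inverse mapping are illustrative assumptions, not values from the patent.

```python
def whitening_intensity(skin_luma: float, ethnicity: str,
                        bright_max: float = 10.0) -> float:
    """Pick a whitening intensity inversely related to skin brightness.

    skin_luma is the mean Y-channel value of the facial skin in
    [0, 255]; each ethnicity gets an assumed allowed intensity range.
    """
    ranges = {"deep": (3.0, bright_max),   # assumed range per ethnicity
              "medium": (2.0, 7.0),
              "light": (0.0, 4.0)}
    lo, hi = ranges.get(ethnicity, (0.0, bright_max))
    t = 1.0 - min(max(skin_luma / 255.0, 0.0), 1.0)  # darker -> larger t
    return lo + t * (hi - lo)
```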
  • In specific implementations, the skin color brightness of each face can be calculated as follows. Face recognition and face key point alignment are performed on the image to be processed; referring to FIG. 2, which shows face key points in an embodiment of the present invention, alignment yields the positions of the key points within each face region, and the skin color brightness of the face is then calculated from the brightness of pixels in the face region.
  • The face key points shown in FIG. 2 are pt1 to pt123 (1 to 123 in the figure), 123 key points in total. In practical applications the number of face key points is not limited to this and may be other values, which will not be repeated here.
  • For example, a skin region is selected between the eyes and the mouth, and the skin color brightness of the face region is obtained from the brightness of pixels in the selected region. In some embodiments, the average brightness of all pixels in the selected region is taken as the face skin brightness; in other embodiments, a weight is configured for each pixel and the brightness is computed from the per-pixel brightness values and their weights. Other calculation methods may also be used, which will not be enumerated here.
  • It should be noted that the selected region should lie within the facial skin area to ensure the accuracy of the calculated face skin brightness.
  • In specific implementations, the image processing can be performed in the YUV color space, where "Y" represents luminance (Luma), that is, the grayscale value, and "U" and "V" represent chrominance (Chroma), describing the color and saturation of the image.
  • Image processing may also be performed in the RGB color space, where R, G, and B are the red, green, and blue channels. In that case, when whitening is involved, the RGB image is converted to YUV, the skin brightness is measured on the Y channel and raised to perform the whitening, and the result is converted back to RGB.
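  • As a concrete illustration of this round trip, the sketch below converts to YUV with OpenCV, measures skin brightness on the Y channel, brightens it, and converts back. The file name, patch coordinates, and gain are placeholders.

```python
import cv2
import numpy as np

# Brightness lives in the Y channel, so whitening can adjust Y
# while leaving the chroma channels (U, V) untouched.
img_bgr = cv2.imread("face.jpg")                    # hypothetical input
y, u, v = cv2.split(cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YUV))

# Mean luma of a patch between the eyes and the mouth, used as the
# face skin brightness (coordinates are placeholders).
x0, y0, x1, y1 = 120, 140, 180, 200
skin_luma = float(np.mean(y[y0:y1, x0:x1]))

# Brighten Y with an illustrative gain, then convert back to BGR.
gain = 1.1
y = np.clip(y.astype(np.float32) * gain, 0, 255).astype(np.uint8)
out = cv2.cvtColor(cv2.merge([y, u, v]), cv2.COLOR_YUV2BGR)
```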
  • In some embodiments, facial skin color detection is performed on the image to be processed to obtain a face skin color template for each face region, and beautification is then performed on the corresponding face region based on both the face attribute information and the face skin color template.
  • The face skin color template represents, for each pixel, the probability that it is facial skin.
  • Based on the face attribute information and the template, the corresponding face region can be smoothed, whitened, or both.
  • In different scenarios the face skin color template is determined in different ways, so as to meet the requirements of each scenario.
  • When in video mode, face recognition is performed on the image to be processed and face key points are aligned for each face region, determining their positions on the image; face contour key points are selected from the face key points; and the contour key points are triangulated and rendered to obtain the face skin color template.
  • Determining the face skin color template this way is fast and can meet the real-time processing speed required in video mode.
  • Triangulating and rendering the face contour key points yields a first intermediate template, which is then filtered to obtain the face skin color template.
  • Filtering reduces the jagged edges of the template and smooths its boundary, and improves how closely the resulting template fits the actual face.
  • Mean filtering may be used on the first intermediate template, or other methods such as local mean square error filtering or guided filtering; no limitation is imposed here.
  • The filter radius may be determined as follows: the interpupillary distance and the relative distance between the center point of the interpupillary distance and the center of the mouth are calculated from the positions of the face key points, and the filter radius is determined from these two distances. The first intermediate template is then filtered with this radius to obtain the face skin color template.
  • The interpupillary distance is the distance between the pupil centers of the two eyes; the center point of the interpupillary distance is the point midway between the two pupil centers.
  • Specifically, face recognition is performed on the image to be processed and face key points are aligned for each face region, giving the positions of the key points on the image. Each face key point carries semantic information representing a particular position on the face; for example, key points pt85 to pt104 represent the mouth.
  • Key points pt1 to pt33 (1 to 33 in FIG. 2) and pt105 to pt123 (105 to 123 in FIG. 2) are taken as the contour points of the face.
  • These face contour points are triangulated and then rendered to obtain mask_nTmp1, and the following formula (1) is used to perform adaptive mean filtering on this first intermediate template:

    mask_n = Blur(mask_nTmp1, radio), radio = MAX(Dist1, Dist2) / 10; (1)

  • where mask_n is the face skin color template, mask_nTmp1 is the first intermediate template, MAX(Dist1, Dist2) is the maximum of Dist1 and Dist2, Dist1 is the interpupillary distance, Dist2 is the relative distance between the center point of the interpupillary distance and the center of the mouth, Blur() denotes mean filtering with radius radio, and radio is the filter radius.
  • It should be noted that the denominator 10 is only an example; in practical applications it can be other values, such as 8 or 9, and the filter radius can also be configured empirically or from Dist1 and Dist2.
  • Dist1 can be calculated from the positions of the pupil-center key points pt75 and pt84 of the two eyes, and Dist2 from the positions of key point pt52, which characterizes the center of the interpupillary distance, and key point pt99 at the center of the mouth.
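  • As an illustration of this triangulate-render-filter pipeline, the sketch below rasterizes the contour with a single filled polygon (standing in for the triangulation) and applies the adaptively sized mean filter of formula (1); the function name and point arguments are assumptions.

```python
import cv2
import numpy as np

def skin_template(contour_pts, pupil_l, pupil_r, mouth_c, shape):
    """Build a face skin color template from contour key points.

    contour_pts: (N, 2) array of face-contour key points.
    pupil_l, pupil_r, mouth_c: (x, y) pupil centers and mouth center,
    stand-ins for pt75, pt84 and pt99 in the patent's numbering.
    """
    mask = np.zeros(shape[:2], dtype=np.float32)
    # One filled polygon approximates the triangulate-and-render step.
    cv2.fillPoly(mask, [np.asarray(contour_pts, np.int32)], 1.0)

    dist1 = float(np.hypot(pupil_l[0] - pupil_r[0],
                           pupil_l[1] - pupil_r[1]))      # interpupillary
    cx = (pupil_l[0] + pupil_r[0]) / 2.0
    cy = (pupil_l[1] + pupil_r[1]) / 2.0
    dist2 = float(np.hypot(cx - mouth_c[0], cy - mouth_c[1]))
    radio = max(int(max(dist1, dist2) / 10), 1)           # formula (1) radius

    return cv2.blur(mask, (radio, radio))                 # adaptive mean blur
```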
  • When in photographing mode, facial skin color segmentation is performed on the image to be processed, and each face skin color template is obtained from the segmentation result.
  • Segmentation yields a second intermediate template, which is filtered to obtain the face skin color template.
  • Filtering reduces the jagged edges of the template and smooths its boundary, and improves how closely the resulting template fits the actual face.
  • In specific implementations, the second intermediate template may be filtered using adaptive fast guided filtering, whose filter parameters include the threshold delimiting smooth regions from edge regions, the filter radius, and the downsampling ratio.
  • The filter radius is related to the interpupillary distance and to the relative distance between the center point of the interpupillary distance and the center of the mouth, and the downsampling ratio is related to the sizes of the image to be processed and of the face skin color template.
  • The following formula (3) can be used to perform adaptive fast guided filtering on the second intermediate template:

    mask_n = fastGuideFilter(mask_nTmp2, imgY, radio, eps, scale), radio = MAX(Dist1, Dist2) / 20; (3)

  • where fastGuideFilter() is the adaptive fast guided filter, mask_nTmp2 is the second intermediate template, imgY is the image to be processed, radio is the filter radius, eps is the threshold delimiting smooth regions from edge regions, and scale is the downsampling ratio.
  • It should be noted that the denominator 20 is only an example; in practical applications it can be other values, such as 18 or 19, and the filter radius can also be configured empirically or from Dist1 and Dist2.
  • "Adaptive" in the adaptive mean filtering and adaptive fast guided filtering above means that the filter radius is determined from the interpupillary distance Dist1 and the relative distance Dist2 between the interpupillary center point and the mouth center, so that an adapted radius is used for each face region. A sketch of the fast guided filter follows.
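  • For reference, below is a minimal fast guided filter in the spirit of formula (3), following the standard fast variant of He et al.'s guided filter (statistics computed at reduced resolution, linear coefficients upsampled). The patent's exact filter may differ; parameter names mirror formula (3).

```python
import cv2
import numpy as np

def fast_guide_filter(mask, guide, radio, eps, scale):
    """Fast guided filtering of a skin mask, guided by the luma image.

    mask, guide: float32 arrays of equal size.
    radio: filter radius; eps: smooth/edge threshold; scale: downsampling.
    """
    h, w = guide.shape[:2]
    down = lambda im: cv2.resize(im, (w // scale, h // scale),
                                 interpolation=cv2.INTER_AREA)
    I, p = down(guide.astype(np.float32)), down(mask.astype(np.float32))
    r = max(radio // scale, 1)
    box = lambda im: cv2.blur(im, (2 * r + 1, 2 * r + 1))

    mean_I, mean_p = box(I), box(p)
    cov_Ip = box(I * p) - mean_I * mean_p
    var_I = box(I * I) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)   # small variance -> smooth region -> a ~ 0
    b = mean_p - a * mean_I
    up = lambda im: cv2.resize(im, (w, h), interpolation=cv2.INTER_LINEAR)
    return up(box(a)) * guide.astype(np.float32) + up(box(b))
```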
  • In specific implementations, the target skin-smoothing level of each face region may be determined from its face attribute information, and the corresponding face region is then smoothed based on the target level and the face skin color template to obtain the beautified image.
  • Specifically, the corresponding face region is smoothed at its target level to obtain a first image; a first fusion coefficient is obtained from the face skin color template, the target skin-smoothing level, and the maximum skin-smoothing level; and the image to be processed is fused with the first image using the first fusion coefficient to obtain the beautified image.
  • The image to be processed and the first image may be fused based on the following formula (5):

    imgDst_smooth = imgY * (1 - k) + k * img_smooth; k = mask_n * smooth_level / smooth_max, k ∈ [0, 1]; (5)

  • where imgDst_smooth is the beautified image, imgY is the image to be processed, k is the first fusion coefficient, img_smooth is the first image, mask_n is the face skin color template, smooth_level is the target skin-smoothing level, and smooth_max is the maximum skin-smoothing level.
  • Fusing in this way lets the resulting beautified image take both images into account: the non-face background of the original image is also considered, so its texture information is effectively preserved.
  • If a pixel lies in the background, its k is 0, and by formula (5) the fused pixel is taken from the original image to be processed. If a pixel lies in the skin area of a face region and its k is 1, by formula (5) the fused pixel is taken from the first image.
  • Since the target level smooth_level is usually less than the maximum level smooth_max, and the probability in mask_n that a pixel of the face region is skin lies between 0 and 1, the computed k also lies between 0 and 1; the beautified pixel is then the weighted sum of the corresponding pixels of the image to be processed and of the first image. A sketch of this fusion follows.
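  • A direct transcription of formula (5) might look like the following; inputs are float arrays, and the function name is illustrative.

```python
import numpy as np

def fuse_smoothed(img_y, img_smooth, mask_n, smooth_level, smooth_max):
    """Blend the smoothed image back into the original per formula (5).

    mask_n holds per-pixel skin probabilities in [0, 1]; background
    pixels (mask 0) keep the original texture, while skin pixels blend
    toward the smoothed image in proportion to the level ratio.
    """
    k = np.clip(mask_n * smooth_level / smooth_max, 0.0, 1.0)
    return img_y * (1.0 - k) + k * img_smooth
```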
  • In specific implementations, the skin color brightness of each face region can be calculated, and the whitening intensity of each region determined from its skin color brightness and ethnic skin color information, where the ethnic skin color information characterizes the subject's ethnicity and is included in the face attribute information.
  • The facial skin area within each face region is determined from that region's face skin color template, and whitening is applied to the facial skin area of each region at the determined whitening intensity.
  • This keeps the brightness in an appropriate range and avoids the overexposure and unnatural look caused by excessive brightness in non-skin areas.
  • The whitening intensity can be characterized by a brightness adjustment coefficient: whitening each face region at its determined intensity means adjusting the pixel brightness of its facial skin area according to the corresponding coefficient, realizing whitening by raising the brightness of the facial skin area.
  • The whitened beautified image can be obtained using the following formula (7):

    imgDst_bright = imgY * (1 + mask_n * bright_level / bright_max); (7)

  • where imgDst_bright is the beautified image after whitening, imgY is the image to be processed, mask_n is the face skin color template, bright_level is the whitening intensity of the face region, and bright_max is the maximum whitening intensity.
  • In this way each facial skin area of the image is whitened in a targeted manner, effectively avoiding the overexposure caused by brightening the background and other non-skin areas.
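  • Transcribed directly, formula (7) is a one-liner; the clip to the valid pixel range is an added safeguard, not part of the formula.

```python
import numpy as np

def whiten(img_y, mask_n, bright_level, bright_max):
    """Raise luma only where the skin mask is high, per formula (7);
    background pixels (mask 0) are left unchanged."""
    out = img_y * (1.0 + mask_n * bright_level / bright_max)
    return np.clip(out, 0, 255)  # keep the result a valid 8-bit luma
```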
  • When the beautification processing includes both skin smoothing and whitening, the two can be ordered in several ways.
  • In some embodiments, the image to be processed is first smoothed, and the smoothed image is then whitened to obtain the beautified image.
  • In other embodiments, the image to be processed is first whitened, and the whitened image is then smoothed to obtain the beautified image.
  • In still other embodiments, smoothing and whitening are applied separately to the image to be processed, and the smoothed and whitened results are fused to obtain the beautified image.
  • eye processing may also be performed on the face region in the image to be processed.
  • the eye treatment may include eye enlargement or eye reduction.
  • Specifically, eye key points are aligned for the face regions in the image to be processed; the eye size coefficient of each face region is calculated from the positions of the aligned eye key points on the image; the eye type of each region is determined from its eye size coefficient; an eye magnification factor matching each region's eye type is determined; and the corresponding eye region is enlarged with that factor.
  • Eye types can be classified as small, standard, large, and so on, with different types corresponding to different eye size coefficients. Corresponding coefficient ranges may be configured for different eye types, and different coefficient ranges correspond to different magnification factor ranges. The eye size coefficient is inversely related to the eye magnification factor.
  • Eye size coefficients for each face region may be calculated from the height and width of each eye. Specifically, each face region usually contains a left eye and a right eye; a size coefficient is calculated for each, and the region's eye size coefficient is determined from the two. The maximum of the left and right coefficients may be taken, or their average; alternatively, a weight can be set for each eye and the coefficient computed from the weights and the two per-eye coefficients.
  • The height of an eye is the distance between the centers of the upper and lower eyelids, and its width is the distance between the two eye corners; the size coefficient of each eye is the ratio of its height to its width. The eye's size can thus be judged from this ratio, and the eye enlarged according to the magnification factor determined from it.
  • Some subjects have small, round eyes, for which the height and width are close, so the calculated size coefficient is near 1 and the magnification factor derived from it is small, which may give an unsatisfactory enlargement. For this reason, when the eye size coefficient is greater than a preset first threshold, the ratio of the eye width to the face width is calculated; this ratio is inversely correlated with the magnification factor: the smaller the ratio, the larger the factor.
  • When the eye size coefficient exceeds the first threshold and the width ratio is less than a second threshold, the eyes are judged small and a relatively large magnification factor is used; when the ratio is greater than the second threshold, the eyes are judged large and a relatively small factor is used.
  • For example, the key points of the left eye include the upper eyelid center pt69, the lower eyelid center pt73, and the eye corner key points pt67 and pt71; the distance between pt69 and pt73 is the left eye's height, and the distance between pt67 and pt71 is its width. The size coefficient of the left eye is calculated with formula (8). Similarly, the key points of the right eye include the upper eyelid center pt78, the lower eyelid center pt82, and the corner key points pt76 and pt80; the distance between pt78 and pt82 is the right eye's height, the distance between pt76 and pt80 is its width, and the size coefficient of the right eye is calculated with formula (9):

    el_coeff = Dist(pt69, pt73) / Dist(pt67, pt71); (8)
    er_coeff = Dist(pt78, pt82) / Dist(pt76, pt80); (9)

  • where el_coeff is the size coefficient of the left eye, er_coeff is the size coefficient of the right eye, Dist() returns the distance between two points in the image, and ptn denotes the n-th face key point.
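  • The sketch below computes these coefficients and combines them; pts is assumed to be a 1-based mapping from key-point index to (x, y), averaging is just one of the combination options the text lists, and the threshold value is an assumption.

```python
import math

def eye_size_coeff(pts):
    """Formulas (8)/(9): per-eye height-to-width ratios from the
    key-point indices named in the text (pt67..pt84, 1-based)."""
    el = math.dist(pts[69], pts[73]) / math.dist(pts[67], pts[71])
    er = math.dist(pts[78], pts[82]) / math.dist(pts[76], pts[80])
    return (el + er) / 2.0  # max() or a weighted mix would also do

def needs_width_check(coeff, first_threshold=0.8):
    """Small round eyes have height ~ width (coeff near 1), so fall
    back to the eye-width / face-width ratio; threshold is assumed."""
    return coeff > first_threshold
```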
  • In specific implementations, face shape adjustment may also be performed on the faces in the face regions; for example, the face regions of the image to be processed can be slimmed.
  • Specifically, the face shape of each face region is calculated, a face-slimming coefficient adapted to each shape is determined, and each corresponding face region is slimmed with the coefficient adapted to its shape. It should be noted that, according to actual needs (such as some special-effects scenarios), the face can instead be widened by configuring a corresponding coefficient; that scheme is not expanded on here.
  • Corresponding face-slimming coefficient ranges can be configured for different face shapes; once a face shape is determined, a coefficient is selected within the range corresponding to that shape.
  • The face shape of each face region can be calculated as follows: chin key points are aligned for the face regions in the image to be processed, the chin angle is calculated from the key point at the bottom of the chin and the key points on the two sides of the chin, and the face shape is determined from the calculated angle.
  • The chin angle is the angle between the two lines connecting the bottom chin key point to the side key points. Specifically, the distances from the bottom key point to the two side key points are recorded as the first and second distances, and the distance between the two side key points as the third distance; the chin angle is calculated from these three distances.
  • For example, with the bottom chin key point denoted pt17 and the side key points pt13 and pt21, the following formula (10) can be used to calculate the chin angle:

    θ = arccos((a² + b² - c²) / (2ab)); (10)

  • where θ is the chin angle, a and b are the distances from the bottom chin key point to the two side key points, c is the distance between the side key points, and arccos() is the arc cosine in the inverse trigonometric functions.
  • Face shapes can be classified as melon-seed (oval), standard, round, and so on; different shapes have different chin angle ranges and correspondingly different slimming coefficients.
  • The slimming coefficient is positively correlated with the degree of slimming: the smaller the coefficient, the less the face is slimmed. Usually the coefficient for a melon-seed face is smaller than that for a standard face, which in turn is smaller than that for a round face.
  • This effectively solves the prior-art problems that already-slim, melon-seed faces look uncoordinated after slimming while round faces are slimmed too little for the effect to be noticeable. A sketch of the chin-angle computation follows.
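  • The sketch below evaluates formula (10) and maps the angle to a face shape; the angle breakpoints are illustrative assumptions, not values from the patent.

```python
import math

def chin_angle(pt_bottom, pt_left, pt_right):
    """Formula (10): the angle at the bottom chin key point (pt17)
    between its connections to the side key points (pt13, pt21)."""
    a = math.dist(pt_bottom, pt_left)
    b = math.dist(pt_bottom, pt_right)
    c = math.dist(pt_left, pt_right)
    return math.degrees(math.acos((a * a + b * b - c * c) / (2 * a * b)))

def face_shape(theta_deg):
    """Assumed angle ranges: a pointed chin gives a smaller angle."""
    if theta_deg < 55:
        return "melon-seed"  # already slim: smallest slimming coefficient
    if theta_deg < 70:
        return "standard"
    return "round"           # largest slimming coefficient
```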
  • An embodiment of the present invention further provides an image processing apparatus.
  • Referring to FIG. 3, a schematic structural diagram of an image processing apparatus in an embodiment of the present invention is given.
  • the image processing apparatus 30 may include:
  • the face recognition unit 31, configured to perform face recognition on the image to be processed to obtain at least one face region;
  • the face attribute detection unit 32, configured to perform face attribute detection on each face region to obtain the face attribute information of each face region; and
  • the processing unit 33, configured to perform beautification processing on the corresponding face region based on the face attribute information of each face region, to obtain a beautified image corresponding to the image to be processed.
  • In specific implementations, the image processing apparatus 30 may correspond to a chip with an image processing function in a terminal, to a chip module including such a chip, or to a terminal.
  • Each module/unit included in the devices and products described in the above embodiments may be a software module/unit or a hardware module/unit, or partly software and partly hardware.
  • For a device or product applied to or integrated in a chip, its modules/units may all be implemented by hardware such as circuits, or at least some of them may be implemented by a software program running on a processor integrated inside the chip, with the remaining (if any) modules/units implemented by hardware such as circuits.
  • For a device or product applied to or integrated in a chip module, its modules/units may likewise all be implemented by hardware such as circuits, with different modules/units located in the same component of the chip module (such as a chip or a circuit module) or in different components; or at least some of them may be implemented by a software program running on a processor integrated inside the chip module, with the rest (if any) implemented by hardware such as circuits.
  • For a device or product applied to or integrated in a terminal, its modules/units may all be implemented by hardware such as circuits, with different modules/units located in the same component (such as a chip or a circuit module) or in different components within the terminal; or at least some of them may be implemented by a software program running on a processor integrated inside the terminal, with the rest (if any) implemented by hardware such as circuits.
  • An embodiment of the present invention further provides a storage medium, which is a non-volatile or non-transitory storage medium storing a computer program; when the computer program is run by a processor, the steps of the image processing method provided in any of the foregoing embodiments are executed.
  • An embodiment of the present invention further provides a terminal, including a memory and a processor, the memory storing a computer program that can run on the processor; when the processor runs the computer program, the steps of the image processing method provided in any of the foregoing embodiments are executed.

Abstract

An image processing method and apparatus, a storage medium, and a terminal. The image processing method includes: performing face recognition on an image to be processed to obtain at least one face region; performing face attribute detection on each face region to obtain the face attribute information of each face region; and, based on the face attribute information of each face region, performing beautification processing on the corresponding face region to obtain a beautified image corresponding to the image to be processed. The above scheme can improve the image beautification effect.

Description

Image processing method and apparatus, storage medium, and terminal
This application claims priority to Chinese patent application No. 202110111649.0, entitled "Image processing method and apparatus, storage medium, and terminal" and filed with the Chinese Patent Office on January 27, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present invention relate to the field of image processing, and in particular to an image processing method and apparatus, a storage medium, and a terminal.
Background
Face beautification is a basic camera function of portable mobile devices such as smartphones: with one tap, users can obtain high-quality, high-definition portrait pictures without complex post-processing such as manual retouching.
However, the beautified portrait images produced by existing portrait beautification processing are of poor quality.
Summary of the Invention
The technical problem solved by the embodiments of the present invention is that existing beautification processing of portrait images yields poor results.
To solve the above technical problem, an embodiment of the present invention provides an image processing method, including: performing face recognition on an image to be processed to obtain at least one face region; performing face attribute detection on each face region to obtain the face attribute information of each face region; and, based on the face attribute information of each face region, performing beautification processing on the corresponding face region to obtain a beautified image corresponding to the image to be processed.
Optionally, performing beautification processing on the corresponding face region based on the face attribute information of each face region includes at least one of the following: performing skin-smoothing processing on the corresponding face region based on the face attribute information of each face region; and performing whitening processing on the corresponding face region based on the face attribute information of each face region.
Optionally, the face attribute information includes face gender and age, and performing skin-smoothing processing on the corresponding face region based on the face attribute information of each face region includes: determining the skin-smoothing level of each face region based on its face gender and age; and smoothing each corresponding face region at its skin-smoothing level.
Optionally, the face attribute information includes ethnic skin color information, and performing whitening processing on the corresponding face region based on the face attribute information of each face region includes: calculating the skin color brightness of each face region; determining the whitening intensity of each face region according to its skin color brightness and ethnic skin color information, where the ethnic skin color information characterizes the ethnicity of the subject; and whitening each corresponding face region at its whitening intensity.
Optionally, the image processing method further includes: performing facial skin color detection on the image to be processed to obtain a face skin color template for each face region, where the face skin color template characterizes the probability that each pixel is facial skin; and performing beautification processing on the corresponding face region based on the face attribute information and the face skin color template of each face region.
Optionally, performing beautification processing on the corresponding face region based on the face attribute information and the face skin color template of each face region includes at least one of the following: performing skin-smoothing processing on the corresponding face region based on the face attribute information and the face skin color template of each face region; and performing whitening processing on the corresponding face region based on the face attribute information and the face skin color template of each face region.
Optionally, performing skin-smoothing processing on the corresponding face region based on the face attribute information and the face skin color template of each face region includes: determining a target skin-smoothing level of each face region based on its face attribute information; and smoothing the corresponding face region based on the target skin-smoothing level and the face skin color template of each face region to obtain the beautified image.
Optionally, smoothing the corresponding face region based on the target skin-smoothing level and the face skin color template of each face region to obtain the beautified image includes: smoothing the corresponding face region at its target skin-smoothing level to obtain a first image; obtaining a first fusion coefficient based on the face skin color template, the target skin-smoothing level, and the maximum skin-smoothing level; and fusing the image to be processed with the first image using the first fusion coefficient to obtain the beautified image.
Optionally, fusing the image to be processed with the first image using the first fusion coefficient to obtain the beautified image includes fusing them with the following formula: imgDst_smooth = imgY * (1 - k) + k * img_smooth; k = mask_n * smooth_level / smooth_max, k ∈ [0, 1]; where imgDst_smooth is the beautified image, imgY is the image to be processed, k is the first fusion coefficient, img_smooth is the first image, mask_n is the face skin color template, smooth_level is the target skin-smoothing level, and smooth_max is the maximum skin-smoothing level.
Optionally, performing whitening processing on the corresponding face region based on the face attribute information and the face skin color template of each face region includes: calculating the skin color brightness of each face region; determining the whitening intensity of each face region according to its skin color brightness and ethnic skin color information, where the ethnic skin color information characterizes the ethnicity of the subject and the face attribute information includes the ethnic skin color information; determining the facial skin area within each face region according to its face skin color template; and whitening the facial skin area within each face region at the determined whitening intensity.
Optionally, performing facial skin color detection on the image to be processed to obtain a face skin color template for each face region includes: when in video mode, performing face recognition on the image to be processed and aligning face key points for each face region to determine the positions of the face key points on the image to be processed; selecting face contour key points from the face key points; and triangulating and rendering the face contour key points to obtain the face skin color template.
Optionally, the image processing method further includes: obtaining a first intermediate template after triangulating and rendering the face contour key points; and filtering the first intermediate template to obtain the face skin color template.
Optionally, filtering the first intermediate template to obtain the face skin color template includes: calculating, from the positions of the face key points, the interpupillary distance and the relative distance between the center point of the interpupillary distance and the center of the mouth; determining the filter radius from the interpupillary distance and the relative distance; and filtering the first intermediate template based on the filter radius to obtain the face skin color template.
Optionally, performing facial skin color detection on the image to be processed to obtain the face skin color template of each face includes: when in photographing mode, performing facial skin color segmentation on the image to be processed; and obtaining each face skin color template from the facial skin color segmentation result.
Optionally, the image processing method further includes: obtaining a second intermediate template after performing facial skin color segmentation on the image to be processed; and filtering the second intermediate template to obtain the face skin color template.
Optionally, filtering the second intermediate template includes: filtering the second intermediate template with adaptive fast guided filtering, whose filter parameters include the threshold delimiting smooth regions from edge regions, the filter radius, and the downsampling ratio, where the filter radius is related to the interpupillary distance and to the relative distance between the center point of the interpupillary distance and the center of the mouth, and the downsampling ratio is related to the size of the image to be processed and the size of the face skin color template.
可选的,所述图像处理方法还包括:对所述待处理图像中的各人脸区域进行人眼关键点对齐;根据各人脸区域对齐后的人眼关键点在所述待处理图像上的位置,计算各人脸区域的眼睛尺寸系数;根据所述各人脸区域的眼睛尺寸系数,确定各人脸区域的眼睛类型;确定与所述各人脸区域的眼睛类型相适配的眼部放大系数,并采用确定的眼部放大系数对相应的眼睛区域进行放大处理。
可选的,所述根据各人脸区域对齐后的人眼关键点在所述待处理图像上的位置,计算各人脸区域的眼睛尺寸系数,包括:计算每只眼睛的高度及宽度,其中,眼睛的高度为上眼皮中心与下眼皮中心之间的距离,眼睛的宽度为两眼角之间的距离;根据每只眼睛的高度与宽度的比值,得到每只眼睛的尺寸系数;根据每只眼睛的尺寸系数,确定所述眼睛尺寸系数。
可选的,所述图像处理方法还包括:当所述眼睛尺寸系数大于预设第一阈值时,则计算眼睛宽度与人脸宽度的比值;根据所述眼睛宽度与所述人脸宽度的比值,确定所述眼部放大系数。
可选的,所述图像处理方法还包括:计算各人脸区域的脸型;分别确定与各脸型适配的瘦脸系数;采用各脸型适配的人脸系数分别对相应的人脸区域进行瘦脸处理。
可选的,所述计算各人脸区域的脸型,包括:对所述待处理图像中的各人脸区域进行下巴关键点对齐;根据位于下巴底部的关键点以及下巴两侧的关键点,计算下巴夹角,所述下巴夹角为位于下巴底部的关键点分别与位于下巴底部的关键点的连线间的夹角;根据所述下巴夹角确定脸型。
本发明实施例还提供一种图像处理装置,包括:人脸识别单元,用于对待处理图像进行人脸识别,得到至少一个人脸区域;人脸属性检测单元,用于对各人脸区域进行人脸属性检测,得到各人脸区域的人脸属性信息;处理单元,用于基于各人脸区域的人脸属性信息,分别对相应的人脸区域进行美颜处理,得到所述待处理图像对应的美颜图像。
本发明实施例还提供一种存储介质,所述存储介质为非易失性存储介质或非瞬态存储介质,其上存储有计算机程序,所述计算机程序被处理器运行时执行上述任一种图像处理方法的步骤。
本发明实施例还提供一种终端,包括存储器和处理器,所述存储器上存储有能够在所述处理器上运行的计算机程序,所述处理器运行所述计算机程序时执行上述任一种图像处理方法的步骤。
与现有技术相比,本发明实施例的技术方案具有以下有益效果:
对待处理图像进行人脸识别,得到至少一个人脸区域,分别对各人脸区域进行人脸属性检测,得到各人脸区域的人脸属性信息,根据各人脸区域的人脸属性信息分别对相应的人脸区域进行美颜处理,从 而得到待处理图像对应的美颜图像,由于对每个人脸区域美颜处理时依据的是各人脸区域对应的人脸属性信息,从而对每个人脸区域进行美颜处理时可以考虑对应的人脸的特征,故可以提高得到的美颜图像的美颜效果。
Brief Description of the Drawings
Figure 1 is a flowchart of an image processing method in an embodiment of the present invention;
Figure 2 is a schematic diagram of face keypoints in an embodiment of the present invention;
Figure 3 is a schematic structural diagram of an image processing device in an embodiment of the present invention.
Detailed Description
As noted in the Background, face images obtained by existing beautification methods are prone to over- or under-beautification, resulting in poor beautification quality that falls short of user expectations and needs.
To solve the above problem, in embodiments of the present invention, face recognition is performed on the image to be processed to obtain at least one face region; face attribute detection is performed on each face region to obtain its face attribute information; and beautification processing is performed on each corresponding face region according to its face attribute information, yielding a beautified image corresponding to the image to be processed. Because the beautification of each face region is based on that region's face attribute information, the characteristics of the corresponding face can be taken into account, which improves the beautification effect of the resulting image.
To make the above objects, features, and beneficial effects of the embodiments of the present invention more evident and comprehensible, specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.
An embodiment of the present invention provides an image processing method, which may be executed by a terminal, by an image processing chip in a terminal, or by another device or module with image processing capability.
Referring to Figure 1, a flowchart of an image processing method in an embodiment of the present invention is shown; the method may include the following steps:
Step S11: perform face recognition on the image to be processed to obtain at least one face region.
In specific implementations, face recognition may be performed on the image to be processed in a variety of ways, for example using artificial intelligence (AI) recognition, or using conventional face recognition methods. It will be appreciated that other approaches may also be used; no limitation is imposed here.
By performing face recognition on the image to be processed, one or more face regions can be obtained from the recognition result. Each face region corresponds to the face of one subject to be beautified.
Step S12: perform face attribute detection on each face region to obtain face attribute information of each face region.
In specific implementations, the face attribute information may include at least one of: face gender, age, and ethnic skin color information. The face attribute information characterizes the features of each subject. The ethnic skin color information characterizes the subject's ethnicity; ethnicities are commonly grouped, for example, into Black, Indian, Asian, and Caucasian. It will be appreciated that other classifications exist and can be configured as needed; no limitation is imposed here.
In some embodiments, face attribute detection may be performed on each recognized face region based on deep-learning face attribute analysis to obtain the face attribute information.
In other embodiments, face attribute detection may also be performed on each face region based on AI recognition to obtain the face attribute information of each face region.
It will be appreciated that other face attribute analysis methods may also be used; they are not enumerated here.
Step S13: based on the face attribute information of each face region, perform beautification processing on each corresponding face region to obtain a beautified image corresponding to the image to be processed.
In specific implementations, after the attribute information of each face region is obtained, beautification processing may be performed on each corresponding face region based on that information, yielding the beautified image corresponding to the image to be processed.
As can be seen from the above, face recognition is performed on the image to be processed to obtain at least one face region; face attribute detection is performed on each face region to obtain its face attribute information; and beautification is applied to each corresponding face region according to its attribute information to obtain the beautified image. Since the beautification of each face region is driven by that region's own attribute information, the characteristics of the corresponding face are taken into account, improving the beautification effect of the resulting image.
Beautification processing in embodiments of the present invention refers to beautifying or retouching an image, including but not limited to skin smoothing and skin whitening of face regions in the image to be processed.
Skin smoothing mainly refers to noise removal in the face region; it removes spots, blemishes, and color impurities from the subject's skin to improve the delicacy of the skin rendered after beautification. In specific embodiments, skin smoothing of the face regions in the image to be processed is achieved mainly by filtering the image.
To improve the smoothing effect and make the smoothed face region look more natural, an edge-preserving filtering algorithm may be used on the face region, such as local mean-variance filtering or guided filtering.
Whitening refers to adjusting the brightness of the pixels in the face region; whitening raises the subject's skin brightness.
In specific implementations, skin smoothing may be applied to each corresponding face region based on its face attribute information; whitening may likewise be applied based on the face attribute information. Depending on actual needs, smoothing only, whitening only, or both may be applied to a face region.
In specific implementations, different smoothing levels produce different smoothing results. It has been found that, to obtain a smoothing result suited to the subject, the smoothing level can be determined from gender and age. Therefore, in embodiments of the present invention, the face attribute information may include face gender and age; the smoothing level of each face region is determined from its face gender and age, and smoothing is applied to each corresponding face region using that level. For example, if the smoothing level of face region A is level one and that of face region B is level three, face region A is smoothed at level one and face region B at level three.
In some embodiments, the smoothing level may be tied to the filter parameters used during filtering; the smoothing level can be adjusted by tuning the filter parameters.
For example, a higher smoothing level may be used for female or older subjects, and a weaker level for male or younger subjects. The higher the smoothing level, the stronger the smoothing and the finer the subject's skin appears after processing.
In some embodiments, a smoothing level range may be configured for each gender, with the range for male faces differing from that for female faces; for example, the smoothing level for a female face may be higher than that for a male face.
In some embodiments, a smoothing level range may be configured for each age group, or a mapping from age to smoothing level may be configured, so that a suitable level can be determined from the age in the face attribute information. The smoothing level can vary with age; for example, the level when the age indicates an infant is lower than the level for an adult, and the level for an elderly subject is higher than that for a young adult.
It will be appreciated that, when actually determining the smoothing level, one or more of the above correspondences between gender and smoothing level and between age and smoothing level may be considered.
By determining the smoothing level from the face gender and age in the face attribute information, a level matched to the actual situation of each face region is obtained, so that smoothing can be applied to each face region in a targeted way, avoiding the loss of detail caused by an excessively high level and the poor results caused by an excessively low one.
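The embodiment leaves the concrete level values open; the following is a minimal Python sketch of the gender/age lookup described above, in which the five-level scale, age thresholds, and offsets are hypothetical illustrations rather than values from the patent:

    def smoothing_level(gender: str, age: int, max_level: int = 5) -> int:
        """Pick a skin-smoothing level from face attributes.

        The gender/age-to-level mapping is a hypothetical example; the
        embodiment only requires that levels be configurable per gender
        and age group.
        """
        base = 3 if gender == "female" else 2   # female faces get a higher range
        if age <= 3:        # infants: preserve skin texture with a low level
            base -= 2
        elif age >= 60:     # elderly subjects: allow a stronger level
            base += 1
        return max(1, min(base, max_level))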
In specific implementations, the face attribute information may include ethnic skin color information. Whitening each corresponding face region based on its face attribute information may specifically include the following steps: calculating the skin brightness of each face region; determining a whitening strength for each face region according to its skin brightness and the ethnic skin color information, where the ethnic skin color information characterizes the subject's ethnicity; and whitening each corresponding face region using its whitening strength. The higher the whitening strength, the brighter the skin after whitening and the stronger the whitening effect.
In some embodiments, the whitening strength may be represented by a brightness adjustment coefficient through which the skin brightness is adjusted. Different whitening strengths correspond to different coefficients: the greater the strength, the larger the coefficient. If the skin brightness after adjustment by the coefficient is greater than before, whitening has been achieved.
It should be noted that in some scenarios, for an overexposed or overly bright image to be processed, the whitening step can also lower the image brightness, e.g. reduce the skin brightness of the face region, to correct the overexposure and excessive brightness.
In general, skin brightness is inversely correlated with whitening strength: the lower the skin brightness, the higher the whitening strength; correspondingly, the higher the skin brightness, the lower the whitening strength.
For example, a strong whitening strength is used for Black subjects to obtain a satisfactory whitening effect; a moderate strength for Indian and Asian subjects; and a weak strength, or none at all, for Caucasian subjects, to avoid overexposing an already bright face region.
In specific implementations, a whitening strength range may be configured for each ethnicity. Once the ethnicity of a face region is determined, a strength matched to the skin brightness can be selected within that ethnicity's range.
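As an illustration of selecting a strength within a per-ethnicity range, here is a minimal Python sketch; the numeric ranges and the linear brightness-to-strength mapping are assumptions for illustration only:

    # Hypothetical per-ethnicity whitening-strength ranges (min, max);
    # the embodiment only requires such ranges to be configurable.
    WHITEN_RANGE = {
        "black": (0.6, 1.0),
        "indian": (0.3, 0.7),
        "asian": (0.3, 0.7),
        "caucasian": (0.0, 0.2),
    }

    def whitening_strength(ethnicity: str, skin_brightness: float) -> float:
        """Pick a strength inside the ethnicity's range, inversely
        related to the measured skin brightness (0..255)."""
        lo, hi = WHITEN_RANGE.get(ethnicity, (0.2, 0.6))
        darkness = 1.0 - min(max(skin_brightness / 255.0, 0.0), 1.0)
        return lo + (hi - lo) * darkness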
In specific implementations, the skin brightness of each face may be calculated as follows. Face recognition and face keypoint alignment are performed on the image to be processed; referring to Figure 2, a schematic diagram of face keypoints in an embodiment of the present invention, the relative positions of the aligned face keypoints in each face region are obtained. From the keypoint positions of each face region, the skin brightness of the face is then computed from the brightness of the pixels in the face region. The face keypoints shown in Figure 2 are pt_1 to pt_123 (i.e., 1 to 123 in the figure), 123 keypoints in total. In practice, the number of face keypoints is not limited to this and may take other values, which are not detailed here.
In some embodiments, for a given face region, a patch of skin between the eyes and the mouth is selected, and the skin brightness of the face region is derived from the brightness of the pixels in that patch. For example, the average brightness of all pixels in the selected patch may be taken as the skin brightness; or each pixel may be assigned a weight, and the skin brightness computed from each pixel's brightness and its weight. It will be appreciated that other ways of computing the skin brightness are possible and are not enumerated here.
Specifically, as shown in Figure 2, for the n-th face region, the selected skin patch is the region 200 marked by the rectangular box in the figure, defined as Rect_n = (X_s, Y_s, W_s, H_s), with top-left corner (X_s, Y_s), width W_s, and height H_s. The selected patch must lie within the face skin area to ensure the accuracy of the computed skin brightness.
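A minimal Python sketch of this brightness measurement, assuming img_y is the luma (Y) plane as a NumPy array and taking the unweighted mean over the patch:

    import numpy as np

    def skin_brightness(img_y: np.ndarray, rect: tuple) -> float:
        """Mean luminance of the skin patch Rect_n = (Xs, Ys, Ws, Hs).
        A weighted variant would multiply by a per-pixel weight map
        before averaging."""
        xs, ys, ws, hs = rect
        patch = img_y[ys:ys + hs, xs:xs + ws]
        return float(patch.mean())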
To improve image processing efficiency and reduce algorithmic complexity while preserving output quality, image processing in embodiments of the present invention may be performed in the YUV color space, where "Y" denotes luminance (Luma), i.e., the grayscale value, and "U" and "V" denote chrominance (Chroma), which describe the color and saturation of the image and specify the color of a pixel. For example, beautification may be performed on the Y-channel image.
In other embodiments, image processing may also be performed in the RGB color space. In that case, when whitening is involved, the RGB color space is converted to YUV, the skin brightness is obtained from the Y-channel image and whitening is performed on it (raising the skin brightness), and after whitening the YUV color space is converted back to RGB. Here RGB refers to the red (R), green (G), and blue (B) channels.
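A round-trip sketch of this RGB-path handling using OpenCV's standard color conversions; the single multiplicative gain stands in for the strength-dependent adjustment described above and is an assumption:

    import cv2
    import numpy as np

    def whiten_in_yuv(img_bgr: np.ndarray, gain: float) -> np.ndarray:
        """Convert BGR -> YUV, scale the Y (luma) channel by `gain`
        (>1 brightens), then convert back to BGR."""
        yuv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YUV).astype(np.float32)
        yuv[..., 0] = np.clip(yuv[..., 0] * gain, 0, 255)
        return cv2.cvtColor(yuv.astype(np.uint8), cv2.COLOR_YUV2BGR)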
In the prior art, beautification of the image to be processed is prone to over-processing. For example, excessive skin smoothing destroys the texture details of non-skin background regions, degrading the overall result; likewise, whitening can substantially change the brightness of the image background, even causing overexposure, which also harms the overall result.
To further improve the beautification effect and make the beautified image look more natural, in embodiments of the present invention face skin color detection is performed on the image to be processed to obtain a face skin color mask for each face region, and beautification is performed on each corresponding face region based on its face attribute information and face skin color mask. The face skin color mask characterizes the probability that each pixel belongs to face skin. With the mask, the position of the face can be located precisely during beautification, each face region can be processed in a targeted way, and the impact of beautification on the background and other non-face areas is reduced, effectively avoiding the loss of detail information in those areas.
In specific implementations, based on the face attribute information and the face skin color mask of each face region, skin smoothing may be applied to the corresponding face region, whitening may be applied, or both smoothing and whitening may be applied.
In specific implementations, the face skin color mask is determined differently for different scenarios, to suit their respective requirements.
In specific implementations, in video mode, face recognition is performed on the image to be processed, face keypoints are aligned for each face region, and the positions of the face keypoints on the image to be processed are determined; face contour keypoints are selected from the face keypoints; and the contour keypoints are triangulated and rendered to obtain the face skin color mask. Determining the mask this way is fast and meets the real-time processing requirements of video mode.
Further, after triangulating and rendering the face contour keypoints, a first intermediate mask is obtained, and the face skin color mask is obtained by filtering the first intermediate mask. Filtering reduces the jagged edges of the mask and improves boundary smoothness, which in turn improves how closely the resulting mask fits the actual face.
In some embodiments, mean filtering may be used on the first intermediate mask, or other filters such as local mean-variance filtering or guided filtering; no limitation is imposed here.
In some embodiments, the filter radius may be determined as follows: from the positions of the face keypoints, compute the interpupillary distance and the relative distance between the midpoint of the interpupillary line and the center of the mouth; determine the filter radius from these two distances; and filter the first intermediate mask with the determined radius to obtain the face skin color mask.
The interpupillary distance is the distance between the pupil centers of the two eyes; the midpoint of the interpupillary line is the point midway between the two pupil centers.
For example, with reference to Figure 2, face recognition is performed on the image to be processed and face keypoints are aligned for each face region, yielding the positions of each region's aligned keypoints on the image to be processed.
Each face keypoint carries semantic information, and different semantics mark different parts of the face. For example, keypoints pt_85 to pt_104 (i.e., 85 to 104 in Figure 2) represent the mouth. Keypoints pt_1 to pt_33 (1 to 33 in Figure 2) and pt_105 to pt_123 (105 to 123 in Figure 2) are taken as the contour points of the face. These contour points are triangulated and rendered to obtain mask_nTmp1, and the first intermediate mask is then adaptively mean-filtered using the following formulas (1) and (2):
mask_n = Blur(mask_nTmp1, radio);      (1)
radio = MAX(Dist1, Dist2)/10;      (2)
where mask_n is the face skin color mask, mask_nTmp1 is the first intermediate mask, MAX(Dist1, Dist2) takes the maximum of Dist1 and Dist2, Dist1 is the interpupillary distance, Dist2 is the relative distance between the midpoint of the interpupillary line and the center of the mouth, Blur() denotes mean filtering with radius radio, and radio is the filter radius.
It should be noted that the denominator used for the filter radius radio in formula (2) is 10; in practice other values such as 8 or 9 are possible, and the radius may be configured empirically or from Dist1 and Dist2.
Dist1 can be computed from the positions of the pupil-center keypoints pt_75 and pt_84 of the two eyes, and Dist2 from the positions of keypoint pt_52, which marks the midpoint of the interpupillary line, and keypoint pt_99 at the center of the mouth.
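A Python sketch of formulas (1)-(2), assuming a (123, 2) keypoint array indexed so that keypoints[n-1] is pt_n:

    import cv2
    import numpy as np

    def refine_skin_mask(mask_tmp1: np.ndarray, keypoints: np.ndarray) -> np.ndarray:
        """Adaptive mean filtering of the rendered contour mask."""
        pt = lambda n: keypoints[n - 1]
        dist1 = np.linalg.norm(pt(75) - pt(84))   # interpupillary distance
        dist2 = np.linalg.norm(pt(52) - pt(99))   # eye midpoint to mouth center
        radio = max(1, int(max(dist1, dist2) / 10))
        return cv2.blur(mask_tmp1, (radio, radio))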
In specific implementations, in photo mode, face skin color segmentation is performed on the image to be processed, and each face skin color mask is obtained from the segmentation result.
Further, after segmenting the face skin color of the image to be processed, a second intermediate mask is obtained, and the face skin color mask is obtained by filtering the second intermediate mask. Filtering reduces the jagged edges of the mask and improves boundary smoothness, which improves how closely the resulting mask fits the actual face.
Further, adaptive fast guided filtering may be used on the second intermediate mask; its filter parameters include a threshold delimiting smooth and edge regions, a filter radius, and a downsampling ratio.
The filter radius is related to the interpupillary distance and to the relative distance between the midpoint of the interpupillary line and the center of the mouth.
The downsampling ratio is related to the size of the image to be processed and the size of the face skin color mask. With a configured downsampling ratio, the second intermediate mask is downsampled during filtering, which speeds up the computation and improves image processing efficiency.
For example, with reference to Figure 2, the second intermediate mask may be adaptively fast-guided-filtered using the following formulas (3) and (4):
mask_n = fastGuideFilter(mask_nTmp2, imgY, radio, eps, scale);   (3)
radio = MAX(Dist1, Dist2)/20;   (4)
where fastGuideFilter() is the adaptive fast guided filter, mask_nTmp2 is the second intermediate mask, imgY is the image to be processed, eps is the threshold delimiting smooth and edge regions, and scale is the downsampling ratio.
It should be noted that the denominator used for the filter radius radio in formula (4) is 20; in practice other values such as 18 or 19 are possible, and the radius may be configured empirically or from Dist1 and Dist2.
The "adaptive" in the adaptive mean filtering and adaptive fast guided filtering mentioned above means that the filter radius is determined from the interpupillary distance Dist1 and the relative distance Dist2 between the midpoint of the interpupillary line and the center of the mouth, so that a matched filter radius is used for each face region.
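A sketch of downsampled guided filtering, assuming opencv-contrib's ximgproc module is available; note this is a simplified stand-in that filters at low resolution and upsamples the output, whereas the fast guided filter proper upsamples the filter's linear coefficients:

    import cv2
    import numpy as np

    def fast_guide_filter(mask_tmp2, img_y, radio, eps=1e-2, scale=4):
        """Approximate fast guided filtering of the skin mask, guided by
        the luma image; any guided-filter implementation may be
        substituted if ximgproc is unavailable."""
        h, w = img_y.shape[:2]
        small = (w // scale, h // scale)
        guide_s = cv2.resize(img_y, small, interpolation=cv2.INTER_LINEAR)
        mask_s = cv2.resize(mask_tmp2, small, interpolation=cv2.INTER_LINEAR)
        out_s = cv2.ximgproc.guidedFilter(guide_s, mask_s,
                                          max(1, radio // scale), eps)
        return cv2.resize(out_s, (w, h), interpolation=cv2.INTER_LINEAR)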
In specific implementations, for the smoothing pipeline, a target smoothing level may be determined for each face region from its face attribute information, and smoothing applied to the corresponding face region based on its target smoothing level and face skin color mask to obtain the beautified image.
Further, each corresponding face region is smoothed using its target smoothing level to obtain a first image; a first fusion coefficient is obtained from the face skin color mask, the target smoothing level, and the maximum smoothing level; and the image to be processed and the first image are fused using the first fusion coefficient to obtain the beautified image.
In some embodiments, the image to be processed and the first image may be fused based on the following formulas (5) and (6):
imgDst_smooth = imgY*(1-k) + k*img_smooth;       (5)
k = mask_n*smooth_level/smooth_max, k ∈ [0,1];      (6)
where imgDst_smooth is the beautified image, imgY is the image to be processed, k is the first fusion coefficient, img_smooth is the first image, mask_n is the face skin color mask, smooth_level is the target smoothing level, and smooth_max is the maximum smoothing level.
When the image to be processed and the first image are fused with the first fusion coefficient derived from the face skin color mask, the resulting beautified image can take into account both the beautified face regions and the original non-face background of the image to be processed, because the mask represents the probability that each pixel is face skin; the texture information of the non-face background is thus effectively preserved.
For example, if a pixel lies in the background region, its k is 0, and by formula (5) the original pixel of the image to be processed is used in the fusion. If a pixel lies in the skin area of a face region and its k is 1, by formula (5) the pixel of the first image is used. It should be noted that since the target smoothing level smooth_level is usually less than the maximum level smooth_max, and the probability in mask_n that a face-region pixel belongs to skin may lie between 0 and 1, the computed k also lies between 0 and 1; in that case the beautified value of the pixel is the weighted sum of that pixel in the image to be processed and in the first image.
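A direct Python transcription of formulas (5)-(6), assuming mask_n is normalized to [0, 1]:

    import numpy as np

    def fuse_smoothed(img_y, img_smooth, mask_n, smooth_level, smooth_max):
        """Per-pixel blend: background pixels (k=0) keep the original,
        confident skin pixels (k near 1) take the smoothed image."""
        k = np.clip(mask_n * smooth_level / smooth_max, 0.0, 1.0)
        out = img_y.astype(np.float32) * (1 - k) \
              + k * img_smooth.astype(np.float32)
        return out.astype(img_y.dtype)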
For the whitening pipeline, the skin brightness of each face region is calculated, and a whitening strength is determined for each face region from its skin brightness and the ethnic skin color information, where the ethnic skin color information characterizes the subject's ethnicity and is included in the face attribute information. The face skin color area within each face region is determined from its face skin color mask, and the skin area of each face region is whitened according to the determined strength. Whitening based on the face skin color mask targets the face skin area specifically and avoids whitening the background and other non-skin areas, keeping the brightness of non-skin areas within a suitable range and preventing the overexposure and unnatural appearance caused by excessive brightness there.
In some embodiments, the whitening strength may be represented by a brightness adjustment coefficient. Whitening the skin area of each face region with the determined strength means adjusting the pixel brightness of the skin area of each face region according to the coefficient corresponding to that region's strength; raising the brightness of the face skin area realizes the whitening of the face region.
In some embodiments, the beautified image after whitening may be obtained with the following formula (7):
imgDst_bright = imgY*(1 + mask_n*bright_level/bright_max);   (7)
where imgDst_bright is the beautified image after whitening, imgY is the image to be processed, mask_n is the face skin color mask, bright_level is the whitening strength of the face region, and bright_max is the maximum whitening strength.
Based on the face skin color mask, whitening can be applied specifically to each face skin area of the image to be processed, effectively avoiding the overexposure caused by excessive brightness in the background and other non-skin areas.
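A direct Python transcription of formula (7), again assuming mask_n is normalized to [0, 1]:

    import numpy as np

    def whiten(img_y, mask_n, bright_level, bright_max):
        """Mask-weighted luma gain: only pixels with a high skin
        probability receive the full brightness boost."""
        out = img_y.astype(np.float32) * (1 + mask_n * bright_level / bright_max)
        return np.clip(out, 0, 255).astype(img_y.dtype)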
In specific implementations, when beautification includes both smoothing and whitening, they may be ordered in several ways.
In some embodiments, the image to be processed is first smoothed, and the smoothed image is then whitened to obtain the beautified image.
If formula (7) is used for whitening, the smoothed image is used as the image to be processed in formula (7).
In other embodiments, the image to be processed is first whitened, and the whitened image is then smoothed to obtain the beautified image.
If formula (5) is used to fuse the image to be processed and the first image into the beautified image, the whitened image is used as the image to be processed in formula (5).
In still other embodiments, smoothing and whitening may be applied to the image to be processed separately, yielding a smoothed image and a whitened image, which are then fused to obtain the beautified image.
In specific implementations, on the basis of any of the above embodiments, eye processing may also be applied to the face regions in the image to be processed; eye processing may include eye enlargement or eye reduction.
Specifically, eye keypoints are aligned for each face region in the image to be processed; an eye size coefficient is calculated for each face region from the positions of the aligned eye keypoints on the image to be processed; the eye type of each face region is determined from its eye size coefficient; and an eye magnification coefficient matched to the eye type of each face region is determined and used to magnify the corresponding eye region.
In some embodiments, eye types may include small, standard, and large eyes, with different eye size coefficients corresponding to different types.
In some non-limiting embodiments, an eye size coefficient range may be configured for each eye type, with different ranges corresponding to different magnification coefficient ranges. The eye size coefficient is usually inversely correlated with the magnification coefficient.
In some embodiments, the eye size coefficient of each face region may be computed from the height and width of each eye. Specifically, each face region usually contains two eyes, left and right; a size coefficient is computed for each, and the eye size coefficient of the face is determined from the left-eye and right-eye coefficients.
In some embodiments, the eye size coefficient may be taken as the maximum of the left-eye and right-eye coefficients, or as their average; alternatively, a weight may be set for each eye and the coefficient computed from the weights and the two per-eye coefficients.
The height of an eye is the distance between the center of the upper eyelid and the center of the lower eyelid; the width of an eye is the distance between the two eye corners.
The size coefficient of each eye can be computed from the ratio of its height to its width.
The size of an eye can usually be determined from the eye size coefficient, i.e., from the height-to-width ratio; magnifying the eye with the magnification coefficient determined from the eye size coefficient then realizes the eye enlargement.
In some embodiments there are also scenarios such as subjects with small, round eyes. For a small, round eye, the height and width are close, so the computed eye size coefficient approaches 1; the magnification coefficient determined from the eye size coefficient is then small, and the enlargement effect may be unsatisfactory.
To address this and improve the enlargement effect, in embodiments of the present invention, when the eye size coefficient is greater than a preset first threshold, the ratio of the eye width to the face width is computed, and the eye magnification coefficient is determined from that ratio.
Specifically, when the eye size coefficient is greater than the preset first threshold, the ratio of eye width to face width is inversely correlated with the magnification coefficient: the smaller the ratio, the larger the magnification.
In some embodiments, when the eye size coefficient exceeds the preset first threshold: if the eye-width-to-face-width ratio is less than a second threshold, the eye can be judged small and a relatively large magnification coefficient used; if the ratio is greater than the second threshold, the eye can be judged relatively large and a relatively small magnification coefficient used.
For example, referring to Figure 2, the keypoints of the left eye include upper-eyelid point pt_69, lower-eyelid point pt_73, and eye-corner keypoints pt_67 and pt_71. From the positions of pt_69 and pt_73, the distance between them is computed; this distance is the height of the left eye. The distance between corner keypoints pt_67 and pt_71 is the width of the left eye. The size coefficient of the left eye is computed with formula (8):
el_coeff = Dist(pt_69, pt_73)/Dist(pt_67, pt_71);     (8)
Correspondingly, the keypoints of the right eye include upper-eyelid point pt_78, lower-eyelid point pt_82, and corner keypoints pt_76 and pt_80. From the positions of pt_78 and pt_82, their distance is computed; this distance is the height of the right eye. The distance between corner keypoints pt_76 and pt_80 is the width of the right eye. The size coefficient of the right eye is computed with formula (9):
er_coeff = Dist(pt_78, pt_82)/Dist(pt_76, pt_80);   (9)
where el_coeff is the size coefficient of the left eye, er_coeff is the size coefficient of the right eye, Dist() returns the distance between two points in the image, and pt_n denotes the n-th face keypoint.
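A Python sketch of formulas (8)-(9), under the same hypothetical (123, 2) keypoint array with keypoints[n-1] == pt_n; taking the maximum of the two per-eye coefficients is one of the combination rules described earlier:

    import numpy as np

    def eye_size_coefficient(keypoints: np.ndarray) -> float:
        """Per-eye height/width ratios, combined by taking the maximum."""
        pt = lambda n: keypoints[n - 1]
        dist = lambda a, b: float(np.linalg.norm(pt(a) - pt(b)))
        el_coeff = dist(69, 73) / dist(67, 71)   # left eye
        er_coeff = dist(78, 82) / dist(76, 80)   # right eye
        return max(el_coeff, er_coeff)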
In specific implementations, on the basis of any of the above embodiments, the face shape in the face region may also be adjusted; adjusting the face shape realizes face slimming of the face regions in the image to be processed.
In some embodiments, the face shape of each face region may be computed, a slimming coefficient matched to each shape determined, and slimming applied to each corresponding face region using its matched coefficient. It should be noted that, depending on actual needs (e.g., in some special-effects scenarios), the face in a face region can also be made fuller by configuring an appropriate coefficient; the fattening scheme is not elaborated here.
Further, a slimming coefficient range may be configured for each face shape; once the shape is determined, a coefficient can be selected within that shape's range.
In specific embodiments, the face shape of each face region may be computed as follows. Chin keypoints are aligned for each face region in the image to be processed; a chin angle is computed from the keypoint at the bottom of the chin and the keypoints on both sides of the chin, where the chin angle is the angle between the lines connecting the keypoint at the bottom of the chin to the keypoints on both sides of the chin; and the face shape is determined from the computed chin angle.
Specifically, the distances from the keypoint at the bottom of the chin to the keypoints on the two sides of the chin are recorded as a first distance and a second distance, and the distance between the two side keypoints as a third distance; the chin angle is computed from the first, second, and third distances.
For ease of understanding, referring to Figure 2, the chin keypoint is denoted pt_17 and the keypoints on the two sides of the chin are pt_13 and pt_21; the chin angle can be computed with formula (10):
θ = arccos((a^2 + b^2 - c^2)/(2*a*b));       (10)
a = Dist(pt_17, pt_13);                   (11)
b = Dist(pt_17, pt_21);                   (12)
c = Dist(pt_13, pt_21);                   (13)
where θ is the chin angle; a and b are the distances from the keypoint at the bottom of the chin to the keypoints on the two sides of the chin; c is the distance between the keypoints on the two sides of the chin; and arccos() denotes the inverse cosine.
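A Python sketch of formulas (10)-(13), the law of cosines applied to the three chin keypoints, under the same hypothetical keypoint array:

    import numpy as np

    def chin_angle(keypoints: np.ndarray) -> float:
        """Chin angle in degrees; keypoints[n-1] == pt_n is assumed."""
        pt = lambda n: keypoints[n - 1]
        a = np.linalg.norm(pt(17) - pt(13))
        b = np.linalg.norm(pt(17) - pt(21))
        c = np.linalg.norm(pt(13) - pt(21))
        cos_theta = np.clip((a**2 + b**2 - c**2) / (2 * a * b), -1.0, 1.0)
        return float(np.degrees(np.arccos(cos_theta)))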
In some embodiments, face shapes may be classified as pointed ("melon-seed"), standard, and round. Different shapes have different chin angle ranges and, correspondingly, different slimming coefficients. The slimming coefficient is positively correlated with the degree of slimming: the smaller the coefficient, the lighter the slimming. Usually the coefficient for a pointed face is smaller than that for a standard face, which in turn is smaller than that for a round face, so that different coefficients are applied to different shapes and the slimming effect is improved for each. When one image to be processed contains multiple subjects, and thus multiple face regions, the face shape is computed for each region separately, a matched slimming coefficient is determined from each region's shape, and each region is slimmed with its own coefficient. Because the coefficient used for each face region corresponds to that region's shape, the overall image processing effect is improved. This effectively solves the prior-art problems that a small, pointed face looks disharmonious after slimming, and that a round face is slimmed so little that the effect is barely noticeable.
It should be noted that the above embodiments may be combined arbitrarily where technically feasible: the technical features of one or more dependent claims may be combined with those of the independent claims, and features from the corresponding independent claims may be combined in any appropriate manner rather than only in the specific combinations enumerated in the claims. Further combinations are not detailed here.
An embodiment of the present invention further provides an image processing device. Referring to Figure 3, a schematic structural diagram of an image processing device in an embodiment of the present invention, the image processing device 30 may include:
a face recognition unit 31, configured to perform face recognition on an image to be processed to obtain at least one face region;
a face attribute detection unit 32, configured to perform face attribute detection on each face region to obtain face attribute information of each face region;
a processing unit 33, configured to perform beautification processing on each corresponding face region based on the face attribute information of each face region, to obtain a beautified image corresponding to the image to be processed.
In specific implementations, for the working principles and flow of the image processing device 30, reference may be made to the description of the image processing method provided in the above embodiments of the present invention, which is not repeated here.
In specific implementations, the image processing device 30 may correspond to a chip with image processing capability in a terminal, to a chip module including such a chip in a terminal, or to the terminal itself.
In specific implementations, each module/unit included in each device or product described in the above embodiments may be a software module/unit, a hardware module/unit, or partly a software module/unit and partly a hardware module/unit.
For example, for each device or product applied to or integrated in a chip, all of its modules/units may be implemented in hardware such as circuits; alternatively, at least some modules/units may be implemented as software programs running on a processor integrated inside the chip, with the remaining modules/units (if any) implemented in hardware such as circuits. For each device or product applied to or integrated in a chip module, all of its modules/units may be implemented in hardware such as circuits, and different modules/units may be located in the same component (e.g., chip or circuit module) of the chip module or in different components; alternatively, at least some modules/units may be implemented as software programs running on a processor integrated inside the chip module, with the remaining modules/units (if any) implemented in hardware such as circuits. For each device or product applied to or integrated in a terminal, all of its modules/units may be implemented in hardware such as circuits, and different modules/units may be located in the same component (e.g., chip or circuit module) of the terminal or in different components; alternatively, at least some modules/units may be implemented as software programs running on a processor integrated inside the terminal, with the remaining modules/units (if any) implemented in hardware such as circuits.
An embodiment of the present invention further provides a storage medium, which is a non-volatile or non-transitory storage medium storing a computer program that, when run by a processor, executes the steps of the image processing method provided in any of the above embodiments.
An embodiment of the present invention further provides a terminal, including a memory and a processor, where the memory stores a computer program that can be run on the processor, and the processor, when running the computer program, executes the steps of the image processing method provided in any of the above embodiments.
Those of ordinary skill in the art will understand that all or part of the steps in the various methods of the above embodiments can be completed by instructing the relevant hardware through a program, which may be stored in any computer-readable storage medium, such as a ROM, RAM, magnetic disk, or optical disk.
Although the present invention is disclosed as above, it is not limited thereto. Any person skilled in the art may make various changes and modifications without departing from the spirit and scope of the present invention; the protection scope of the present invention shall therefore be as defined by the claims.

Claims (24)

  1. An image processing method, characterized by comprising:
    performing face recognition on an image to be processed to obtain at least one face region;
    performing face attribute detection on each face region respectively to obtain face attribute information of each face region;
    performing beautification processing on each corresponding face region based on the face attribute information of each face region, to obtain a beautified image corresponding to the image to be processed.
  2. The image processing method according to claim 1, characterized in that performing beautification processing on each corresponding face region based on the face attribute information of each face region comprises at least one of the following:
    performing skin smoothing on each corresponding face region based on the face attribute information of each face region;
    performing skin whitening on each corresponding face region based on the face attribute information of each face region.
  3. The image processing method according to claim 2, characterized in that the face attribute information comprises face gender and age, and performing skin smoothing on each corresponding face region based on the face attribute information of each face region comprises:
    determining a skin smoothing level for each face region based on the face gender and age of each face region;
    performing skin smoothing on each corresponding face region using the smoothing level of each face region.
  4. The image processing method according to claim 2, characterized in that the face attribute information comprises ethnic skin color information, and performing skin whitening on each corresponding face region based on the face attribute information of each face region comprises:
    calculating the skin brightness of each face region;
    determining a whitening strength for each face region according to the skin brightness of each face region and the ethnic skin color information, wherein the ethnic skin color information characterizes the ethnicity of the subject being beautified;
    performing whitening on each corresponding face region using the whitening strength of each face region.
  5. The image processing method according to claim 1, characterized by further comprising:
    performing face skin color detection on the image to be processed to obtain a face skin color mask for each face region, the face skin color mask characterizing the probability that each pixel belongs to face skin;
    performing beautification processing on each corresponding face region based on the face attribute information and the face skin color mask of each face region.
  6. The image processing method according to claim 5, characterized in that performing beautification processing on each corresponding face region based on the face attribute information and the face skin color mask of each face region comprises at least one of the following:
    performing skin smoothing on each corresponding face region based on the face attribute information and the face skin color mask of each face region;
    performing skin whitening on each corresponding face region based on the face attribute information and the face skin color mask of each face region.
  7. The image processing method according to claim 6, characterized in that performing skin smoothing on each corresponding face region based on the face attribute information and the face skin color mask of each face region comprises:
    determining a target smoothing level for each face region based on the face attribute information of each face region;
    performing skin smoothing on each corresponding face region based on the target smoothing level and the face skin color mask of each face region, to obtain the beautified image.
  8. The image processing method according to claim 7, characterized in that performing skin smoothing on each corresponding face region based on the target smoothing level and the face skin color mask of each face region to obtain the beautified image comprises:
    performing skin smoothing on each corresponding face region using the target smoothing level of each face region to obtain a first image;
    obtaining a first fusion coefficient based on the face skin color mask, the target smoothing level, and a maximum smoothing level;
    fusing the image to be processed and the first image using the first fusion coefficient to obtain the beautified image.
  9. The image processing method according to claim 8, characterized in that fusing the image to be processed and the first image using the first fusion coefficient to obtain the beautified image comprises fusing the image to be processed and the first image using the following formulas:
    imgDst_smooth = imgY*(1-k) + k*img_smooth;
    k = mask_n*smooth_level/smooth_max, k ∈ [0,1];
    where imgDst_smooth is the beautified image, imgY is the image to be processed, k is the first fusion coefficient, img_smooth is the first image, mask_n is the face skin color mask, smooth_level is the target smoothing level, and smooth_max is the maximum smoothing level.
  10. The image processing method according to claim 6, characterized in that performing whitening on each corresponding face region based on the face attribute information and the face skin color mask of each face region comprises:
    calculating the skin brightness of each face region;
    determining a whitening strength for each face region according to the skin brightness of each face region and ethnic skin color information, wherein the ethnic skin color information characterizes the ethnicity of the subject being beautified, and the face attribute information comprises the ethnic skin color information;
    determining the face skin color area within each face region according to the face skin color mask of each face region;
    whitening the face skin color area within each face region according to the determined whitening strength of each face region.
  11. The image processing method according to claim 5, characterized in that performing face skin color detection on the image to be processed to obtain a face skin color mask for each face region comprises:
    when in video mode, performing face recognition on the image to be processed, aligning face keypoints for each face region, and determining the positions of the face keypoints on the image to be processed;
    selecting face contour keypoints from the face keypoints;
    triangulating and rendering the face contour keypoints to obtain the face skin color mask.
  12. The image processing method according to claim 11, characterized by further comprising:
    obtaining a first intermediate mask after triangulating and rendering the face contour keypoints;
    filtering the first intermediate mask to obtain the face skin color mask.
  13. The image processing method according to claim 12, characterized in that filtering the first intermediate mask to obtain the face skin color mask comprises:
    calculating, from the positions of the face keypoints, the interpupillary distance and the relative distance between the midpoint of the interpupillary line and the center of the mouth;
    determining a filter radius from the interpupillary distance and the relative distance;
    filtering the first intermediate mask based on the filter radius to obtain the face skin color mask.
  14. The image processing method according to claim 5, characterized in that performing face skin color detection on the image to be processed to obtain a face skin color mask for each face comprises:
    when in photo mode, performing face skin color segmentation on the image to be processed;
    obtaining each face skin color mask from the face skin color segmentation result.
  15. The image processing method according to claim 14, characterized by further comprising:
    obtaining a second intermediate mask after performing face skin color segmentation on the image to be processed;
    filtering the second intermediate mask to obtain the face skin color mask.
  16. The image processing method according to claim 15, characterized in that filtering the second intermediate mask comprises:
    filtering the second intermediate mask using adaptive fast guided filtering, wherein the filter parameters of the adaptive fast guided filtering comprise a threshold delimiting smooth and edge regions, a filter radius, and a downsampling ratio, wherein the filter radius is related to the interpupillary distance and to the relative distance between the midpoint of the interpupillary line and the center of the mouth, and the downsampling ratio is related to the size of the image to be processed and the size of the face skin color mask.
  17. The image processing method according to claim 1, characterized by further comprising:
    aligning eye keypoints for each face region in the image to be processed;
    calculating an eye size coefficient for each face region from the positions of the aligned eye keypoints of each face region on the image to be processed;
    determining the eye type of each face region from the eye size coefficient of each face region;
    determining an eye magnification coefficient matched to the eye type of each face region, and magnifying the corresponding eye region using the determined eye magnification coefficient.
  18. The image processing method according to claim 17, characterized in that calculating the eye size coefficient of each face region from the positions of the aligned eye keypoints of each face region on the image to be processed comprises:
    calculating the height and width of each eye, wherein the height of an eye is the distance between the center of the upper eyelid and the center of the lower eyelid, and the width of an eye is the distance between the two eye corners;
    obtaining a size coefficient for each eye from the ratio of the height to the width of each eye;
    determining the eye size coefficient from the size coefficients of the individual eyes.
  19. The image processing method according to claim 17, characterized by further comprising:
    when the eye size coefficient is greater than a preset first threshold, calculating the ratio of the eye width to the face width;
    determining the eye magnification coefficient from the ratio of the eye width to the face width.
  20. The image processing method according to claim 1, characterized by further comprising:
    calculating the face shape of each face region;
    determining a face slimming coefficient matched to each face shape;
    performing face slimming on each corresponding face region using the coefficient matched to each face shape.
  21. The image processing method according to claim 20, characterized in that calculating the face shape of each face region comprises:
    aligning chin keypoints for each face region in the image to be processed;
    calculating a chin angle from the keypoint at the bottom of the chin and the keypoints on both sides of the chin, the chin angle being the angle between the lines connecting the keypoint at the bottom of the chin to the keypoints on both sides of the chin;
    determining the face shape from the chin angle.
  22. An image processing device, characterized by comprising:
    a face recognition unit configured to perform face recognition on an image to be processed to obtain at least one face region;
    a face attribute detection unit configured to perform face attribute detection on each face region to obtain face attribute information of each face region;
    a processing unit configured to perform beautification processing on each corresponding face region based on the face attribute information of each face region, to obtain a beautified image corresponding to the image to be processed.
  23. A storage medium, which is a non-volatile or non-transitory storage medium storing a computer program, characterized in that the computer program, when run by a processor, executes the steps of the image processing method according to any one of claims 1 to 21.
  24. A terminal, comprising a memory and a processor, the memory storing a computer program that can be run on the processor, characterized in that the processor, when running the computer program, executes the steps of the image processing method according to any one of claims 1 to 21.
PCT/CN2021/139036 2021-01-27 2021-12-17 Image processing method and device, storage medium, and terminal WO2022161009A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110111649.0 2021-01-27
CN202110111649.0A CN112784773B (zh) 2021-01-27 2021-01-27 Image processing method and device, storage medium, and terminal

Publications (1)

Publication Number Publication Date
WO2022161009A1 (zh)

Family

ID=75758264

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/139036 WO2022161009A1 (zh) 2021-01-27 2021-12-17 Image processing method and device, storage medium, and terminal

Country Status (2)

Country Link
CN (1) CN112784773B (zh)
WO (1) WO2022161009A1 (zh)

Families Citing this family (8)

Publication number Priority date Publication date Assignee Title
CN112784773B (zh) 2021-01-27 2022-09-27 展讯通信(上海)有限公司 Image processing method and device, storage medium, and terminal
CN113327207B (zh) 2021-06-03 2023-12-08 广州光锥元信息科技有限公司 Method and device applied to face optimization in images
CN113421197B (zh) 2021-06-10 2023-03-10 杭州海康威视数字技术股份有限公司 Beauty image processing method and processing system
CN113591562A (zh) 2021-06-23 2021-11-02 北京旷视科技有限公司 Image processing method and device, electronic device, and computer-readable storage medium
CN113344837B (zh) 2021-06-28 2023-04-18 展讯通信(上海)有限公司 Face image processing method and device, computer-readable storage medium, and terminal
CN113610723B (zh) 2021-08-03 2022-09-13 展讯通信(上海)有限公司 Image processing method and related device
CN113743243A (zh) 2021-08-13 2021-12-03 厦门大学 Deep-learning-based face beautification method
CN114581979A (zh) 2022-03-01 2022-06-03 北京沃东天骏信息技术有限公司 Image processing method and device

Citations (4)

Publication number Priority date Publication date Assignee Title
US20100303345A1 (en) * 2009-06-01 2010-12-02 Apple, Inc. Red-eye reduction using facial detection
CN107274354A (zh) 2017-05-22 2017-10-20 奇酷互联网络科技(深圳)有限公司 Image processing method and device, and mobile terminal
CN107766831A (zh) 2017-10-31 2018-03-06 广东欧珀移动通信有限公司 Image processing method and device, mobile terminal, and computer-readable storage medium
CN112784773A (zh) 2021-01-27 2021-05-11 展讯通信(上海)有限公司 Image processing method and device, storage medium, and terminal

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US8582807B2 (en) * 2010-03-15 2013-11-12 Nec Laboratories America, Inc. Systems and methods for determining personal characteristics
CN107730446B (zh) 2017-10-31 2022-02-18 Oppo广东移动通信有限公司 Image processing method and device, computer equipment, and computer-readable storage medium
CN108171671B (zh) 2018-01-09 2021-02-02 武汉斗鱼网络科技有限公司 Beautification method and device for enlarging eyes
CN108876751A (zh) 2018-07-05 2018-11-23 Oppo广东移动通信有限公司 Image processing method and device, storage medium, and terminal
CN111047619B (zh) 2018-10-11 2022-09-30 展讯通信(上海)有限公司 Face image processing method and device, and readable storage medium

Also Published As

Publication number Publication date
CN112784773B (zh) 2022-09-27
CN112784773A (zh) 2021-05-11

Similar Documents

Publication Publication Date Title
WO2022161009A1 (zh) Image processing method and device, storage medium, and terminal
US10304166B2 (en) Eye beautification under inaccurate localization
US8520089B2 (en) Eye beautification
US8681241B2 (en) Automatic face and skin beautification using face detection
CN108229278B (zh) Face image processing method and device, and electronic equipment
CN106780311B (zh) Fast face image beautification method incorporating skin roughness
US7587083B2 (en) Image processing device
US7539342B2 (en) Image correction apparatus
CN106326823B (zh) Method and system for obtaining a head portrait in a picture
JP2005293539A (ja) Facial expression recognition device
WO2022135574A1 (zh) Skin color detection method and device, mobile terminal, and storage medium
WO2023273247A1 (zh) Face image processing method and device, computer-readable storage medium, and terminal
WO2022052862A1 (zh) Image edge enhancement processing method and application
CN114187166A (zh) Image processing method, intelligent terminal, and storage medium
WO2023010796A1 (zh) Image processing method and related device
CN114240743A (zh) Skin beautification method for face images based on high-contrast skin smoothing
US10567670B2 (en) Image-processing device
CN114627003A (zh) Eye fat removal method, system, device, and storage medium for face images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21922599

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21922599

Country of ref document: EP

Kind code of ref document: A1