CN113421197A - Processing method and processing system of beautifying image - Google Patents

Processing method and processing system of beautifying image

Info

Publication number
CN113421197A
Authority
CN
China
Prior art keywords: beauty, image, face, probability, map
Prior art date
Legal status
Granted
Application number
CN202110650610.6A
Other languages
Chinese (zh)
Other versions
CN113421197B (en)
Inventor
张鹏
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202110650610.6A priority Critical patent/CN113421197B/en
Publication of CN113421197A publication Critical patent/CN113421197A/en
Application granted granted Critical
Publication of CN113421197B publication Critical patent/CN113421197B/en
Legal status: Active

Classifications

    • G06T5/77
    • G06T3/04
    • G06T7/11 Region-based segmentation (G06T7/00 Image analysis; G06T7/10 Segmentation; Edge detection)
    • G06T2207/10024 Color image (G06T2207/10 Image acquisition modality)
    • G06T2207/30201 Face (G06T2207/30 Subject of image; G06T2207/30196 Human being; Person)

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a processing method and a processing system for a beauty image. The processing method comprises the following steps: receiving an image to be processed that contains a face region; inputting the image to be processed into a face beauty detection model to obtain a first beauty probability map, where the first beauty probability map contains the beauty probability values of all pixels in the face region; inputting the image to be processed into a facial feature segmentation model to obtain a segmentation map containing facial feature regions; obtaining a second beauty probability map from the first beauty probability map and the segmentation map, where the second beauty probability map contains the beauty probability values of the facial feature regions; and obtaining a face beauty degree detection map from the second beauty probability map and the image to be processed, so as to obtain more accurate beautified-region information and beauty-degree information, together with the difference information between beautified and non-beautified regions.

Description

Processing method and processing system of beautifying image
Technical Field
The invention relates to the technical field of face recognition, in particular to a processing method and a processing system of a beautifying image.
Background
With the rapid development of image processing technology and the wide adoption of terminal devices such as computers and smartphones, people can conveniently modify and beautify the portraits in their photos using application programs and publish the photos on the Internet. In scenarios that require a genuine portrait as input (such as face recognition and face verification), such modified pictures interfere with the stable operation of the system, so they need to be screened out first.
In the prior art, the beauty degree of an image to be processed is judged by computing gradient statistics of skin pixels on a high-frequency image of the skin region, and different processing schemes are then applied in subsequent image processing according to each image's beauty degree. This approach yields only the beauty degree of the skin region as a whole; it cannot give the beauty degree of each specific facial feature region, so the specific beautified regions in the portrait, and the differences between beautified and non-beautified regions, cannot be obtained intuitively.
Disclosure of Invention
The invention provides a processing method and a processing system for a beauty image, which enable a user to intuitively obtain the specific beautified-region information and beauty-degree information in a portrait, as well as the difference information between beautified and non-beautified regions.
In a first aspect, the present invention provides a method for processing a beauty image, the method comprising: receiving an image to be processed containing a face area; inputting an image to be processed into a face beauty detection model to obtain a first face beauty probability map, wherein the first face beauty probability map comprises face beauty probability values of all pixel points in a face region; inputting the image to be processed into a facial feature segmentation model to obtain a segmentation map containing facial feature areas; obtaining a second beauty probability map according to the first beauty probability map and the segmentation map, wherein the second beauty probability map comprises beauty probability values of the face feature areas; and obtaining a face beauty degree detection image according to the second beauty probability image and the image to be processed.
In the scheme, the image to be processed is respectively input into the face beauty detection model and the face five-sense-organ segmentation model to obtain a first beauty probability map and a segmentation map containing a face feature region, so that the beauty probability values of all pixel points in the face region are reflected. And then, obtaining a second beautifying probability map according to the first beautifying probability map and the segmentation map so as to obtain more accurate beautifying region information and beautifying degree information and obtain difference information between the beautifying region and the non-beautifying region.
In a specific embodiment, before the image to be processed is input into the face beauty detection model and the face five sense organs segmentation model, the processing method further includes: and carrying out preprocessing operation on the image to be processed, wherein the preprocessing operation comprises face detection, face key point detection and face alignment.
In a particular embodiment, the facial feature region includes at least one of a forehead region, an eye region, an eyebrow region, a nose region, a mouth region, a chin region, and a cheek region. This makes it easy to obtain the position information of key facial areas such as the forehead, eyes, eyebrows, nose, mouth, chin, and cheeks; the user can select the beauty degree of the facial feature regions of interest while ignoring the rest, and can conveniently compare specific beautified regions against non-beautified ones.
In a specific embodiment, obtaining the second beauty probability map according to the first beauty probability map and the segmentation map includes: presetting sensitivity coefficients, where each sensitivity coefficient corresponds to a facial feature region; and adjusting the probability values of all pixels within the corresponding facial feature region in the first beauty probability map with the sensitivity coefficient, to obtain the second beauty probability map. In combination with the segmentation map containing the facial feature regions, different facial feature regions can thus be given different sensitivity coefficients according to their importance, so as to adjust the visual appearance of the overall beauty probability map.
In one specific embodiment, the sensitivity coefficients are preset such that the larger the area of a facial feature region, the smaller its preset sensitivity coefficient, so as to more accurately reflect the real beauty degree of different facial feature regions.
In a specific embodiment, obtaining the second beauty probability map according to the first beauty probability map and the segmentation map includes: combining the first beauty probability map and the segmentation map, and calculating the weighted average of the probability values of all pixels within each facial feature region in the first beauty probability map, to obtain the second beauty probability map. In this way, the beauty probability of each facial feature region as a whole is obtained, providing richer beauty reference information.
In a specific embodiment, inputting the image to be processed into the face beauty detection model to obtain the first beauty probability map includes: inputting the image to be processed into the face beauty detection model to obtain a beauty template together with the first beauty probability map, where the beauty template contains the coordinate displacement information and the color change amplitude information of each pixel. This makes it convenient to capture the beauty template used in beautifying the image to be processed, and to later transfer that beauty style to other images to be beautified.
In a specific embodiment, the number of images to be processed is at least one, and at least one beauty template is stored for each image to be processed. The processing method further includes: selecting one beauty template from the at least one beauty template as a target beauty template; receiving an image to be beautified that contains a face region to be beautified; beautifying the image to be beautified according to the coordinate displacement information and the color change amplitude information of each pixel in the target beauty template, to obtain a beautified face region; and covering the pixels of the face region to be beautified in the image to be beautified with the pixels of the beautified face region, to obtain the beautified image. A target beauty template selected from the beauty templates obtained in the previous steps is thus applied to another preprocessed face region to be beautified, producing a beauty effect consistent with the original face region, so that the beauty effect is migrated from one beautified image to another image to be beautified.
In a specific embodiment, obtaining the face beauty degree detection map according to the second beauty probability map and the image to be processed includes: weighting the value of each pixel in the image to be processed with the value of the corresponding pixel in the second beauty probability map, to obtain the face beauty degree detection map.
In a second aspect, the present invention also provides a processing system for a beauty image, the system comprising: a receiving module, a face beauty detection model, a facial feature segmentation model, a beauty degree adjusting module, and a superposition module. The receiving module is used for receiving an image to be processed containing a face region. The face beauty detection model is used for processing the input image to obtain a first beauty probability map, where the first beauty probability map contains the beauty probability values of all pixels in the face region. The facial feature segmentation model is used for segmenting the input image to obtain a segmentation map containing facial feature regions. The beauty degree adjusting module is used for obtaining a second beauty probability map according to the first beauty probability map and the segmentation map, where the second beauty probability map contains the beauty probability values of the facial feature regions. The superposition module is used for obtaining a face beauty degree detection map according to the second beauty probability map and the image to be processed.
In the scheme, the image to be processed is respectively input into the face beauty detection model and the face five-sense-organ segmentation model to obtain a first beauty probability map and a segmentation map containing a face feature region, so that the beauty probability values of all pixel points in the face region are reflected. And then, obtaining a second beautifying probability map according to the first beautifying probability map and the segmentation map so as to obtain more accurate beautifying region information and beautifying degree information and obtain difference information between the beautifying region and the non-beautifying region.
In a specific embodiment, the processing system further includes a preprocessing operation module, before the image to be processed is input into the face beauty detection model and the face five sense organs segmentation model, the preprocessing operation module is configured to perform preprocessing operation on the image to be processed, where the preprocessing operation includes face detection, face keypoint detection, and face alignment.
In a specific embodiment, the facial feature segmentation model is used for segmenting an input image to be processed to obtain a segmentation map including facial feature regions, wherein the facial feature regions include at least one of a forehead region, an eye region, an eyebrow region, a nose region, a mouth region, a chin region and a cheek region.
In a specific embodiment, the beauty treatment degree adjusting module comprises a preset sensitivity coefficient module and a first calculating module. The preset sensitivity coefficient module is used for presetting a sensitivity coefficient, and the sensitivity coefficient and the facial feature region have a corresponding relation; the first calculation module is used for adjusting probability values of all pixel points in the corresponding facial feature region in the first beauty probability map by adopting the sensitivity coefficient to obtain a second beauty probability map.
In a specific embodiment, the sensitivity coefficient presetting module presets according to the following rule: the larger the area of the facial feature region is, the smaller the sensitivity coefficient preset for the facial feature region is.
In a specific embodiment, the beauty degree adjusting module is configured to calculate a weighted average of probability values of all pixel points in each facial feature region in the first beauty probability map by combining the first beauty probability map and the segmentation map, so as to obtain the second beauty probability map.
In a specific embodiment, the face beauty detection model is used for outputting a beauty template and a first beauty probability map according to an input image to be processed; the beautifying template comprises coordinate displacement size information and color change amplitude information of each pixel point.
In a specific embodiment, the number of images to be processed is at least one, and at least one beauty template is stored for each image to be processed. The processing system further comprises a beauty migration module, which is used for selecting one beauty template from the at least one beauty template as a target beauty template; for receiving an image to be beautified that contains a face region to be beautified; for beautifying the image to be beautified according to the coordinate displacement information and the color change amplitude information of each pixel in the target beauty template, to obtain a beautified face region; and for covering the pixels of the face region to be beautified in the image to be beautified with the pixels of the beautified face region, to obtain the beautified image.
Drawings
Fig. 1 is a flowchart of a method for processing a beauty image according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a process for beautifying an image according to an embodiment of the present invention;
fig. 3 is a flowchart of another method for processing a beauty image according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
To facilitate understanding of the method for processing a beauty image provided by the embodiment of the present invention, an application scenario of the method for processing a beauty image is first described below, where the method is applied to a process of processing a beauty image and is used to detect region information and degree information of a face image being beautified. The treatment method will be described in detail below with reference to the drawings.
Referring to fig. 1, a method for processing a beauty image according to an embodiment of the present invention includes:
step 10: receiving an image to be processed containing a face area;
step 20: inputting an image to be processed into a face beauty detection model to obtain a first face beauty probability map, wherein the first face beauty probability map comprises face beauty probability values of all pixel points in a face region; inputting the image to be processed into a facial feature segmentation model to obtain a segmentation map containing facial feature areas;
step 30: obtaining a second beauty probability map according to the first beauty probability map and the segmentation map, wherein the second beauty probability map comprises beauty probability values of the face feature areas;
step 40: and obtaining a face beauty degree detection image according to the second beauty probability image and the image to be processed.
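The four steps above can be sketched end to end as follows. The two trained networks are replaced by hypothetical stand-ins, since the patent does not specify model architectures, and the final weighting function is a placeholder — this is a minimal illustration of the data flow, not the patented implementation.

```python
import numpy as np

# Hypothetical stand-ins for the trained models described in the patent;
# a real implementation would load a face beauty detection network and a
# facial feature segmentation network here.
def beauty_detection_model(image):
    # Returns a per-pixel beauty probability map in [0, 1].
    return np.full(image.shape[:2], 0.5)

def feature_segmentation_model(image):
    # Returns an integer label map (0 = background, 1..N = feature regions).
    labels = np.zeros(image.shape[:2], dtype=int)
    labels[2:6, 2:6] = 1  # pretend a "nose" region was segmented
    return labels

def process(image, sensitivity=None):
    sensitivity = sensitivity if sensitivity is not None else {1: 0.9}
    prob1 = beauty_detection_model(image)    # step 20: first beauty probability map
    seg = feature_segmentation_model(image)  # step 20: segmentation map
    prob2 = prob1.copy()                     # step 30: second beauty probability map
    for label, coeff in sensitivity.items():
        prob2[seg == label] = np.clip(prob1[seg == label] * coeff, 0.0, 1.0)
    # step 40: weight the input by the adjusted probabilities (placeholder f).
    detection = image[..., 0] * prob2
    return prob2, detection

img = np.ones((8, 8, 3))
prob2, detection = process(img)
```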
In the scheme, the image to be processed is respectively input into the face beauty detection model and the face five-sense-organ segmentation model to obtain a first beauty probability map and a segmentation map containing a face feature region, so that the beauty probability values of all pixel points in the face region are reflected. And then, obtaining a second beautifying probability map according to the first beautifying probability map and the segmentation map so as to obtain more accurate beautifying region information and beautifying degree information and obtain difference information between the beautifying region and the non-beautifying region. The above steps will be described in detail with reference to the accompanying drawings.
First, referring to fig. 1 and 2, an image to be processed containing a face region is received. The image to be processed may be captured by a camera or by a terminal with an image-capturing function, such as a mobile phone. The image to be processed has been retouched on the basis of the original photograph, i.e. beautified to some degree. When a face image is modified locally, for example by slimming the cheeks or enlarging the eyes, the overall consistency of the original image is damaged, and beauty detection can exploit this inconsistency to detect the beauty degree.
Next, as shown in fig. 1, fig. 2, and fig. 3, the image to be processed is input into the face beauty detection model to obtain a first beauty probability map, where the first beauty probability map includes the beauty probability values of all pixels in the face region. The image to be processed is also input into the facial feature segmentation model to obtain a segmentation map containing facial feature regions. That is, the image to be processed is input into the face beauty detection model and the facial feature segmentation model respectively, yielding the first beauty probability map and the segmentation map.
The received image to be processed can be input directly into the face beauty detection model and the facial feature segmentation model without further processing. Alternatively, the received image may first be preprocessed, and the preprocessed image then input into the two models. The preprocessing may include face detection, which extracts the face region from the image to be processed. It should be understood that the preprocessing is not limited to face detection; face key point detection and face alignment may also be applied. Face key point detection inputs the image to be processed into a face key point detection model to obtain the coordinates of each face key point within the face region. Face alignment uses an affine transformation to map the region where the detected key points lie onto the fixed region of an artificially defined template face, where the template face is an artificially defined standard face that mainly specifies the coordinates of each face key point.
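The face-alignment step described above maps detected key points onto an artificially defined template face via an affine transformation. Below is a minimal sketch of estimating such a transform from key-point pairs by least squares — a standard technique; the patent does not specify how the transform is estimated, so the example points and method are illustrative assumptions.

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Least-squares affine transform mapping source key points onto
    template-face key points: dst ≈ A @ src + t."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    n = len(src)
    # Linear system for the 6 affine parameters [a00 a01 tx a10 a11 ty].
    X = np.zeros((2 * n, 6))
    X[0::2, 0:2] = src; X[0::2, 2] = 1.0
    X[1::2, 3:5] = src; X[1::2, 5] = 1.0
    y = dst.reshape(-1)
    p, *_ = np.linalg.lstsq(X, y, rcond=None)
    A = np.array([[p[0], p[1]], [p[3], p[4]]])
    t = np.array([p[2], p[5]])
    return A, t

# Example: detected key points offset by (10, 20) map back onto the template.
src = [(10, 20), (30, 20), (20, 40)]
dst = [(0, 0), (20, 0), (10, 20)]
A, t = estimate_affine(src, dst)
aligned = np.asarray(src, dtype=float) @ A.T + t
```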
Referring to fig. 2, after the image to be processed is input into the face beauty detection model for calculation, the resulting first beauty probability map may be, for example, a pixel-level beauty probability map or a feature-level beauty probability map. The first beauty probability map consists of the beauty probability values of all pixels in the face region, where each pixel's value represents the probability that the pixel has been beautified. Specifically, each beauty probability value lies between 0 and 1: the closer the value is to 1, the higher the probability that the pixel has been beautified, and the closer to 0, the lower that probability. The map thus reflects the beauty probability information of all pixels in the face region.
The image to be processed is also input into the facial feature segmentation model, which computes a segmentation map containing facial feature regions. Referring to fig. 2, the facial feature region may include at least one of a forehead region, an eye region, an eyebrow region, a nose region, a mouth region, a chin region, and a cheek region. This makes it easy to obtain the position information of key facial areas such as the forehead, eyes, eyebrows, nose, mouth, chin, and cheeks; the user can select the beauty degree of the facial feature regions of interest while ignoring the rest, and can conveniently compare specific beautified regions against non-beautified ones.
Next, referring to fig. 1, fig. 2 and fig. 3, a second beauty probability map is obtained according to the first beauty probability map and the segmentation map, wherein the second beauty probability map includes beauty probability values of the facial feature regions. Namely, the first beauty probability map can be corrected and adjusted according to the obtained first beauty probability map and the segmentation map, so as to obtain a more accurate and more applicable second beauty probability map.
When the second beauty probability map is obtained by correcting, adjusting and the like according to the first beauty probability map and the segmentation map, a sensitivity coefficient can be preset, wherein the sensitivity coefficient has a corresponding relation with the facial feature region; and then, adjusting the probability values of all pixel points in the corresponding facial feature region in the first beauty probability map by adopting the sensitivity coefficient to obtain a second beauty probability map. Namely, a corresponding sensitivity coefficient is preset for each facial feature region, and then the sensitivity coefficient is adopted to adjust the probability values of all pixel points in each facial feature region in the first beauty probability map. Specifically, different sensitivity coefficients can be given to different facial feature regions according to the importance degree by combining a segmentation graph containing the facial feature regions, so that the beauty degrees of different facial feature regions can be more accurately reflected by the processed second beauty probability graph, and the visual impression of the whole beauty probability graph is adjusted.
When sensitivity coefficients are preset for the facial feature regions, each region can be given a coefficient matched to it. Because the facial feature regions differ in size, their importance in visualization also differs; for example, a modification of the same magnitude applied to the eyes and to the cheeks gives the user a very different impression. Different facial feature regions can therefore be given different sensitivity coefficients during visualization, according to their importance and in combination with the segmentation map containing the facial feature regions, so as to adjust the visual appearance of the overall beauty probability map. Specifically, the larger the area of a facial feature region, the smaller the sensitivity coefficient assigned to it; correspondingly, the smaller the area, the larger the coefficient. This more accurately reflects the real beauty degree of different facial feature regions. For example, the preset sensitivity coefficient may be 0.9 for the nose region, 2 for the eye region, 0.4 for the cheek region, and so on.
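The region-wise sensitivity adjustment can be sketched as follows, using the example coefficients from the text (nose 0.9, eyes 2, cheeks 0.4). The integer region labels are a hypothetical encoding of the segmentation map, and clipping to [0, 1] is an added assumption to keep the adjusted values interpretable as probabilities.

```python
import numpy as np

# Hypothetical region labels in the segmentation map.
NOSE, EYES, CHEEK = 1, 2, 3
# Example coefficients from the text: smaller regions (eyes) get larger ones.
SENSITIVITY = {NOSE: 0.9, EYES: 2.0, CHEEK: 0.4}

def adjust(prob_map, seg_map, sensitivity=SENSITIVITY):
    """Scale the per-pixel beauty probabilities region by region."""
    out = prob_map.astype(float).copy()
    for label, coeff in sensitivity.items():
        mask = seg_map == label
        out[mask] = prob_map[mask] * coeff
    # Clip so the result still reads as a probability map (assumption).
    return np.clip(out, 0.0, 1.0)

prob = np.full((4, 4), 0.4)          # first beauty probability map
seg = np.zeros((4, 4), dtype=int)    # segmentation map
seg[0] = NOSE; seg[1] = EYES; seg[2] = CHEEK
adjusted = adjust(prob, seg)         # second beauty probability map
```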
It should be understood that the manner of obtaining the second beauty probability map from the first beauty probability map and the segmentation map is not limited to the manner of assigning the sensitivity coefficients shown above, and other manners may be adopted. For example, a mean calculation may also be used. Specifically, after the first beauty probability map and the segmentation map are obtained, a weighted average of probability values of all pixel points in each facial feature region in the first beauty probability map can be calculated by combining the first beauty probability map and the segmentation map, and a second beauty probability map is obtained. So as to obtain the beautifying probability of each facial feature region as a whole and provide more beautifying reference information. It should be noted that the foregoing processing manners of calculating the mean value and assigning the sensitivity coefficient may be used in an overlapping manner to improve the accuracy of the face beauty degree detection image obtained finally reflecting the beauty degrees of different facial feature regions. During specific superposition, a processing mode of firstly adopting sensitivity coefficient adjustment and then adopting mean value calculation can be adopted; or a processing mode of firstly carrying out mean value calculation and then adopting sensitivity coefficient adjustment can be adopted.
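The mean-based variant — pooling the first beauty probability map into one value per facial feature region — might look like the following minimal sketch. Uniform weights are the default here, since the patent does not specify the weighting; label 0 is assumed to mean background.

```python
import numpy as np

def region_means(prob_map, seg_map, weights=None):
    """Replace each segmented region's per-pixel probabilities with the
    region's (weighted) mean, yielding one beauty value per feature region."""
    out = prob_map.astype(float).copy()
    w = np.ones_like(out) if weights is None else np.asarray(weights, dtype=float)
    for label in np.unique(seg_map):
        if label == 0:  # 0 = background (assumption), left untouched
            continue
        mask = seg_map == label
        out[mask] = np.average(prob_map[mask], weights=w[mask])
    return out

prob = np.array([[0.2, 0.8],
                 [0.1, 0.9]])
seg = np.array([[1, 1],
                [0, 0]])
pooled = region_means(prob, seg)
```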
Next, referring to fig. 1, fig. 2, and fig. 3, a face beauty degree detection map is obtained from the second beauty probability map and the image to be processed. During specific processing, the value of each pixel in the image to be processed can be weighted with the value of the corresponding pixel in the second beauty probability map to obtain the face beauty degree detection map. Specifically, if the value of a pixel in the image to be processed is A and the corresponding probability value in the second beauty probability map is P, the weighted value is B = f(A, P), where f() is a weighting function. The second beauty probability map can also be converted into a pseudo-color map; for a face region that was pre-aligned, the pseudo-color map is mapped back onto the image to be processed using the inverse of the affine transformation used for face alignment, and superimposed on the input image to obtain a visualized face beauty degree detection map. The resulting detection map may be an image as shown on the rightmost side of fig. 2, where "P_forehead = 0" means that the probability that the forehead region was beautified is 0; "P_eye = 0" means that the probability that the eye region was beautified is 0; "P_nose = 0.99" means that the probability that the nose region was beautified is 0.99; "P_mouth = 0.78" means that the probability that the mouth region was beautified is 0.78; and "P_cheek = 0.71" means that the probability that the cheek region was beautified is 0.71.
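The weighting function f(A, P) is left unspecified by the text; one illustrative choice — not the patent's definition — is an alpha blend of a pseudo-color (here, red) heat layer whose strength follows the beauty probability:

```python
import numpy as np

def overlay(image, prob_map, alpha=0.6):
    """One plausible weighting f(A, P): alpha-blend a red heat layer whose
    intensity is the beauty probability onto the image (illustrative only)."""
    img = image.astype(float)
    heat = np.zeros_like(img)
    heat[..., 0] = 255.0 * prob_map       # red channel carries the probability
    a = (alpha * prob_map)[..., None]     # blend more strongly where P is high
    return (1.0 - a) * img + a * heat

img = np.full((2, 2, 3), 100.0)
prob = np.array([[0.0, 1.0],
                 [0.5, 0.5]])
out = overlay(img, prob)
```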
In addition, when the image to be processed is input into the face beauty detection model to obtain the first beauty probability map, the model can also compute a beauty template for the image to be processed, where the beauty template contains the coordinate displacement information and the color change amplitude information of each pixel. The model's output thus includes not only the first beauty probability map but also the beauty template corresponding to the image to be processed, making it convenient to capture the beauty template used in beautifying that image and to later transfer its beauty style to other images to be beautified. The coordinate displacement information and color change amplitude information mean the following: assuming a pixel in the image to be beautified has coordinate X and pixel value A, and the beauty template's operation on that pixel is to add ΔA to its value and shift it by ΔX, then the pixel's new value is A + ΔA and its coordinate moves to X + ΔX.
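The A + ΔA / X + ΔX description can be illustrated with a toy forward warp. The dict encoding of the template and the handling of the vacated source pixel (left unchanged here) are simplifications for illustration, not the patent's representation.

```python
import numpy as np

def apply_template(image, template):
    """Apply a beauty template: `template` maps (y, x) -> (dy, dx, dv),
    shifting the pixel's value by dv and moving it to (y+dy, x+dx).
    The vacated source pixel is left unchanged (simplification)."""
    out = image.astype(float)
    h, w = image.shape
    for (y, x), (dy, dx, dv) in template.items():
        ny, nx = y + dy, x + dx
        if 0 <= ny < h and 0 <= nx < w:   # drop moves that leave the image
            out[ny, nx] = image[y, x] + dv
    return out

img = np.zeros((3, 3))
img[1, 1] = 10.0
# Brighten the (1, 1) pixel by 5 and move it one pixel to the right.
warped = apply_template(img, {(1, 1): (0, 1, 5.0)})
```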
In addition, the number of the images to be processed may be at least one. The face beauty detection model calculates each image to be processed to obtain at least one first beauty probability map and at least one beauty template, where each image to be processed corresponds to one first beauty probability map and one beauty template. For example, if the number of the images to be processed is 3, the number of the corresponding first beauty probability maps is also 3, and the number of the beauty templates is also 3.
When the migration of the beauty effect is specifically realized, a beauty template can be selected from the at least one beauty template as a target beauty template. For example, one beauty template may be selected from the aforementioned 3 beauty templates as the target beauty template. Referring to fig. 3, an image to be beautified containing a face region to be beautified is then received; the image to be beautified may be captured by a camera or by a terminal with a camera function, such as a mobile phone, and may be an original image without any beauty modification. Then, beauty processing is performed on the image to be beautified according to the coordinate displacement size information and the color change amplitude information of each pixel point in the target beauty template to obtain a beautified face region. The pixel points of the face region to be beautified in the image to be beautified are then covered with the pixel points of the beautified face region to obtain the beautified image. By selecting a target beauty template from the at least one beauty template obtained in the preceding steps and applying it to another face region to be beautified, a beauty effect consistent with the target beauty template is obtained, and the beauty effect is thereby migrated from the original beautified image to another image to be beautified.
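The final covering step, replacing the pixels of the face region to be beautified with the pixels of the beautified face region, can be sketched with a boolean face mask. The mask-based formulation and the function names are illustrative assumptions:

```python
import numpy as np

def cover_face_region(target_image, face_mask, beautified_face):
    """Overlay the beautified face-region pixels onto the image to
    be beautified: inside the mask take the beautified result,
    outside the mask keep the original pixels."""
    mask = face_mask.astype(bool)
    out = target_image.copy()
    out[mask] = beautified_face[mask]
    return out
```

Only the masked face region changes; the background of the image to be beautified is untouched.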
In addition, after the image to be beautified is received, a preprocessing operation may be performed on it, where the preprocessing operation may include face detection, face key point detection, face alignment, and the like. The purpose of face detection is to detect the face region to be beautified from the image to be beautified. Face key point detection inputs the image to be beautified into a face key point detection model to obtain the coordinate information of each face key point of the face region to be beautified in the image to be beautified. Face alignment maps the region where the detected face key points are located onto the fixed region of an artificially defined template face by means of an affine transformation, where the template face is an artificially defined standard face that mainly specifies the coordinate information of each face key point. Then, according to the coordinate information of each key point of the face region to be beautified in the image to be beautified, the target beauty template is applied to the face region to be beautified of the image to be beautified, so that a beauty effect consistent with the target beauty template is obtained and beauty migration is realized. Applying the target beauty template means performing beauty processing on the image to be beautified according to the coordinate displacement size information and the color change amplitude information of each pixel point in the target beauty template to obtain the beautified face region.
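A common recipe for the face-alignment step is to estimate the affine transformation that maps the detected key points onto the template-face key points by least squares; the embodiment does not specify a solver, so the formulation below is an assumption:

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Least-squares affine transform mapping detected face key
    points (src_pts) onto the key points of a hand-defined template
    face (dst_pts). Returns a 3x2 matrix M such that
    [x, y, 1] @ M approximates [x', y']."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coords
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M
```

The inverse of this transformation is what maps the pseudo-color probability map back onto the original image, as described earlier.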
The image to be processed is input into the face beauty detection model and the facial five-sense-organ segmentation model respectively to obtain the first beauty probability map and a segmentation map containing facial feature regions, so that the beauty probability values of all pixel points in the face region are reflected. Then, a second beauty probability map is obtained according to the first beauty probability map and the segmentation map, so as to obtain more accurate beauty region information and beauty degree information, together with difference information between beautified regions and non-beautified regions.
In addition, an embodiment of the present invention further provides a processing system for a beauty image, where the processing system includes: a receiving module, a face beauty detection model, a facial five-sense-organ segmentation model, a beauty degree adjusting module, and a superposition module. The receiving module is used for receiving an image to be processed containing a face region; the face beauty detection model is used for calculating the input image to be processed to obtain a first beauty probability map, where the first beauty probability map includes beauty probability values of all pixel points in the face region; the facial five-sense-organ segmentation model is used for segmenting the input image to be processed to obtain a segmentation map containing facial feature regions; the beauty degree adjusting module is used for obtaining a second beauty probability map according to the first beauty probability map and the segmentation map; and the superposition module is used for obtaining a face beauty degree detection map according to the second beauty probability map and the image to be processed.
In the scheme, the image to be processed is respectively input into the face beauty detection model and the face five-sense-organ segmentation model to obtain a first beauty probability map and a segmentation map containing a face feature region, so that the beauty probability values of all pixel points in the face region are reflected. And then, obtaining a second beautifying probability map according to the first beautifying probability map and the segmentation map so as to obtain more accurate beautifying region information and beautifying degree information and obtain difference information between the beautifying region and the non-beautifying region.
In addition, the processing system can further comprise a preprocessing operation module, before the image to be processed is input into the face beauty detection model and the face five-sense-organ segmentation model, the preprocessing operation module is used for preprocessing the image to be processed, wherein the preprocessing operation comprises face detection, face key point detection and face alignment.
The facial five-sense-organ segmentation model can be used for segmenting the input image to be processed to obtain a segmentation map containing facial feature regions, where the facial feature regions include at least one of a forehead region, an eye region, an eyebrow region, a nose region, a mouth region, a chin region, and a cheek region.
The beautifying degree adjusting module can comprise a preset sensitivity coefficient module and a first calculating module. The preset sensitivity coefficient module is used for presetting a sensitivity coefficient, and the sensitivity coefficient and the facial feature region have a corresponding relation; the first calculation module is used for adjusting probability values of all pixel points in the corresponding facial feature region in the first beauty probability map by adopting the sensitivity coefficient to obtain a second beauty probability map.
The preset sensitivity coefficient module can be preset by adopting the following rules: the larger the area of the facial feature region is, the smaller the sensitivity coefficient preset for the facial feature region is.
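As an illustration of how such sensitivity coefficients could act on the first beauty probability map (the multiplicative adjustment and the coefficient values below are assumptions, not specified by the embodiment):

```python
import numpy as np

def adjust_with_sensitivity(prob_map, seg_map, coeffs):
    """Scale each pixel's beauty probability by the sensitivity
    coefficient of the facial feature region it belongs to.

    seg_map: integer region labels from the segmentation model
    coeffs:  {label: coefficient}; following the preset rule, a
             small-area region (e.g. the eyes) would get a larger
             coefficient than a large-area region (e.g. the cheeks).
    """
    out = prob_map.astype(np.float32)
    for label, k in coeffs.items():
        out[seg_map == label] *= k
    return out.clip(0.0, 1.0)  # keep the result a valid probability map
```

The clipped result is the second beauty probability map for these regions.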
In addition, the beauty degree adjusting module can also be used for calculating, by combining the first beauty probability map and the segmentation map, the weighted average value of the probability values of all pixel points in each facial feature region of the first beauty probability map to obtain the second beauty probability map.
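A minimal sketch of this per-region averaging, assuming uniform weights within each region and an integer-labeled segmentation map:

```python
import numpy as np

def region_mean_probability(prob_map, seg_map):
    """Replace every pixel's probability with the (uniformly
    weighted) mean probability of its facial feature region,
    yielding a per-region second beauty probability map."""
    out = np.zeros_like(prob_map, dtype=np.float32)
    for label in np.unique(seg_map):
        region = seg_map == label
        out[region] = prob_map[region].mean()
    return out
```

Every pixel of a region then carries a single probability value, matching the per-region values shown in fig. 2 (e.g. one value for the nose region, one for the cheek region).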
The face beautifying detection model can also be used for outputting a beautifying template and a first beautifying probability graph according to the input image to be processed; the beautifying template comprises coordinate displacement size information and color change amplitude information of each pixel point.
In addition, the number of the images to be processed can be at least one, and at least one beauty template is correspondingly stored for each image to be processed. The processing system further comprises a beauty migration module, wherein the beauty migration module is used for selecting a beauty template from the at least one beauty template as a target beauty template; is also used for receiving an image to be beautified containing a face region to be beautified; is also used for performing beauty processing on the image to be beautified according to the coordinate displacement size information and the color change amplitude information of each pixel point in the target beauty template to obtain a beautified face region; and is also used for covering the pixel points of the face region to be beautified in the image to be beautified with the pixel points of the beautified face region to obtain the beautified image.
It should be noted that the functional modules described above include not only hardware having logical operation and storage functions, but also software programs stored or run thereon to implement the functional steps described in the foregoing. The hardware having logical operation and storage functions may include processors such as a central processing unit or an image processor, and various storage media.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A method for processing a beauty image, comprising:
receiving an image to be processed containing a face area;
inputting the image to be processed into a face beauty detection model to obtain a first beauty probability graph; the first beauty probability graph comprises beauty probability values of all pixel points in the face region;
inputting the image to be processed into a facial feature segmentation model to obtain a segmentation map containing facial feature areas;
obtaining a second beautifying probability map according to the first beautifying probability map and the segmentation map; wherein the second beauty probability map comprises beauty probability values for the facial feature regions;
and obtaining a face beauty degree detection image according to the second beauty probability image and the image to be processed.
2. The processing method of claim 1, wherein before inputting the image to be processed into the face beauty detection model and the face five-sense-organ segmentation model, the processing method further comprises:
and preprocessing the image to be processed, wherein the preprocessing operation comprises face detection, face key point detection and face alignment.
3. The processing method according to claim 1, wherein the facial feature region includes at least one of a forehead region, an eye region, an eyebrow region, a nose region, a mouth region, a chin region, and a cheek region.
4. The processing method of claim 1, wherein said deriving a second beauty probability map based on the first beauty probability map and the segmentation map comprises:
presetting a sensitivity coefficient, wherein the sensitivity coefficient has a corresponding relation with the facial feature region;
and adjusting the probability values of all pixel points in the corresponding facial feature region in the first beauty probability map by adopting the sensitivity coefficient to obtain the second beauty probability map.
5. The processing method of claim 4, wherein the preset sensitivity factor comprises:
the larger the area of the facial feature region is, the smaller the preset sensitivity coefficient of the facial feature region is.
6. The processing method of claim 1, wherein said deriving a second beauty probability map based on said first beauty probability map and said segmentation map further comprises:
and calculating the weighted average value of the probability values of all pixel points in each facial feature region in the first beauty probability map by combining the first beauty probability map and the segmentation map to obtain the second beauty probability map.
7. The processing method as claimed in claim 1, wherein said inputting the image to be processed into a face beauty detection model to obtain a first beauty probability map comprises:
inputting the image to be processed into the face beauty detection model to obtain a beauty template and a first beauty probability graph; the beautifying template comprises coordinate displacement size information and color change amplitude information of each pixel point.
8. The processing method according to claim 7, wherein the number of the images to be processed is at least one, and at least one beauty template is correspondingly stored in each image to be processed;
the processing method further comprises the following steps:
selecting a beauty template from the at least one beauty template as a target beauty template;
receiving a face image to be beautified containing a face area to be beautified;
performing beautifying processing on the image to be beautified according to the coordinate displacement size information and the color change amplitude information of each pixel point in the target beautifying template to obtain a beautified face area;
and covering the pixel points of the face area to be beautified in the image to be beautified with the pixel points of the beautified face area to obtain a beautified image.
9. The processing method of claim 1, wherein the obtaining a face beautification degree detection map according to the second beautification probability map and the image to be processed comprises:
and weighting the values of pixel points one by one in the image to be processed and the values of corresponding pixel points in the second beauty probability map to obtain the face beauty degree detection map.
10. A system for processing a cosmetic image, comprising:
the receiving module is used for receiving an image to be processed containing a face area;
the face beautifying detection model is used for calculating the input image to be processed to obtain a first beautifying probability map; the first beauty probability graph comprises beauty probability values of all pixel points in the face region;
the human face five sense organ segmentation model is used for segmenting the input image to be processed to obtain a segmentation image containing a facial feature region;
the beautifying degree adjusting module is used for obtaining a second beautifying probability map according to the first beautifying probability map and the segmentation map; wherein the second beauty probability map comprises beauty probability values for the facial feature regions;
and the superposition module is used for obtaining a face beauty degree detection image according to the second beauty probability image and the image to be processed.
CN202110650610.6A 2021-06-10 2021-06-10 Processing method and processing system of beautifying image Active CN113421197B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110650610.6A CN113421197B (en) 2021-06-10 2021-06-10 Processing method and processing system of beautifying image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110650610.6A CN113421197B (en) 2021-06-10 2021-06-10 Processing method and processing system of beautifying image

Publications (2)

Publication Number Publication Date
CN113421197A true CN113421197A (en) 2021-09-21
CN113421197B CN113421197B (en) 2023-03-10

Family

ID=77788278

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110650610.6A Active CN113421197B (en) 2021-06-10 2021-06-10 Processing method and processing system of beautifying image

Country Status (1)

Country Link
CN (1) CN113421197B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101685542A (en) * 2008-09-24 2010-03-31 索尼株式会社 Electronic device, fuzzy image sorting method and program
CN107392110A (en) * 2017-06-27 2017-11-24 五邑大学 Beautifying faces system based on internet
CN108257097A (en) * 2017-12-29 2018-07-06 努比亚技术有限公司 Beauty effect adjustment method, terminal and computer readable storage medium
CN109447031A (en) * 2018-11-12 2019-03-08 北京旷视科技有限公司 Image processing method, device, equipment and storage medium
CN112001285A (en) * 2020-08-14 2020-11-27 深圳世间乐见科技有限公司 Method, device, terminal and medium for processing beautifying image
CN112508777A (en) * 2020-12-18 2021-03-16 咪咕文化科技有限公司 Beautifying method, electronic equipment and storage medium
CN112784773A (en) * 2021-01-27 2021-05-11 展讯通信(上海)有限公司 Image processing method and device, storage medium and terminal

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101685542A (en) * 2008-09-24 2010-03-31 索尼株式会社 Electronic device, fuzzy image sorting method and program
US20110116726A1 (en) * 2008-09-24 2011-05-19 Sony Corporation Electronic apparatus, blur image sorting method, and program
CN107392110A (en) * 2017-06-27 2017-11-24 五邑大学 Beautifying faces system based on internet
CN108257097A (en) * 2017-12-29 2018-07-06 努比亚技术有限公司 Beauty effect adjustment method, terminal and computer readable storage medium
CN109447031A (en) * 2018-11-12 2019-03-08 北京旷视科技有限公司 Image processing method, device, equipment and storage medium
CN112001285A (en) * 2020-08-14 2020-11-27 深圳世间乐见科技有限公司 Method, device, terminal and medium for processing beautifying image
CN112508777A (en) * 2020-12-18 2021-03-16 咪咕文化科技有限公司 Beautifying method, electronic equipment and storage medium
CN112784773A (en) * 2021-01-27 2021-05-11 展讯通信(上海)有限公司 Image processing method and device, storage medium and terminal

Also Published As

Publication number Publication date
CN113421197B (en) 2023-03-10

Similar Documents

Publication Publication Date Title
CN109829930B (en) Face image processing method and device, computer equipment and readable storage medium
CN108229278B (en) Face image processing method and device and electronic equipment
CN108229279B (en) Face image processing method and device and electronic equipment
CN109952594B (en) Image processing method, device, terminal and storage medium
WO2019228473A1 (en) Method and apparatus for beautifying face image
CN106056064B (en) A kind of face identification method and face identification device
CN106682632B (en) Method and device for processing face image
JP4862955B1 (en) Image processing apparatus, image processing method, and control program
KR101455950B1 (en) Image-processing device, image-processing method, and recording medium for control program
CN107818305A (en) Image processing method, device, electronic equipment and computer-readable recording medium
CN108447017A (en) Face virtual face-lifting method and device
KR101141643B1 (en) Apparatus and Method for caricature function in mobile terminal using basis of detection feature-point
CN108428214B (en) Image processing method and device
WO2018096661A1 (en) Image generation device, face verification device, image generation method, and recording medium in which program is stored
CN112214773B (en) Image processing method and device based on privacy protection and electronic equipment
CN111062891A (en) Image processing method, device, terminal and computer readable storage medium
CN108682050B (en) Three-dimensional model-based beautifying method and device
CN109272579B (en) Three-dimensional model-based makeup method and device, electronic equipment and storage medium
CN109242760B (en) Face image processing method and device and electronic equipment
CN111369478B (en) Face image enhancement method and device, computer equipment and storage medium
WO2023273247A1 (en) Face image processing method and device, computer readable storage medium, terminal
CN114187166A (en) Image processing method, intelligent terminal and storage medium
CN113421197B (en) Processing method and processing system of beautifying image
WO2022258013A1 (en) Image processing method and apparatus, electronic device and readable storage medium
CN114155569B (en) Cosmetic progress detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant