WO2019052449A1 - Skin color recognition method and apparatus, and storage medium (肤色识别方法及装置、存储介质) - Google Patents


Info

Publication number
WO2019052449A1
Authority
WO
WIPO (PCT)
Prior art keywords: target, color, pixel, face image, skin color
Application number
PCT/CN2018/105103
Other languages
English (en)
French (fr)
Inventor
杜凌霄
Original Assignee
广州市百果园信息技术有限公司
Application filed by 广州市百果园信息技术有限公司 filed Critical 广州市百果园信息技术有限公司
Priority to US 16/646,765 (US 11,348,365 B2)
Publication of WO2019052449A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/162 Detection; Localisation; Normalisation using pixel segmentation or colour matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/169 Holistic features and representations, i.e. based on the facial image taken as a whole
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Definitions

  • the present application relates to the field of image processing, and in particular, to a skin color recognition method and apparatus, and a storage medium.
  • The terminal can perform skin beautification on a captured face image, and the beautification scheme used by the terminal differs between a face image of the target skin color and a face image of a non-target skin color. For example, the terminal can use a black-skin beautification scheme (a scheme designed for black skin) on a face image of black skin (the target skin color), and a non-black-skin beautification scheme (a scheme designed for non-black skin) on a face image of non-black skin (a non-target skin color). Therefore, before beautifying a face image, the terminal needs to determine whether the skin color of the face in the face image is the target skin color.
  • In the related art, the method for determining whether the skin color of a face is the target skin color includes: the terminal collects a face image in the RGB color mode and acquires the intensity value (also called the color value) of the red color component, the green color component, and the blue color component of the face image; it then compares the intensity value of the red color component with the red intensity value range corresponding to the target skin color, the intensity value of the green color component with the green intensity value range corresponding to the target skin color, and the intensity value of the blue color component with the blue intensity value range corresponding to the target skin color. If all three intensity values fall within their corresponding ranges, the terminal determines that the skin color of the face is the target skin color; otherwise, the terminal determines that the skin color of the face is a non-target skin color.
  • However, the intensity value of each color component in a face image of the RGB color mode is related to the brightness of the color, so the face image is easily affected by illumination. As a result, the accuracy of the intensity values determined by the terminal is low, which in turn lowers the accuracy with which the terminal determines the target skin color.
  • the present application provides a skin color recognition method and device, and a storage medium, which can solve the problem that the terminal has low accuracy in determining the target skin color.
  • the technical solution is as follows:
  • In a first aspect, a skin color recognition method is provided, comprising: acquiring a face image; determining a target color gamut difference of each pixel in the face image, where the target gamut difference of each pixel is the difference between the intensity values of two specified color components of that pixel; and determining, according to the target gamut differences of all pixels in the face image, a skin color confidence that the skin color of the face in the face image belongs to the target skin color, the skin color confidence reflecting the probability that the skin color of the face in the face image is the target skin color.
  • In a second aspect, a skin color recognition device is provided, including: an acquisition module configured to acquire a face image; a first determining module configured to determine a target color gamut difference of each pixel in the face image, where the target gamut difference of each pixel is the difference between the intensity values of two specified color components of that pixel; and a second determining module configured to determine, according to the target gamut differences of all pixels in the face image, a skin color confidence that the skin color of the face in the face image belongs to the target skin color, the skin color confidence reflecting the probability that the skin color of the face in the face image is the target skin color.
  • In a third aspect, a skin color recognition apparatus is provided, comprising: a processor; and a memory in which instructions executable by the processor are stored; wherein the processor is configured to execute the instructions to implement the steps of: acquiring a face image; determining a target color gamut difference of each pixel in the face image, the target color gamut difference of each pixel being the difference between the intensity values of two specified color components of that pixel; and determining, according to the target gamut differences of all pixels in the face image, a skin color confidence that the skin color of the face in the face image belongs to the target skin color, the skin color confidence reflecting the probability that the skin color of the face in the face image is the target skin color.
  • In a fourth aspect, a computer readable storage medium is provided, having stored therein instructions that, when executed on a processing component, cause the processing component to perform the skin color recognition method according to the first aspect or any optional implementation of the first aspect.
  • With the skin color recognition method and device and the storage medium provided by the embodiments of the present application, after the terminal acquires a face image, it determines the target color gamut difference of each pixel in the face image and, according to the target gamut differences of all pixels, determines the skin color confidence that the skin color of the face belongs to the target skin color. Since the target gamut difference eliminates the brightness factor of the face image and thus avoids the influence of illumination on the face image, the problem of low accuracy in determining the target skin color can be solved, improving the accuracy with which the terminal determines the target skin color.
  • FIG. 1 is a flowchart of a method for identifying a skin color according to an embodiment of the present application
  • FIG. 2 is a flowchart of another skin color recognition method provided by an embodiment of the present application
  • FIG. 3 is a flowchart of a method for determining a target gamut difference of each pixel in a face image according to an embodiment of the present application
  • FIG. 4 is a schematic diagram of a target image area provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of another target image area provided by an embodiment of the present application.
  • FIG. 6 is a flowchart of a method for determining a target gamut difference of each pixel in a target image region according to an embodiment of the present application
  • FIG. 7 is a flowchart of a method for determining a skin color confidence of a skin color of a face in a face image according to an embodiment of the present application
  • FIG. 8 is a flowchart of another method for determining the skin color confidence of a skin color of a face in a face image according to an embodiment of the present application
  • FIG. 9 is a relationship diagram of a skin color confidence level and a target color gamut difference of a face image according to an embodiment of the present application.
  • FIG. 10 is a block diagram of a skin color recognition device according to an embodiment of the present application.
  • FIG. 11 is a block diagram of a first determining module according to an embodiment of the present application.
  • FIG. 12 is a block diagram of a second determining module according to an embodiment of the present application.
  • FIG. 13 is a block diagram of another first determining module according to an embodiment of the present application.
  • FIG. 14 is a block diagram of a skin color recognition device according to an embodiment of the present application.
  • The YUV color mode is a color coding mode adopted by European television systems; it is the color mode used by the PAL (Phase Alternation Line) and SECAM (Séquentiel Couleur à Mémoire) television systems. In a modern color television system, a three-tube color camera or a charge-coupled device (CCD) camera is usually used for image acquisition, and the obtained color image signal is then subjected to color separation and amplification correction.
  • PAL: Phase Alternation Line
  • SECAM: Séquentiel Couleur à Mémoire
  • CCD: Charge-Coupled Device
  • The image in the RGB color mode is processed by a matrix conversion circuit to obtain the luminance signal Y, the color difference signal B-Y, and the color difference signal R-Y. These three signals are then encoded separately, and the encoded luminance signal Y, encoded color difference signal B-Y, and encoded color difference signal R-Y are finally transmitted on the same channel. The color difference signal B-Y is the signal of the blue chrominance component U, and the color difference signal R-Y is the signal of the red chrominance component V; in an image of the YUV color mode, the luminance signal Y and the chrominance component signals are thus separated.
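The transform described above can be sketched as a per-pixel conversion. The coefficients below follow the common BT.601 full-range approximation, which is an assumption here: the patent describes the luminance/color-difference decomposition qualitatively but does not fix exact coefficients in this text.

```python
def rgb_to_yuv(r, g, b):
    """Convert one 8-bit RGB pixel to YUV (BT.601 full-range approximation).

    Y is the luminance signal; U and V correspond to the scaled and offset
    colour-difference signals B-Y and R-Y described above.
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128  # blue chrominance (from B-Y)
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128   # red chrominance (from R-Y)
    return y, u, v
```

For a neutral pixel (equal R, G, B) both chrominance components sit at the 128 midpoint, which is why luminance and color information are cleanly separated in this mode.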
  • During a video call or a live video broadcast, it is often necessary to use a skin beautification scheme to process the collected face image to eliminate small flaws in the face image.
  • The same skin beautification scheme can produce different effects on different skin colors, the difference between black skin and yellow skin being especially large. If the same scheme is used to process both a black-skin face image and a yellow-skin face image, it may be difficult to achieve the desired beautification effect. Therefore, face images of different skin colors are usually processed with different schemes, and it is accordingly necessary to recognize the skin color of the face in a face image before processing it with a skin beautification scheme.
  • the skin color recognition method provided by the embodiment of the present application can identify the target skin color.
  • The skin color recognition method provided by the embodiments of the present application can be performed by a terminal, and the terminal can be a smart phone, a tablet computer, a smart TV, a smart watch, a laptop computer, a desktop computer, or the like.
  • FIG. 1 is a flowchart of a method for identifying a skin color according to an embodiment of the present application.
  • the skin color recognition method may be performed by a terminal.
  • the skin color recognition method includes:
  • Step 101 The terminal acquires a face image.
  • Step 102 The terminal determines a target color gamut difference of each pixel in the face image, and the target color gamut difference of each pixel is a difference between the intensity values of the two color components specified in each pixel.
  • Step 103: The terminal determines, according to the target color gamut differences of all pixels in the face image, the skin color confidence that the skin color of the face in the face image belongs to the target skin color; the skin color confidence reflects the probability that the skin color of the face in the face image is the target skin color.
  • In summary, after the terminal acquires a face image, it determines the target color gamut difference of each pixel in the face image and, according to the target gamut differences of all pixels, determines the skin color confidence that the skin color of the face belongs to the target skin color. Since the target gamut difference eliminates the brightness factor of the face image and avoids the influence of illumination, the accuracy with which the terminal determines the target skin color is improved.
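Steps 101 through 103 can be sketched end to end for an RGB image. This is a minimal illustration under two assumptions not fixed by the text at this point: the gamut difference is taken as R minus G (the black-skin example used later), and the confidence mapping is a clamped linear ramp between example bounds of 80 and 160.

```python
def recognize_skin_color(pixels, c_min=80.0, c_max=160.0):
    """End-to-end sketch of steps 101-103 for an RGB face image.

    pixels: list of (r, g, b) tuples for the face image (step 101).
    c_min/c_max: example bounds on the target gamut difference (assumption).
    """
    confidences = []
    for r, g, _b in pixels:
        c = r - g                                        # step 102: target gamut difference
        p = min(1.0, max(0.0, (c - c_min) / (c_max - c_min)))  # per-pixel colour confidence
        confidences.append(p)
    return sum(confidences) / len(confidences)           # step 103: skin colour confidence
```

A pixel with R=200, G=60 contributes a gamut difference of 140 and so a high confidence; a pixel at the lower bound contributes zero, and the mean over all pixels is the image-level confidence.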
  • FIG. 2 is a flowchart of a method for identifying another skin color according to an embodiment of the present application.
  • the skin color recognition method may be performed by a terminal.
  • the skin color recognition method includes:
  • Step 201 The terminal acquires a face image.
  • The terminal may obtain the face image during a live video broadcast or during a video call, or may determine the face image from a video or an image stored on the terminal.
  • the face image refers to the image of the human face.
  • a camera is disposed in the terminal, and the terminal obtains a face image by using a camera.
  • the face image may be a face image of the target skin color
  • the target skin color may be the skin color recognized by the skin color recognition method provided by the embodiment of the present application.
  • the target skin tone is a black skin tone.
  • The description here takes the camera built into the terminal as an example; in an actual application, the camera may also be an independent camera, which is not limited in the embodiments of the present application.
  • Step 202 The terminal determines a target color gamut difference of each pixel in the face image, and the target color gamut difference of each pixel is a difference between the intensity values of the two color components specified in each pixel.
  • The terminal may determine the target color gamut difference of each pixel in the entire face image, or select a target image region from the face image and determine the target color gamut difference of each pixel in the target image region.
  • the embodiment of the present application is described by taking an example of determining a target color gamut difference of each pixel in a target image region.
  • FIG. 3 is a flowchart of a method for determining a target gamut difference of each pixel in a face image according to an embodiment of the present application. Referring to FIG. 3, the method includes:
  • Sub-step 2021 the terminal determines a target image area from the face image.
  • the terminal may determine the target image region from the face image.
  • The target image area may be an area including the face in the face image, or an area including a majority of the face, that is, most of the face is located in the target image area. The target image area may be the area circled by the face frame, or a quarter area at the center of the area circled by the face frame; in any case, the target image area is at least a 10 × 10-pixel area at the center of the face image. When the quarter area at the center of the area circled by the face frame contains more than 10 × 10 pixels, the target image area may be that center quarter; when it contains no more than 10 × 10 pixels, the target image area is a 10 × 10-pixel area at the center of the face image.
  • As shown in FIG. 4, the terminal may determine the target image region Q1 from the face image F. The target image area Q1 is the area circled by a face frame (not shown in FIG. 4) in the face image F; when the circled area includes 40 × 60 pixels, the target image area Q1 includes 40 × 60 pixels. Alternatively, as shown in FIG. 5, the target image area Q1 is the quarter area at the center of the area circled by the face frame X in the face image F, and the target image area Q1 then includes 20 × 30 pixels.
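The region-selection rule above (center quarter when large enough, otherwise a 10 × 10 fallback) can be sketched as follows; returning only the region's dimensions keeps the sketch simple, which is a simplification of the text's description.

```python
def select_target_region(face_w, face_h, min_size=10):
    """Pick the target-image-region dimensions from the face-frame size.

    Uses the centred quarter of the face frame when it still holds more
    than min_size x min_size pixels, otherwise falls back to a
    min_size x min_size centre region (min_size=10 follows the 10x10
    figure in the text).
    """
    quarter_w, quarter_h = face_w // 2, face_h // 2  # quarter area = half each side
    if quarter_w * quarter_h > min_size * min_size:
        return quarter_w, quarter_h
    return min_size, min_size
```

With a 40 × 60 face frame this yields the 20 × 30 center quarter from the FIG. 5 example; a 12 × 12 frame falls back to 10 × 10.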
  • the face frame is a rectangular virtual frame displayed when the terminal detects the face by using the face detection technology
  • the face detection technology may be a Viola-Jones face detection technology or a face detection technology based on deep learning.
  • the development of face detection technology makes it possible to detect faces quickly and reliably from complex backgrounds.
  • the implementation process of the terminal to detect the face by using the face detection technology may be referred to the related art, and details are not repeatedly described herein.
  • Sub-step 2022 the terminal determines a target color gamut difference for each pixel in the target image region.
  • The target gamut difference of each pixel in the target image region may be determined; the target gamut difference of each pixel is the difference between the intensity values of the two color components specified in that pixel.
  • Depending on the color mode of the face image, the terminal may determine the target color gamut difference of each pixel in the target image region in different ways.
  • A color in the RGB color mode is obtained by varying and superimposing the three color channels of red, green, and blue light; therefore, the color components of a face image in the RGB color mode include a red color component, a green color component, and a blue color component.
  • When the face image is in the RGB color mode, the terminal can directly subtract the intensity values of the two specified color components of each pixel to obtain the target color gamut difference of that pixel; when the face image is in the YUV color mode, the terminal can calculate the target gamut difference of each pixel from the chrominance values of the chrominance components of that pixel.
  • the terminal determining the target gamut difference of each pixel in the target image region may include the following two aspects:
  • In the first aspect, the terminal subtracts the intensity values of the two color components specified in each pixel to obtain the target color gamut difference of that pixel.
  • The intensity values of at least two color components of each pixel of the face image may be determined, the at least two color components including the two specified color components; the terminal subtracts the intensity values of the two specified color components, and the resulting difference for each pixel is the target gamut difference of that pixel.
  • the target skin color may be a black skin color
  • the specified two color components may include a red color component R and a green color component G.
  • For the process of determining the intensity values of the color components of each pixel, the terminal may refer to the related art; details are not repeated herein.
  • The terminal may determine the intensity value of the red color component R and the intensity value of the green color component G of each pixel in the target image region Q1, and subtract the intensity value of the green color component G from the intensity value of the red color component R of each pixel to obtain the target color gamut difference of that pixel. For example, assuming that the intensity value of the red color component R of pixel 1 in the target image region Q1 is 200 and the intensity value of its green color component G is 60, the terminal subtracts the two and obtains a target gamut difference of 140 for pixel 1.
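The first aspect reduces to a per-pixel subtraction. A minimal sketch, with the R and G components chosen as the two specified components per the black-skin example:

```python
def target_gamut_diff_rgb(pixels):
    """Target colour-gamut difference (R - G) for each RGB pixel.

    pixels: iterable of (r, g, b) tuples; returns one difference per pixel.
    The blue component is unused when R and G are the specified components.
    """
    return [r - g for r, g, _b in pixels]
```

For pixel 1 above (R=200, G=60) this yields 140, matching the worked example.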
  • In the second aspect, the terminal calculates the target gamut difference of each pixel according to the chrominance values of the chrominance components of that pixel.
  • FIG. 6 is a flowchart of a method for determining a target gamut difference of each pixel in a target image region according to an embodiment of the present application.
  • the method includes:
  • Sub-step 20221 the terminal determines a chrominance value for each chrominance component of each pixel.
  • the terminal may determine a chrominance value for each chrominance component of each pixel in the target image region.
  • the chrominance component of each pixel may include: a blue chrominance component U and a red chrominance component V, and therefore, the terminal may determine the blue chrominance component U and the red chrominance component V of each pixel.
  • The terminal determines the chrominance value of the blue chrominance component U and the chrominance value of the red chrominance component V of each pixel in the target image region Q1.
  • the implementation process of determining the chrominance value of each chrominance component of each pixel in the target image area may be referred to the related art, and details are not described herein again.
  • Sub-step 20222 The terminal determines a target color gamut difference of each pixel according to a chrominance value of each chrominance component of each pixel.
  • the terminal determines a target gamut difference for each pixel based on the blue chrominance component U and the red chrominance component V of each pixel. Specifically, the terminal calculates the target color gamut difference of each pixel by using the gamut difference formula according to the chrominance value of the blue chrominance component U of each pixel and the chromaticity value of the red chrominance component V;
  • the values of a and b can be set according to actual conditions.
  • When calculating the target gamut difference of each pixel from the chrominance values of its chrominance components, which of the above four formulas is used may be determined according to the operating system of the terminal.
  • The target gamut difference of each pixel can be calculated by using the above formula (1) or (3), while a terminal running Apple's mobile operating system (iOS) can use any one of the above four formulas to calculate the target gamut difference of each pixel.
  • For pixel 1 above, the terminal calculates the target color gamut difference by using the above formula (1), and the difference can be 140.15.
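The four gamut-difference formulas referenced above, with their constants a and b, are not reproduced in this excerpt. One plausible instance, offered purely as an assumption, takes the form C = a·(V − 128) + b·(U − 128) with defaults derived from the BT.601 inverse transform so that C approximates R − G; the helper converting RGB to chrominance is likewise an illustrative assumption.

```python
def rgb_to_uv(r, g, b):
    """BT.601 forward transform for the two chrominance components (helper)."""
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128
    return u, v

def gamut_diff_from_yuv(u, v, a=2.116, b=0.344):
    """Hypothetical gamut-difference formula: C = a*(V-128) + b*(U-128).

    a and b are tunable constants, per the text; these defaults make C
    approximate R - G. This is an assumed form, not the patent's exact
    formula (1).
    """
    return a * (v - 128) + b * (u - 128)
```

Feeding the pixel-1 example through (R=200, G=60, with a hypothetical B=30) gives a difference close to 140, consistent with the 140.15 quoted for formula (1).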
  • The above description takes, as an example, the terminal selecting a target image region from the face image and determining the target color gamut difference of each pixel in the target image region. The terminal may also determine the target gamut difference of each pixel in the whole face image; that process is the same as or similar to the process of determining the target gamut difference of each pixel in the target image region, and is not repeated here.
  • Determining the target color gamut difference only for the pixels in the target image area requires a smaller amount of computation, so determining the target gamut differences within the target image area, as in this embodiment, can reduce the amount of calculation.
  • Step 203: The terminal determines, according to the target color gamut differences of all pixels in the face image, the skin color confidence that the skin color of the face in the face image belongs to the target skin color; the skin color confidence reflects the probability that the skin color of the face in the face image is the target skin color.
  • The terminal may determine, according to the target color gamut differences of all pixels in the face image, the skin color confidence that the skin color of the face in the face image belongs to the target skin color.
  • the skin color confidence level reflects the probability that the skin color of the face in the face image is the target skin color, that is, the magnitude of the possibility that the skin color of the face in the face image is the target skin color.
  • FIG. 7 is a flowchart of a method for determining the skin color confidence of a skin color of a face in a face image according to an embodiment of the present application.
  • the method includes:
  • Step 2031A The terminal determines a color confidence of each pixel according to a target gamut difference of each pixel.
  • the terminal may determine the color confidence of each pixel according to the target gamut difference of each pixel.
  • the terminal calculates a color confidence of each pixel according to a target gamut difference of each pixel by using a color confidence formula;
  • the color confidence formula is:
  • where C denotes the target gamut difference of each pixel, C_min denotes the minimum value of the target gamut difference, and C_max denotes the maximum value of the target gamut difference.
  • The terminal may determine the relationship between the target gamut difference of each pixel and C_min and C_max, substitute the target gamut difference into the above confidence formula accordingly, and calculate the color confidence of each pixel. For example, the terminal substitutes the target gamut difference of pixel 1 into the color confidence formula and calculates the color confidence of pixel 1: assuming that the target gamut difference C of pixel 1 is 140, the minimum value C_min of the target gamut difference is 80, and the maximum value C_max is 160, the terminal can determine the color confidence of pixel 1 according to the above formula.
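The color confidence formula itself does not survive in this excerpt. A clamped linear ramp between C_min and C_max is a natural reading of the definitions above and is used here as an assumption, not as the patent's exact formula:

```python
def color_confidence(c, c_min=80.0, c_max=160.0):
    """Per-pixel colour confidence from the target gamut difference C.

    Assumed form: 0 below C_min, 1 above C_max, linear in between.
    Defaults match the [80, 160] example interval in the text.
    """
    if c <= c_min:
        return 0.0
    if c >= c_max:
        return 1.0
    return (c - c_min) / (c_max - c_min)
```

Under this assumed formula, pixel 1 with C = 140 would receive a confidence of (140 − 80) / (160 − 80) = 0.75.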
  • The minimum value C_min and the maximum value C_max of the target gamut difference may be obtained as follows: the target gamut differences of the pixels in a large number of face images of the target skin color are counted, and a scatter plot is drawn from the statistics. Each point in the scatter plot represents one target gamut difference; the distribution of the target gamut differences is determined from the plot, the interval in which the points are densely distributed is taken as the range of the target gamut difference, the minimum of that range is taken as C_min, and the maximum as C_max. This process may be carried out manually or by the terminal; when it is carried out manually, C_min and C_max are stored in the terminal after they have been determined.
  • For example, the target skin color may be black skin. The target color gamut differences of the pixels in a large number of black-skin face images may be counted, and a scattergram S drawn from them. Each point in the scattergram S represents one target gamut difference; the distribution of the differences is determined from S, the densely populated interval is taken as the range of the target gamut difference, the minimum of that range as C_min, and the maximum as C_max. For example, assuming that the densely populated interval of the target gamut difference on the scattergram S is [80, 160], the minimum value C_min of the target gamut difference is 80 and the maximum value C_max is 160.
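The text describes reading the densely populated interval off a scatter plot, possibly by hand. A percentile cut over the collected differences is one automated stand-in for that manual step; the percentile levels are an assumption, not taken from the patent.

```python
def estimate_gamut_bounds(diffs, lo_pct=0.05, hi_pct=0.95):
    """Estimate C_min and C_max from gamut differences sampled over many
    target-skin-colour face images.

    Trims the sparse tails of the distribution by keeping the
    [lo_pct, hi_pct] percentile interval (levels are assumptions).
    """
    s = sorted(diffs)
    n = len(s)
    lo = s[round(lo_pct * (n - 1))]  # lower edge of the dense interval ~ C_min
    hi = s[round(hi_pct * (n - 1))]  # upper edge of the dense interval ~ C_max
    return lo, hi
```

On a sample whose dense mass spans roughly [80, 160], this returns bounds near those values, matching the worked example.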
  • Before step 203, the target color gamut difference formula may first be used to obtain the target color gamut differences of the pixels in the face image; when calculating them, the gamut difference formula adopted by the terminal is the same as the formula used in sub-step 20222 to determine the target gamut difference of each pixel from the chrominance values of its chrominance components.
  • Step 2032A The terminal determines, according to the color confidence of all the pixels in the face image, that the skin color of the face in the face image belongs to the skin color confidence of the target skin color.
  • the terminal may determine that the skin color of the face in the face image belongs to the skin color confidence of the target skin color according to the color confidence of all the pixels in the face image.
  • The terminal may average the color confidences of all pixels in the face image to obtain the average color confidence, and determine the average color confidence as the skin color confidence that the skin color of the face in the face image belongs to the target skin color.
Assuming that the face image contains n pixels whose color confidences are P 1, P 2, P 3, P 4, P 5, ..., P n, the terminal may average P 1, P 2, P 3, P 4, P 5, ..., P n to obtain the average color confidence and determine it as the skin color confidence that the skin color of the face in the face image belongs to the target skin color. For example, assuming the target skin color is the black skin color, if the terminal determines that the average of the color confidences of the n pixels in face image F is 0.7, the terminal determines that the skin color confidence that the face in face image F has the black skin color is 0.7.
It should be noted that averaging the color confidences of all the pixels in the face image and determining the resulting average color confidence as the skin color confidence is only an example. In practice, the terminal may instead perform a weighting operation on the color confidences of all the pixels to obtain a weighted value, and determine that weighted value as the skin color confidence that the face belongs to the target skin color; the terminal may also determine the skin color confidence by other methods, which are not described again here.
Step 204: The terminal beautifies the skin in the face image according to the skin color confidence that the skin color of the face in the face image belongs to the target skin color.

After determining the skin color confidence, the terminal may beautify the face image according to it. Optionally, the terminal may store a skincare scheme for the target skin color and a skincare scheme for non-target skin colors, each scheme including skincare parameters. Based on the skin color confidence, the terminal processes the skincare parameters of the scheme for the target skin color and those of the scheme for non-target skin colors, obtains the skincare parameters for the face image from the processing result, and beautifies the face image according to those parameters.
Optionally, suppose the skin color confidence that the face in the face image belongs to the target skin color is P, the stored skincare parameter of the scheme for the target skin color is e, and the skincare parameter of the scheme for non-target skin colors is f; the terminal may then determine S = e × P + f × (1 − P) as the skincare parameter for the face image and beautify the image according to this parameter. It should be noted that, in practice, a skincare scheme includes multiple skincare parameters; here the parameter e represents all the skincare parameters of the scheme for the target skin color, and the parameter f represents all the skincare parameters of the scheme for non-target skin colors.
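The blending rule S = e × P + f × (1 − P) given in the description can be sketched as follows; the parameter names in the dictionaries are hypothetical placeholders, since the text does not name individual skincare parameters:

```python
def blend_skincare_params(target_params, non_target_params, p):
    """Blend each skincare parameter of the target-skin-color scheme (e)
    with the corresponding parameter of the non-target scheme (f)
    by the skin color confidence P: S = e * P + f * (1 - P)."""
    return {name: target_params[name] * p + non_target_params[name] * (1 - p)
            for name in target_params}
```

For example, with a (hypothetical) smoothing strength of 0.8 in the target scheme and 0.4 in the non-target scheme, a confidence of 0.7 yields 0.8 × 0.7 + 0.4 × 0.3 = 0.68.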
It should be noted that determining the skin color confidence from the color confidences of all the pixels in the face image is only one approach; in practice, the terminal may instead determine the skin color confidence from a target color gamut difference of the face image as a whole. Specifically, FIG. 8 is a flowchart of another method for determining the skin color confidence that the skin color of a face in a face image belongs to the target skin color according to an embodiment of the present application. Referring to FIG. 8, the method includes:
Sub-step 2031B: The terminal determines the target color gamut difference of the face image according to the target color gamut differences of all the pixels in the face image.

After determining the target color gamut difference of each pixel, the terminal may determine the target color gamut difference of the whole face image from them. Optionally, the terminal may average the target color gamut differences of all the pixels in the face image to obtain an average target color gamut difference, and determine that average as the target color gamut difference of the face image.
Assuming that the face image contains n pixels whose target color gamut differences are C 1, C 2, C 3, C 4, C 5, ..., C n, the terminal may average C 1, C 2, C 3, C 4, C 5, ..., C n to obtain the average target color gamut difference and determine it as the target color gamut difference of the face image. For example, if the terminal determines that the average of the target color gamut differences of the n pixels in face image F is 140, it determines that the target color gamut difference of face image F is 140.
It should be noted that averaging the target color gamut differences of all the pixels in the face image and determining the resulting average as the target color gamut difference of the face image is only an example. In practice, the terminal may instead perform a weighting operation on the target color gamut differences of all the pixels to obtain a weighted value, and determine that weighted value as the target color gamut difference of the face image; the terminal may also determine the target color gamut difference of the face image by other methods, which are not described again here.
Sub-step 2032B: The terminal determines, according to the target color gamut difference of the face image, the skin color confidence that the skin color of the face in the face image belongs to the target skin color.

Optionally, the terminal may calculate the skin color confidence from the target color gamut difference of the face image by using a skin color confidence formula; the skin color confidence formula may be:

P = 0, when C < C min;
P = (C − C min) / (C max − C min), when C min ≤ C ≤ C max;
P = 1, when C > C max;

where P represents the skin color confidence, C represents the target color gamut difference of the face image, C min represents the minimum value of the target color gamut difference, and C max represents the maximum value of the target color gamut difference.
Between C min and C max, the target color gamut difference C of the face image may be linearly related to the skin color confidence P, as shown in FIG. 9.
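Consistent with the linear relationship shown in FIG. 9, the skin color confidence can be modeled as 0 below C min, 1 above C max, and linear in between; a minimal sketch, using the example values C min = 80 and C max = 160 from the text as defaults:

```python
def skin_color_confidence_from_diff(c, c_min=80.0, c_max=160.0):
    """Map a target color gamut difference C to a confidence P:
    0 below C_min, 1 above C_max, linear in between."""
    if c <= c_min:
        return 0.0
    if c >= c_max:
        return 1.0
    return (c - c_min) / (c_max - c_min)
```

A face image with target color gamut difference 140 then gets confidence (140 − 80) / (160 − 80) = 0.75, matching the earlier per-pixel example.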
To sum up, in the skin color recognition method provided by the embodiments of the present application, after acquiring a face image the terminal determines the target color gamut difference of each pixel in the image and, according to the target color gamut differences of all the pixels, determines the skin color confidence that the skin color of the face belongs to the target skin color. Since the target color gamut difference eliminates the brightness factor of the face image and avoids the influence of illumination on it, the problem of the terminal identifying the target skin color with low accuracy is solved, and the accuracy with which the terminal identifies the target skin color is improved.
FIG. 10 is a block diagram of a skin color recognition device 100 according to an exemplary embodiment. The device may be implemented as part or all of a terminal (for example, a smartphone) by software, hardware, or a combination of the two. Referring to FIG. 10, the skin color recognition device 100 may include:
  • the obtaining module 110 is configured to acquire a face image.
The first determining module 120 is configured to determine the target color gamut difference of each pixel in the face image, the target color gamut difference of each pixel being the difference between the intensity values of two specified color components of that pixel.

The second determining module 130 is configured to determine, according to the target color gamut differences of all the pixels in the face image, the skin color confidence that the skin color of the face in the face image belongs to the target skin color, the skin color confidence reflecting the probability that the skin color of the face in the face image is the target skin color.
To sum up, with the skin color recognition device provided by the embodiments of the present application, the terminal determines the target color gamut difference of each pixel in the face image and, according to the target color gamut differences of all the pixels, determines the skin color confidence that the skin color of the face belongs to the target skin color. Since the target color gamut difference eliminates the brightness factor of the face image and avoids the influence of illumination on it, the problem of the terminal identifying the target skin color with low accuracy is solved, and the accuracy with which the terminal identifies the target skin color is improved.
  • FIG. 11 is a block diagram of a first determining module 120 provided by an embodiment of the present application.
  • the first determining module 120 includes:
  • the first determining sub-module 121 is configured to determine a chrominance value of each chrominance component of each pixel.
  • the second determining sub-module 122 is configured to determine a target color gamut difference of each pixel according to a chrominance value of each chrominance component of each pixel.
Optionally, the second determining sub-module 122 is configured to calculate the target color gamut difference of each pixel by using a color gamut difference formula according to the chrominance value of the blue chrominance component and the chrominance value of the red chrominance component of each pixel; the color gamut difference formula may be C = a × (U − 128) + b × (V − 128), where C represents the target color gamut difference of each pixel, U represents the chrominance value of its blue chrominance component, V represents the chrominance value of its red chrominance component, and a and b are constants.
  • FIG. 12 is a block diagram of a second determining module 130 according to an embodiment of the present application.
  • the second determining module 130 includes:
  • the third determining sub-module 131 is configured to determine a color confidence of each pixel according to a target color gamut difference of each pixel.
  • the fourth determining sub-module 132 is configured to determine, according to the color confidence of all the pixels in the face image, the skin color of the face in the face image belongs to the skin color confidence of the target skin color.
Optionally, the fourth determining sub-module 132 is configured to calculate the color confidence of each pixel by using a color confidence formula according to the target color gamut difference of each pixel;

the color confidence formula is:

P = 0, when C < C min;
P = (C − C min) / (C max − C min), when C min ≤ C ≤ C max;
P = 1, when C > C max;

where P denotes the color confidence of each pixel, C denotes the target color gamut difference of each pixel, C min denotes the minimum value of the target color gamut difference, and C max denotes the maximum value of the target color gamut difference.
  • FIG. 13 is a block diagram of another first determining module 120 provided by the embodiment of the present application.
  • the first determining module 120 includes:
  • the fifth determining sub-module 123 is configured to determine a target image region from the face image.
  • the sixth determining sub-module 124 is configured to determine a target color gamut difference of each pixel in the target image region.
Optionally, the specified two color components include a red color component and a green color component, and the target skin color is a black skin color.
To sum up, with the skin color recognition device provided by the embodiments of the present application, the terminal determines the target color gamut difference of each pixel in the face image and, according to the target color gamut differences of all the pixels, determines the skin color confidence that the skin color of the face belongs to the target skin color. Since the target color gamut difference eliminates the brightness factor of the face image and avoids the influence of illumination on it, the problem of the terminal identifying the target skin color with low accuracy is solved, and the accuracy with which the terminal identifies the target skin color is improved.
  • FIG. 14 is a block diagram of a skin color recognition device 200, according to an exemplary embodiment.
The device 200 can be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.

Referring to FIG. 14, the apparatus 200 can include one or more of the following components: processing component 202, memory 204, power component 206, multimedia component 208, audio component 210, input/output (I/O) interface 212, sensor component 214, and communication component 216.
  • Processing component 202 typically controls the overall operation of device 200, such as operations associated with display, telephone calls, data communications, positioning, camera operations, and recording operations.
  • Processing component 202 can include one or more processors 220 to execute instructions to perform all or part of the steps of the skin tone recognition method described above.
The processing component 202 can include one or more modules to facilitate interaction between the processing component 202 and other components.
  • processing component 202 can include a multimedia module to facilitate interaction between multimedia component 208 and processing component 202.
  • Memory 204 is configured to store various types of data to support operations on device 200. Examples of such data include instructions for any application or method operating on device 200, contact data, phone book data, messages, pictures, videos, and the like.
The memory 204 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • Power component 206 provides power to various components of device 200.
  • Power component 206 can include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for device 200.
  • the multimedia component 208 includes a screen that provides an output interface between the device 200 and the user.
In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP).
  • the screen can be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel.
  • the touch sensor can sense not only the boundaries of the touch or sliding action, but also the duration and pressure associated with the touch or slide operation.
  • the multimedia component 208 includes a front camera and/or a rear camera. When the device 200 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data.
  • Each front and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
  • the audio component 210 is configured to output and/or input an audio signal.
For example, the audio component 210 includes a microphone (MIC) that is configured to receive an external audio signal when the device 200 is in an operational mode, such as a call mode, a recording mode, or a voice recognition mode.
  • the received audio signal may be further stored in memory 204 or transmitted via communication component 216.
  • audio component 210 also includes a speaker for outputting an audio signal.
  • the I/O interface 212 provides an interface between the processing component 202 and the peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
Sensor component 214 includes one or more sensors for providing status assessments of various aspects of device 200.
For example, sensor component 214 can detect the open/closed state of device 200 and the relative positioning of components (such as the display and keypad of device 200), and it can also detect a change in position of device 200 or of one of its components, the presence or absence of user contact with device 200, the orientation or acceleration/deceleration of device 200, and changes in the temperature of device 200.
  • Sensor assembly 214 can include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
In some embodiments, the sensor component 214 may further include a photo sensor, such as a complementary metal-oxide-semiconductor (CMOS) or charge-coupled device (CCD) image sensor, for use in imaging applications.
  • the sensor assembly 214 can also include an acceleration sensor, a gyro sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 216 is configured to facilitate wired or wireless communication between device 200 and other devices.
The device 200 can access a wireless network based on a communication standard, such as Wi-Fi, 2G, 3G, or a combination thereof.
In an exemplary embodiment, communication component 216 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel.

In an exemplary embodiment, communication component 216 also includes a near field communication (NFC) module to facilitate short-range communication.
For example, the NFC module can be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 200 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the skin color recognition method described above.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions is also provided, such as the memory 204 comprising instructions executable by the processor 220 of the apparatus 200 to perform the skin color recognition method described above. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
A non-transitory computer-readable storage medium is provided; when the instructions in the storage medium are executed by the processor of the apparatus 200, the apparatus 200 is enabled to perform a skin color recognition method, the method comprising:

acquiring a face image;

determining a target color gamut difference of each pixel in the face image, the target color gamut difference of each pixel being the difference between the intensity values of two specified color components of that pixel; and

determining, according to the target color gamut differences of all the pixels in the face image, a skin color confidence that the skin color of the face in the face image belongs to a target skin color, the skin color confidence reflecting the probability that the skin color of the face in the face image is the target skin color.
To sum up, after acquiring a face image, the terminal determines the target color gamut difference of each pixel in the image and, according to the target color gamut differences of all the pixels, determines the skin color confidence that the skin color of the face belongs to the target skin color. Since the target color gamut difference eliminates the brightness factor of the face image and avoids the influence of illumination on it, the problem of the terminal identifying the target skin color with low accuracy is solved, and the accuracy with which the terminal identifies the target skin color is improved.
An embodiment of the present application further provides a skin color recognition device, the skin color recognition device including:

a processor; and

a memory for storing instructions executable by the processor;

wherein the processor is configured to:

acquire a face image;

determine a target color gamut difference of each pixel in the face image, the target color gamut difference of each pixel being the difference between the intensity values of two specified color components of that pixel; and

determine, according to the target color gamut differences of all the pixels in the face image, a skin color confidence that the skin color of the face in the face image belongs to a target skin color, the skin color confidence reflecting the probability that the skin color of the face in the face image is the target skin color.
An embodiment of the present application further provides a computer-readable storage medium storing instructions that, when run on a processing component, cause the processing component to perform the skin color recognition method shown in FIG. 1 or FIG. 2.


Abstract

A skin color recognition method and apparatus, and a storage medium, belonging to the field of image processing. The method comprises: acquiring a face image (101); determining a target color gamut difference of each pixel in the face image, the target color gamut difference of each pixel being the difference between the intensity values of two specified color components of that pixel (102); and determining, according to the target color gamut differences of all the pixels in the face image, a skin color confidence that the skin color of the face in the face image belongs to a target skin color, the skin color confidence reflecting the probability that the skin color of the face in the face image is the target skin color (103). The method is used for skin color recognition; it solves the problem of low accuracy when a terminal identifies a target skin color and improves the accuracy with which the terminal identifies the target skin color.

Description

Skin color recognition method and apparatus, and storage medium

Technical Field

This application relates to the field of image processing, and in particular to a skin color recognition method and apparatus, and a storage medium.
Background

With the popularity of terminals such as mobile phones, more and more people like to use a terminal for video calls or live streaming. During a video call or live stream, the terminal can beautify the skin in the captured face images, and it uses different skincare schemes for face images of the target skin color and face images of non-target skin colors. For example, the terminal may use a black-skin scheme (a skincare scheme for people with black skin) to beautify face images of the black skin color (the target skin color), and a non-black-skin scheme (a skincare scheme for people with non-black skin) to beautify face images of non-black skin colors (non-target skin colors). Therefore, before beautifying a face image, the terminal needs to determine whether the skin color of the face in the image is the target skin color.

In the prior art, the terminal determines whether the skin color of a face is the target skin color as follows: the terminal captures a face image in RGB color mode and obtains the intensity value (also called the color value) of its red color component, the intensity value of its green color component, and the intensity value of its blue color component; it then compares the intensity value of the red color component with the red intensity value range corresponding to the target skin color, the intensity value of the green color component with the corresponding green intensity value range, and the intensity value of the blue color component with the corresponding blue intensity value range. When the intensity values of the red, green, and blue color components all fall within the respective ranges corresponding to the target skin color, the terminal determines that the skin color of the face is the target skin color; otherwise, it determines that the skin color of the face is a non-target skin color.
In the process of implementing this application, the inventor found that the prior art has at least the following problems:

In a face image in RGB color mode, the intensity value of each color component is correlated with the brightness of that color, so a face image in RGB color mode is easily affected by illumination. As a result, the accuracy of the intensity values of the color components determined by the terminal is low, which in turn makes the terminal's identification of the target skin color inaccurate.
Summary

This application provides a skin color recognition method and apparatus, and a storage medium, which can solve the problem of low accuracy when a terminal identifies a target skin color. The technical solutions are as follows:

According to a first aspect of the embodiments of the present application, a skin color recognition method is provided, the method comprising: acquiring a face image; determining a target color gamut difference of each pixel in the face image, the target color gamut difference of each pixel being the difference between the intensity values of two specified color components of that pixel; and determining, according to the target color gamut differences of all the pixels in the face image, a skin color confidence that the skin color of the face in the face image belongs to a target skin color, the skin color confidence reflecting the probability that the skin color of the face in the face image is the target skin color.

According to a second aspect of the embodiments of the present application, a skin color recognition apparatus is provided, the apparatus comprising: an acquiring module, configured to acquire a face image; a first determining module, configured to determine a target color gamut difference of each pixel in the face image, the target color gamut difference of each pixel being the difference between the intensity values of two specified color components of that pixel; and a second determining module, configured to determine, according to the target color gamut differences of all the pixels in the face image, a skin color confidence that the skin color of the face in the face image belongs to a target skin color, the skin color confidence reflecting the probability that the skin color of the face in the face image is the target skin color.

According to a third aspect of the embodiments of the present application, a skin color recognition apparatus is provided, comprising: a processor; and a memory storing instructions executable by the processor; wherein the processor is configured to implement the following steps when executing the executable instructions: acquiring a face image; determining a target color gamut difference of each pixel in the face image, the target color gamut difference of each pixel being the difference between the intensity values of two specified color components of that pixel; and determining, according to the target color gamut differences of all the pixels in the face image, a skin color confidence that the skin color of the face in the face image belongs to a target skin color, the skin color confidence reflecting the probability that the skin color of the face in the face image is the target skin color.

According to a fourth aspect of the embodiments of the present application, a computer-readable storage medium is provided, the readable storage medium storing instructions that, when run on a processing component, cause the processing component to perform the skin color recognition method of the first aspect or any optional manner of the first aspect.
The beneficial effects brought by the technical solutions provided in this application are as follows:

With the skin color recognition method and apparatus and the storage medium provided by the embodiments of the present application, after acquiring a face image, the terminal determines the target color gamut difference of each pixel in the face image and, according to the target color gamut differences of all the pixels, determines the skin color confidence that the skin color of the face in the image belongs to the target skin color. Since the target color gamut difference eliminates the brightness factor of the face image and avoids the influence of illumination on it, the problem of the terminal identifying the target skin color with low accuracy is solved, and the accuracy with which the terminal identifies the target skin color is improved.

It should be understood that the above general description and the following detailed description are exemplary only and do not limit this application.
Brief Description of the Drawings

To describe the technical solutions in the embodiments of this application more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of a skin color recognition method provided by an embodiment of the present application;

FIG. 2 is a flowchart of another skin color recognition method provided by an embodiment of the present application;

FIG. 3 is a flowchart of a method for determining the target color gamut difference of each pixel in a face image provided by an embodiment of the present application;

FIG. 4 is a schematic diagram of a target image region provided by an embodiment of the present application;

FIG. 5 is a schematic diagram of another target image region provided by an embodiment of the present application;

FIG. 6 is a flowchart of a method for determining the target color gamut difference of each pixel in a target image region provided by an embodiment of the present application;

FIG. 7 is a flowchart of a method for determining the skin color confidence that the skin color of a face in a face image belongs to a target skin color provided by an embodiment of the present application;

FIG. 8 is a flowchart of another method for determining the skin color confidence that the skin color of a face in a face image belongs to a target skin color provided by an embodiment of the present application;

FIG. 9 is a graph of the relationship between the skin color confidence and the target color gamut difference of a face image provided by an embodiment of the present application;

FIG. 10 is a block diagram of a skin color recognition apparatus provided by an embodiment of the present application;

FIG. 11 is a block diagram of a first determining module provided by an embodiment of the present application;

FIG. 12 is a block diagram of a second determining module provided by an embodiment of the present application;

FIG. 13 is a block diagram of another first determining module provided by an embodiment of the present application;

FIG. 14 is a block diagram of a skin color recognition apparatus provided by an embodiment of the present application.
The drawings herein are incorporated into and constitute a part of this specification; they show embodiments consistent with this application and, together with the specification, serve to explain the principles of this application.
Detailed Description

To make the objectives, technical solutions, and advantages of this application clearer, this application is further described in detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.

Before describing in detail the methods provided by the embodiments of this application, the YUV color mode and the RGB color mode involved in the embodiments are introduced first.
The YUV color mode is a color encoding mode adopted by European television systems; it is the color mode used by the PAL (Phase Alternating Line) and SECAM (French: Séquentiel Couleur à Mémoire) analog color television standards. In modern color television systems, a three-tube color camera or a charge-coupled device (CCD) camera is usually used to capture images; the captured color image signals are color-separated, amplified, and corrected to obtain an image in RGB color mode, which is then processed by a matrix conversion circuit to obtain a luminance signal Y, a color difference signal B-Y, and a color difference signal R-Y. The luminance signal Y, the color difference signal B-Y, and the color difference signal R-Y are then encoded separately, and the encoded luminance signal Y and encoded color difference signals B-Y and R-Y are finally sent over the same channel. The color difference signal B-Y is the signal of the blue chrominance component U, and the color difference signal R-Y is the signal of the red chrominance component V. In an image in YUV color mode, the luminance signal Y and the chrominance component signals (for example, the signals of the blue chrominance component U and the red chrominance component V) are separate.
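To illustrate the separation of luminance and chrominance described above, here is a sketch of the widely used full-range BT.601 (JPEG-style) conversion from RGB to Y, U (Cb), and V (Cr); the exact coefficients vary between standards, so these values are illustrative rather than taken from the patent text:

```python
def rgb_to_yuv(r, g, b):
    """Full-range BT.601 (JPEG-style) RGB -> YUV conversion.

    Y carries the luminance; U and V are the chrominance components
    offset by 128, so a neutral gray maps to U = V = 128.
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b   # scaled (B - Y)
    v = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b   # scaled (R - Y)
    return y, u, v
```

Because the chrominance components U and V depend only on color differences, uniformly brightening a pixel mainly changes Y, which is why chrominance-based features are less sensitive to illumination.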
During a video call or live stream, the captured face images often need to be processed with a skincare scheme to remove small blemishes on the face. For face images of different skin colors, processing with the same skincare scheme produces different results; in particular, the black skin color and the yellow skin color differ greatly, and if the same scheme is applied to both black-skin and yellow-skin face images, it may be difficult to achieve a good skin beautification effect. Therefore, face images of different skin colors are usually processed with different skincare schemes, and before a skincare scheme is applied to a face image, the skin color of the face in the image needs to be recognized first.

The skin color recognition method provided by the embodiments of this application can recognize a target skin color. The method can be executed by a terminal, which may be a smartphone, a tablet computer, a smart TV, a smart watch, a laptop computer, a desktop computer, or the like.
Please refer to FIG. 1, which shows a flowchart of a skin color recognition method provided by an embodiment of the present application; the method can be executed by a terminal. Referring to FIG. 1, the skin color recognition method includes:

Step 101: The terminal acquires a face image.

Step 102: The terminal determines a target color gamut difference of each pixel in the face image, the target color gamut difference of each pixel being the difference between the intensity values of two specified color components of that pixel.

Step 103: The terminal determines, according to the target color gamut differences of all the pixels in the face image, a skin color confidence that the skin color of the face in the face image belongs to a target skin color, the skin color confidence reflecting the probability that the skin color in the face image is the target skin color.
To sum up, in the skin color recognition method provided by the embodiments of the present application, after acquiring a face image the terminal determines the target color gamut difference of each pixel in the image and, according to the target color gamut differences of all the pixels, determines the skin color confidence that the skin color of the face belongs to the target skin color. Since the target color gamut difference eliminates the brightness factor of the face image and avoids the influence of illumination on it, the problem of the terminal identifying the target skin color with low accuracy is solved, and the accuracy with which the terminal identifies the target skin color is improved.
Please refer to FIG. 2, which shows a flowchart of another skin color recognition method provided by an embodiment of the present application; the method can be executed by a terminal. Referring to FIG. 2, the skin color recognition method includes:

Step 201: The terminal acquires a face image.

In the embodiments of this application, the terminal may acquire the face image during a live stream or during a video call, or it may determine the face image from videos or pictures stored on the terminal itself, which is not limited here. A face image refers to an image of a human face.

Optionally, the terminal is provided with a camera through which it acquires the face image. The face image may be a face image of the target skin color, which may be the skin color recognized by the skin color recognition method provided by the embodiments of this application; for example, the target skin color is the black skin color. It should be noted that the embodiments of this application take the camera built into the terminal as an example; in practice, the camera may also be an independent camera, which is not limited here.
Step 202: The terminal determines a target color gamut difference of each pixel in the face image, the target color gamut difference of each pixel being the difference between the intensity values of two specified color components of that pixel.

In the embodiments of this application, the terminal may determine the target color gamut difference of every pixel in the face image, or it may select a target image region from the face image and determine the target color gamut difference of each pixel in that region. The embodiments of this application take determining the target color gamut difference of each pixel in the target image region as an example. For example, please refer to FIG. 3, which shows a flowchart of a method for determining the target color gamut difference of each pixel in a face image provided by an embodiment of the present application; referring to FIG. 3, the method includes:
Sub-step 2021: The terminal determines a target image region from the face image.

In the embodiments of this application, after acquiring the face image, the terminal may determine a target image region from it. The target image region may be the region of the face image that contains the face, or a region that contains most of the face, that is, most of the face image lies within the target image region.

Optionally, the target image region may be the region enclosed by the face frame; or the target image region may be the quarter region at the center of the region enclosed by the face frame, with the constraint that the target image region contains at least 10 × 10 pixels at the center of the face image. Optionally, when the quarter region at the center of the face-frame region contains more than 10 × 10 pixels, the target image region may be that quarter region; when it contains no more than 10 × 10 pixels, the target image region is a region of 10 × 10 pixels at the center of the face image.
For example, after acquiring face image F, the terminal may determine a target image region Q1 from it. Optionally, as shown in FIG. 4, the target image region Q1 is the region of face image F enclosed by the face frame (not marked in FIG. 4); when the face-frame region contains 40 × 60 pixels, the target image region Q1 contains 40 × 60 pixels. Alternatively, as shown in FIG. 5, the target image region Q1 is the quarter region at the center of the region enclosed by face frame X in face image F; when the region enclosed by face frame X contains 40 × 60 pixels, its central quarter region contains 20 × 30 pixels, so the target image region Q1 contains 20 × 30 pixels.
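The region-selection rule described here can be sketched as follows. This is a simplified illustration that returns only the region size (not its pixel coordinates), under the assumption that the quarter region is obtained by halving each side of the face frame:

```python
def target_region_size(frame_w, frame_h):
    """Size of the target image region: the quarter region at the
    center of the face frame (half the width and half the height),
    falling back to a 10 x 10 region at the image center when the
    quarter region would contain no more than 10 x 10 pixels."""
    w, h = frame_w // 2, frame_h // 2
    if w * h <= 10 * 10:
        return 10, 10
    return w, h
```

A 40 × 60 face frame thus yields a 20 × 30 target region, matching the example above.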
It should be noted that the face frame is the rectangular virtual frame displayed when the terminal detects a face using face detection technology, which may be the Viola-Jones face detector or a deep-learning-based face detector; advances in face detection make it possible to detect faces quickly and reliably against complex backgrounds. In the embodiments of this application, the implementation of the terminal's face detection can be found in the related art and is not described again here.
Sub-step 2022: The terminal determines the target color gamut difference of each pixel in the target image region.

After determining the target image region from the face image, the terminal may determine the target color gamut difference of each pixel in it, that is, the difference between the intensity values of the two specified color components of each pixel. In the embodiments of this application, the terminal may use different methods depending on the color mode of the face image. The RGB color mode obtains a wide variety of colors through variations of, and superposition among, the three color channels of red light, green light, and blue light, so the color components of a face image in RGB color mode include a red color component, a green color component, and a blue color component. Thus, when the face image is in RGB color mode, the terminal can directly subtract the intensity values of the two specified color components of each pixel to obtain its target color gamut difference; when the face image is in YUV color mode, the terminal can calculate the target color gamut difference of each pixel from the chrominance values of its chrominance components. In the embodiments of the invention, the terminal's determination of the target color gamut difference of each pixel in the target image region may include the following two aspects:
First aspect: the terminal subtracts the intensity values of the two specified color components of each pixel to obtain its target color gamut difference.

After acquiring the face image, the terminal may determine the intensity values of at least two color components of each pixel; the at least two color components may include the two specified color components, and the terminal may subtract their intensity values to obtain the difference, which is the target color gamut difference of that pixel. In the embodiments of this application, the target skin color may be the black skin color, and the two specified color components may include the red color component R and the green color component G. The process by which the terminal determines the intensity values of at least two color components of each pixel can be found in the related art and is not described again here.

Optionally, after determining the target image region Q1 from face image F, the terminal may determine the intensity value of the red color component R and the intensity value of the green color component G of each pixel in Q1, and subtract the intensity value of the green color component G from that of the red color component R to obtain the target color gamut difference of each pixel. For example, assuming that the intensity value of the red color component R of pixel 1 in Q1 is 200 and the intensity value of its green color component G is 60, the terminal subtracts the two and obtains a target color gamut difference of 140 for pixel 1.
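In RGB mode the computation is a single subtraction per pixel; a minimal sketch:

```python
def gamut_diff_rgb(red_intensity, green_intensity):
    """Target color gamut difference of a pixel in RGB mode: the
    intensity of the red component minus that of the green component
    (the two specified components for the black target skin color)."""
    return red_intensity - green_intensity
```

For the pixel above, gamut_diff_rgb(200, 60) gives 140.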
Second aspect: the terminal calculates the target color gamut difference of each pixel from the chrominance values of its chrominance components.

Please refer to FIG. 6, which shows a flowchart of a method for determining the target color gamut difference of each pixel in a target image region provided by an embodiment of the present application; referring to FIG. 6, the method includes:

Sub-step 20221: The terminal determines the chrominance value of each chrominance component of each pixel.

The terminal may determine the chrominance value of each chrominance component of each pixel in the target image region. The chrominance components of each pixel may include the blue chrominance component U and the red chrominance component V, so the terminal may determine the blue chrominance component U and the red chrominance component V of each pixel. For example, the terminal determines the chrominance value of the blue chrominance component U and the chrominance value of the red chrominance component V of each pixel in target image region Q. The process by which the terminal determines these chrominance values can be found in the related art and is not described again here.
Sub-step 20222: The terminal determines the target color gamut difference of each pixel according to the chrominance values of its chrominance components.

Optionally, the terminal determines the target color gamut difference of each pixel from its blue chrominance component U and red chrominance component V. Specifically, the terminal calculates the target color gamut difference of each pixel with a color gamut difference formula, based on the chrominance value of its blue chrominance component U and the chrominance value of its red chrominance component V;

the color gamut difference formula may be C = a × (U − 128) + b × (V − 128), where C represents the target color gamut difference of each pixel, U represents the blue chrominance component of each pixel, V represents the red chrominance component of each pixel, and a and b are constants.
In the embodiments of this application, the values of a and b can be set according to the actual situation. In practice, they may be parameter values of BT.601 (Studio encoding parameters of digital television for standard 4:3 and wide screen 16:9 aspect ratios), namely a = 0.344 with b = 2.116, or a = 0.392 with b = 2.409; or they may be parameter values of BT.709 (Parameter values for the HDTV standards for production and international programme exchange), namely a = 0.1873 with b = 2.0429, or a = 0.2132 with b = 2.3256. The color gamut difference formula may therefore be any one of the following:

(1) When a = 0.344 and b = 2.116, the color gamut difference formula is:

C = 0.344 × (U − 128) + 2.116 × (V − 128);

(2) When a = 0.392 and b = 2.409, the color gamut difference formula is:

C = 0.392 × (U − 128) + 2.409 × (V − 128);

(3) When a = 0.1873 and b = 2.0429, the color gamut difference formula is:

C = 0.1873 × (U − 128) + 2.0429 × (V − 128);

(4) When a = 0.2132 and b = 2.3256, the color gamut difference formula is:

C = 0.2132 × (U − 128) + 2.3256 × (V − 128).
In practice, when calculating the target color gamut difference of each pixel from the chrominance values of its chrominance components, which of the above four formulas is used can be determined according to the terminal's operating system. For example, a terminal running the Android system may use formula (1) or (3) above, while a terminal running iOS (Apple's mobile operating system) may use any one of the four formulas.
For example, assuming that the chrominance value of the blue chrominance component of pixel 1 in target image region Q1 is 150 and the chrominance value of its red chrominance component is 191, the terminal uses formula (1) above to obtain a target color gamut difference of about 140.88 for pixel 1.
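The four coefficient pairs can be collected in one small helper; the variant names are illustrative labels, not taken from the patent text. Note that formula (1) applied to the example pixel (U = 150, V = 191) gives 0.344 × 22 + 2.116 × 63 ≈ 140.88:

```python
# (a, b) pairs from formulas (1)-(4).
GAMUT_COEFFS = {
    "bt601_1": (0.344, 2.116),
    "bt601_2": (0.392, 2.409),
    "bt709_1": (0.1873, 2.0429),
    "bt709_2": (0.2132, 2.3256),
}

def gamut_diff_yuv(u, v, variant="bt601_1"):
    """Target color gamut difference C = a*(U-128) + b*(V-128) for
    the chosen coefficient pair."""
    a, b = GAMUT_COEFFS[variant]
    return a * (u - 128) + b * (v - 128)
```

For a neutral-gray pixel (U = V = 128) every variant returns 0, reflecting that the measure depends only on chrominance, not brightness.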
It should be noted that the embodiments of this application take as an example the case in which the terminal selects a target image region from the face image and determines the target color gamut difference of each pixel in that region. In practice, the terminal may also determine the target color gamut difference of each pixel in the whole face image; the implementation is the same as or similar to that for the target image region and is not described again here. Since the target image region contains fewer pixels than the whole face image, determining the target color gamut difference of each pixel in the target image region requires less computation; the embodiments of this application thereby reduce the amount of computation.
Step 203: The terminal determines, according to the target color gamut differences of all the pixels in the face image, a skin color confidence that the skin color of the face in the face image belongs to the target skin color, the skin color confidence reflecting the probability that the skin color of the face in the face image is the target skin color.

After determining the target color gamut difference of each pixel in the face image, the terminal may determine, from the target color gamut differences of all the pixels, the skin color confidence that the skin color of the face belongs to the target skin color; this skin color confidence reflects the probability, that is, the likelihood, that the skin color of the face in the image is the target skin color.
Please refer to FIG. 7, which shows a flowchart of a method for determining the skin color confidence that the skin color of a face in a face image belongs to the target skin color provided by an embodiment of the present application; referring to FIG. 7, the method includes:

Step 2031A: The terminal determines the color confidence of each pixel according to its target color gamut difference.

In the embodiments of this application, the terminal may determine the color confidence of each pixel from its target color gamut difference. Optionally, the terminal calculates the color confidence of each pixel with a color confidence formula based on its target color gamut difference;

the color confidence formula is:

P = 0, when C < C min;
P = (C − C min) / (C max − C min), when C min ≤ C ≤ C max;
P = 1, when C > C max;

where P represents the color confidence of each pixel, C represents the target color gamut difference of each pixel, C min represents the minimum value of the target color gamut difference, and C max represents the maximum value of the target color gamut difference.
The terminal may determine the relationship between the target color gamut difference of each pixel and C min and C max and, according to that relationship, substitute the target color gamut difference of each pixel into the above color confidence formula to obtain its color confidence. For example, the terminal substitutes the target color gamut difference of pixel 1 into the formula. Assuming the target color gamut difference C of pixel 1 is 140, the minimum value C min of the target color gamut difference is 80, and the maximum value C max is 160, the terminal determines from the formula that the color confidence of pixel 1 is 0.75.
It should be noted that, in the embodiments of this application, the minimum value C min and the maximum value C max of the target color gamut difference can be obtained as follows: the target color gamut differences of the pixels in a large number of face images of the target skin color are collected, and a scattergram is drawn from the statistics, each scatter point representing one target color gamut difference. The distribution of the target color gamut differences of the pixels in face images of the target skin color is determined from the scattergram; the interval in which the scatter points are densely distributed is taken as the value range of the target color gamut difference, the minimum of that range being C min and the maximum being C max. This process may be carried out manually or by the terminal; when it is carried out manually, after C min and C max are determined, they can be stored in the terminal.
可选地，目标肤色可以为黑色肤色，可以统计大量的黑色肤色的人脸图像中的像素的目标色域差，根据黑色肤色的人脸图像中的像素的目标色域差绘制散点图S，该散点图S中的每个点表示一个目标色域差，根据该散点图S确定黑色肤色的人脸图像中的像素的目标色域差的分布情况，将散点图S上散点分布密集的区间作为该目标色域差的取值范围，该取值范围的最小值即为目标色域差的最小值C_min，最大值即为目标色域差的最大值C_max。示例地，假设散点图S上散点分布密集的目标色域差的区间为[80,160]，则目标色域差的最小值C_min为80，目标色域差的最大值C_max为160。
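由样本统计确定C_min与C_max的过程可以用分位数近似"散点分布密集的区间"，以下为一个示意实现(以5%和95%分位数代替人工观察散点图，分位数阈值为本文假设，并非本申请限定):

```python
def estimate_range(samples, lo_q=0.05, hi_q=0.95):
    """对目标肤色样本像素的目标色域差取低/高分位数，
    近似散点分布密集的区间 [C_min, C_max]。"""
    s = sorted(samples)
    n = len(s)
    lo = s[round(lo_q * (n - 1))]
    hi = s[round(hi_q * (n - 1))]
    return lo, hi


# 示意：对一组样本色域差估计取值范围
print(estimate_range(list(range(201))))  # (10, 190)
```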
需要说明的是,当人脸图像为YUV色彩模式的人脸图像时,在对大量的目标肤色的人脸图像中的像素的目标色域差进行统计之前,可以先采用目标色域差公式计算得到人脸图像中的像素的目标色域差,且在计算人脸图像中的像素的目标色域差的过程中,采用的色域差公式与终端在上述子步骤20222中根据每个像素的各个色度分量的色度值,确定每个像素的目标色域差的公式相同。
示例地，若在上述子步骤20222中确定每个像素的目标色域差的色域差公式为C=0.344×(U-128)+2.116×(V-128)，则在该步骤203中，确定目标色域差的最大值C_max和目标色域差的最小值C_min时采用的色域差公式也为C=0.344×(U-128)+2.116×(V-128)。
步骤2032A、终端根据人脸图像中的所有像素的颜色置信度,确定人脸图像中的人脸的肤色属于目标肤色的肤色置信度。
终端在确定人脸图像中的每个像素的颜色置信度之后,可以根据人脸图像中的所有像素的颜色置信度,确定人脸图像中的人脸的肤色属于目标肤色的肤色置信度。可选地,终端可以将人脸图像中的所有像素的颜色置信度进行平均,得到人脸图像中的所有像素的颜色置信度的平均颜色置信度,将该平均颜色置信度确定为人脸图像中的人脸的肤色属于目标肤色的肤色置信度。
假设人脸图像中共有n个像素，该n个像素的颜色置信度分别为:P_1、P_2、P_3、P_4、P_5……P_n，则终端可以对P_1、P_2、P_3、P_4、P_5……P_n进行平均得到平均颜色置信度，将该平均颜色置信度确定为人脸图像中的人脸的肤色属于目标肤色的肤色置信度，该平均颜色置信度可以为:
(P_1+P_2+P_3+…+P_n)/n。
示例地,假设目标肤色为黑色肤色,终端确定人脸图像F中的n个像素的颜色置信度的平均值为0.7,则终端确定人脸图像F中的人脸的肤色属于黑色肤色的肤色置信度为0.7。
需要说明的是,本申请实施例是以终端将人脸图像中的所有像素的颜色置信度进行平均得到人脸图像中的所有像素的颜色置信度的平均颜色置信度,并将该平均颜色置信度确定为人脸图像中的人脸的肤色属于目标肤色的肤色置信度为例进行说明,实际应用中,终端可以对人脸图像中的所有像素的颜色置信度进行加权运算,得到人脸图像中的所有像素的颜色置信度的加权值,并将该加权值确定为人脸图像中的人脸的肤色属于目标肤色的肤色置信度,当然,终端还可以采用其他方法确定人脸图像中的人脸的肤色属于目标肤色的肤色置信度,本申请实施例在此不再赘述。
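平均与加权两种聚合方式可以示意如下(加权权重的具体来源文中未限定，这里以调用方传入权重、缺省取简单平均作为示例假设):

```python
def skin_confidence(pixel_confidences, weights=None):
    """将人脸图像中所有像素的颜色置信度聚合为肤色置信度。
    weights 为 None 时取简单平均；否则按给定权重做加权平均。"""
    if weights is None:
        return sum(pixel_confidences) / len(pixel_confidences)
    return sum(p * w for p, w in zip(pixel_confidences, weights)) / sum(weights)


print(skin_confidence([0.6, 0.7, 0.8]))  # ≈ 0.7
```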
步骤204、终端根据人脸图像中的人脸的肤色属于目标肤色的肤色置信度,对人脸图像进行美肤。
终端在确定人脸图像中的人脸的肤色属于目标肤色的肤色置信度之后,可以根据该肤色置信度对人脸图像进行美肤。可选地,终端可以存储针对目标肤色的美肤方案和针对非目标肤色的美肤方案,且每种美肤方案包括美肤参数,终端可以根据肤色置信度对针对目标肤色的美肤方案的美肤参数和针对非目标肤色的美肤方案的美肤参数进行处理,根据处理结果得到针对人脸图像的美肤参数,并根据该针对人脸图像的美肤参数对人脸图像进行美肤。
可选地,假设人脸图像中的人脸的肤色属于目标肤色的肤色置信度为P,终端存储的针对目标肤色的美肤方案的美肤参数为e,针对非目标肤色的美肤方案的美肤参数为f,则终端可以将S=e×P+f×(1-P)确定为针对人脸图像的美肤参数,并根据该美肤参数对人脸图像进行美肤。需要说明的是,实际应用中,美肤方案包括多种美肤参数,在本申请实施例中,美肤参数e表示针对目标肤色的美肤方案中的所有美肤参数,美肤参数f表示针对非目标肤色的美肤方案中的所有美肤参数。
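上述美肤参数的融合S=e×P+f×(1-P)可以示意如下(实际美肤方案包含多种美肤参数，这里以单个数值代表一组参数，属简化示例):

```python
def blend_beauty_params(p, e, f):
    """按肤色置信度 P 在目标肤色美肤参数 e 与非目标肤色美肤参数 f 之间线性插值，
    得到针对当前人脸图像的美肤参数 S = e×P + f×(1-P)。"""
    return e * p + f * (1.0 - p)


print(blend_beauty_params(0.7, 0.8, 0.2))  # ≈ 0.62
```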
需要说明的是,本申请实施例是以根据人脸图像中的所有像素的颜色置信度,确定人脸图像中的人脸的肤色属于目标肤色的肤色置信度为例进行说明的,实际应用中,终端还可以根据人脸图像的目标色域差,确定人脸图像中的人脸的肤色属于目标肤色的肤色置信度。具体地,请参考图8,其示出了本申请实施例提供的另一种确定人脸图像中的人脸的肤色属于目标肤色的肤色置信度的方法流程图,参见图8,该方法包括:
子步骤2031B、终端根据人脸图像中的所有像素的目标色域差,确定人脸图像的目标色域差。
在本申请实施例中,终端可以根据人脸图像中的所有像素的目标色域差,确定人脸图像的目标色域差。可选地,终端可以将人脸图像中的所有像素的目标色域差进行平均,得到人脸图像中的所有像素的目标色域差的平均目标色域差,将该平均目标色域差确定为人脸图像的目标色域差。
假设人脸图像中共有n个像素，该n个像素的目标色域差分别为:C_1、C_2、C_3、C_4、C_5……C_n，则终端可以对C_1、C_2、C_3、C_4、C_5……C_n进行平均，得到平均目标色域差，将该平均目标色域差确定为人脸图像的目标色域差，该平均目标色域差可以为:
(C_1+C_2+C_3+…+C_n)/n。
示例地,终端确定人脸图像F中的n个像素的目标色域差的平均值为140,因此,终端确定人脸图像F的目标色域差为140。
需要说明的是,本申请实施例是以终端将人脸图像中的所有像素的目标色域差进行平均得到所有像素的目标色域差的平均目标色域差,并将该平均目标色域差确定为人脸图像的目标色域差为例进行说明,实际应用中,终端可以对人脸图像中的所有像素的目标色域差进行加权运算,得到人脸图像中的所有像素的目标色域差的加权值,并将该加权值确定为人脸图像的目标色域差,当然,终端还可以采用其他方法确定人脸图像的目标色域差,本申请实施例在此不再赘述。
子步骤2032B、终端根据人脸图像的目标色域差,确定人脸图像中的人脸的肤色属于目标肤色的肤色置信度。
可选地,终端可以根据人脸图像的目标色域差,采用肤色置信度公式计算得到人脸图像中的人脸的肤色属于目标肤色的肤色置信度;
其中,肤色置信度公式可以为:
P=0，当C<C_min时；
P=(C-C_min)/(C_max-C_min)，当C_min≤C≤C_max时；
P=1，当C>C_max时。
P表示肤色置信度，C表示人脸图像的目标色域差，C_min表示目标色域差的最小值，C_max表示目标色域差的最大值。其中，C_min和C_max的确定过程可以参考上述子步骤2031A，本申请实施例在此不再赘述。需要说明的是，在本申请实施例中，在一定范围内，人脸图像的目标色域差C可以与肤色置信度P呈线性关系，例如，如图9所示，在C_min与C_max之间，人脸图像的目标色域差C可以与肤色置信度P呈线性关系。
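图8所示的路径(先求人脸图像的目标色域差，再求肤色置信度)可以整体示意如下(系数沿用公式(1)，C_min=80、C_max=160为文中示例取值，均仅作演示):

```python
def skin_confidence_from_pixels(uv_pixels, a=0.344, b=2.116,
                                c_min=80.0, c_max=160.0):
    """子步骤2031B/2032B 的示意：先对所有像素的目标色域差取平均，
    得到人脸图像的目标色域差 C，再代入分段线性的肤色置信度公式。"""
    diffs = [a * (u - 128) + b * (v - 128) for u, v in uv_pixels]
    c = sum(diffs) / len(diffs)  # 人脸图像的目标色域差（平均值）
    if c < c_min:
        return 0.0
    if c > c_max:
        return 1.0
    return (c - c_min) / (c_max - c_min)
```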
综上所述,本申请实施例提供的肤色识别方法,终端获取人脸图像后,确定人脸图像中的每个像素的目标色域差,根据人脸图像中的所有像素的目标色域差,确定人脸图像中的人脸的肤色属于目标肤色的肤色置信度。由于目标色域差可以消除人脸图像的亮度因素,避免光照对人脸图像产生影响,因此,可以解决终端确定目标肤色的准确性较低的问题,提高终端确定目标肤色的准确性。
下述为本申请装置实施例,可以用于执行本申请方法实施例。对于本申请装置实施例中未披露的细节,请参照本申请方法实施例。
图10是根据一示例性实施例示出的一种肤色识别装置100的框图,该肤色识别装置100可以通过软件、硬件或者两者的结合实现成为终端的部分或者全部,该终端可以为智能手机、平板电脑、智能电视、智能手表、膝上型便携计算机、台式计算机等等,参见图10,该肤色识别装置100可以包括:
获取模块110,用于获取人脸图像。
第一确定模块120,用于确定人脸图像中的每个像素的目标色域差,每个像素的目标色域差为每个像素中指定的两个颜色分量的强度值之差。
第二确定模块130,用于根据人脸图像中的所有像素的目标色域差,确定人脸图像中的人脸的肤色属于目标肤色的肤色置信度,肤色置信度反映人脸图像中的人脸的肤色为目标肤色的概率。
综上所述,本申请实施例提供的肤色识别装置,终端获取人脸图像后,确定人脸图像中的每个像素的目标色域差,根据人脸图像中的所有像素的目标色域差,确定人脸图像中的人脸的肤色属于目标肤色的肤色置信度。由于目标色域差可以消除人脸图像的亮度因素,避免光照对人脸图像产生影响,因此,可以解决终端确定目标肤色的准确性较低的问题,提高终端确定目标肤色的准确性。
可选地,请参考图11,其示出了本申请实施例提供的一种第一确定模块120的框图,参见图11,该第一确定模块120包括:
第一确定子模块121,用于确定每个像素的各个色度分量的色度值。
第二确定子模块122,用于根据每个像素的各个色度分量的色度值,确定每个像素的目标色域差。
可选地，第二确定子模块122，用于根据每个像素的蓝色色度分量的色度值和红色色度分量的色度值，采用色域差公式计算得到每个像素的目标色域差；
其中,色域差公式为C=a×(U-128)+b×(V-128),C表示每个像素的目标色域差,U表示每个像素的蓝色色度分量,V表示每个像素的红色色度分量,a和b均为常数。
可选地,请参考图12,其示出了本申请实施例提供的一种第二确定模块130的框图,参见图12,该第二确定模块130包括:
第三确定子模块131,用于根据每个像素的目标色域差,确定每个像素的颜色置信度。
第四确定子模块132,用于根据人脸图像中的所有像素的颜色置信度,确定人脸图像中的人脸的肤色属于目标肤色的肤色置信度。
可选地,第四确定子模块132,用于根据每个像素的目标色域差,采用颜色置信度公式计算得到每个像素的颜色置信度;
其中,颜色置信度公式为:
P=0，当C<C_min时；
P=(C-C_min)/(C_max-C_min)，当C_min≤C≤C_max时；
P=1，当C>C_max时。
P表示每个像素的颜色置信度，C表示每个像素的目标色域差，C_min表示目标色域差的最小值，C_max表示目标色域差的最大值。
可选地,请参考图13,其示出了本申请实施例提供的另一种第一确定模块120的框图,参见图13,该第一确定模块120包括:
第五确定子模块123,用于从人脸图像中确定目标图像区域。
第六确定子模块124,用于确定目标图像区域中的每个像素的目标色域差。
可选地,指定的两个颜色分量包括红色颜色分量和绿色颜色分量,目标肤色为黑色肤色。
综上所述,本申请实施例提供的肤色识别装置,终端获取人脸图像后,确定人脸图像中的每个像素的目标色域差,根据人脸图像中的所有像素的目标色域差,确定人脸图像中的人脸的肤色属于目标肤色的肤色置信度。由于目标色域差可以消除人脸图像的亮度因素,避免光照对人脸图像产生影响,因此,可以解决终端确定目标肤色的准确性较低的问题,提高终端确定目标肤色的准确性。
关于上述实施例中的装置,其中各个模块执行操作的方式已经在有关该方法的实施例中进行了详细描述,此处将不做详细阐述说明。
图14是根据一示例性实施例示出的一种肤色识别装置200的框图。例如,装置200可以是移动电话、计算机、数字广播终端、消息收发设备、游戏控制台、平板设备、医疗设备、健身设备、个人数字助理等。
参照图14,装置200可以包括以下一个或多个组件:处理组件202、存储器204、电源组件206、多媒体组件208、音频组件210、输入/输出(I/O)接口212、传感器组件214以及通信组件216。
处理组件202通常控制装置200的整体操作，诸如与显示、电话呼叫、数据通信、定位、相机操作和记录操作相关联的操作。处理组件202可以包括一个或多个处理器220来执行指令，以完成上述肤色识别方法的全部或部分步骤。此外，处理组件202可以包括一个或多个模块，便于处理组件202和其他组件之间的交互。例如，处理组件202可以包括多媒体模块，以方便多媒体组件208和处理组件202之间的交互。
存储器204被配置为存储各种类型的数据以支持在装置200上的操作。这些数据的示例包括用于在装置200上操作的任何应用或方法的指令、联系人数据、电话簿数据、消息、图片、视频等。存储器204可以由任何类型的易失性或非易失性存储设备或者它们的组合实现，如静态随机存取存储器(英文:Static Random Access Memory;简称:SRAM)、电可擦除可编程只读存储器(英文:Electrically Erasable Programmable Read-Only Memory;简称:EEPROM)、可擦除可编程只读存储器(英文:Erasable Programmable Read Only Memory;简称:EPROM)、可编程只读存储器(英文:Programmable Read Only Memory;简称:PROM)、只读存储器(英文:Read-Only Memory;简称:ROM)、磁存储器、快闪存储器、磁盘或光盘。
电源组件206为装置200的各种组件提供电力。电源组件206可以包括电源管理系统、一个或多个电源及其他与为装置200生成、管理和分配电力相关联的组件。
多媒体组件208包括在装置200和用户之间提供一个输出接口的屏幕。在一些实施例中，屏幕可以包括液晶显示器(英文:Liquid Crystal Display;简称:LCD)和触摸面板(英文:Touch Panel;简称:TP)。如果屏幕包括触摸面板，屏幕可以被实现为触摸屏，以接收来自用户的输入信号。触摸面板包括一个或多个触摸传感器以感测触摸、滑动和触摸面板上的手势。触摸传感器可以不仅感测触摸或滑动动作的边界，而且还检测与触摸或滑动操作相关的持续时间和压力。在一些实施例中，多媒体组件208包括一个前置摄像头和/或后置摄像头。当装置200处于操作模式，如拍摄模式或视频模式时，前置摄像头和/或后置摄像头可以接收外部的多媒体数据。每个前置摄像头和后置摄像头可以是一个固定的光学透镜系统，或者具有焦距和光学变焦能力的光学透镜系统。
音频组件210被配置为输出和/或输入音频信号。例如,音频组件210包括一个麦克风(英文:Microphone;简称:MIC),当装置200处于操作模式,如呼叫模式、记录模式和语音识别模式时,麦克风被配置为接收外部音频信号。所接收的音频信号可以被进一步存储在存储器204或经由通信组件216发送。在一些实施例中,音频组件210还包括一个扬声器,用于输出音频信号。
I/O接口212为处理组件202和外围接口模块之间提供接口,上述外围接口模块可以是键盘、点击轮、按钮等。这些按钮可以包括但不限于:主页按钮、音量按钮、启动按钮和锁定按钮。
传感器组件214包括一个或多个传感器，用于为装置200提供各个方面的状态评估。例如，传感器组件214可以检测到装置200的打开/关闭状态和组件的相对定位(例如组件为装置200的显示器和小键盘)，传感器组件214还可以检测装置200或装置200的一个组件的位置改变、用户与装置200接触的存在或不存在、装置200的方位或加速/减速以及装置200的温度变化。传感器组件214可以包括接近传感器，被配置用来在没有任何物理接触时检测附近物体的存在。传感器组件214还可以包括光传感器，如互补金属氧化物半导体(英文:Complementary Metal Oxide Semiconductor;简称:CMOS)或电荷耦合元件(英文:Charge-coupled Device;简称:CCD)图像传感器，用于在成像应用中使用。在一些实施例中，该传感器组件214还可以包括加速度传感器、陀螺仪传感器、磁传感器、压力传感器或温度传感器。
通信组件216被配置为便于装置200和其他设备之间有线或无线方式的通信。装置200可以接入基于通信标准的无线网络，如无线保真(英文:Wireless Fidelity;简称:WiFi)、2G、3G或它们的组合。在一个示例性实施例中，通信组件216经由广播信道接收来自外部广播管理系统的广播信号或广播相关信息。在一个示例性实施例中，通信组件216还包括近场通信(英文:Near Field Communication;简称:NFC)模块，以促进短程通信。例如，NFC模块可基于射频识别(英文:Radio Frequency Identification;简称:RFID)技术、红外数据协会(英文:Infrared Data Association;简称:IrDA)技术、超宽带(英文:Ultra Wideband;简称:UWB)技术、蓝牙(英文:Bluetooth;简称:BT)技术和其他技术来实现。
在示例性实施例中，装置200可以被一个或多个应用专用集成电路(英文:Application Specific Integrated Circuit;简称:ASIC)、数字信号处理器(英文:Digital Signal Processor;简称:DSP)、数字信号处理设备(英文:Digital Signal Processing Device;简称:DSPD)、可编程逻辑器件(英文:Programmable Logic Device;简称:PLD)、现场可编程门阵列(英文:Field-Programmable Gate Array;简称:FPGA)、控制器、微控制器、微处理器或其他电子元件实现，用于执行上述肤色识别方法。
在示例性实施例中,还提供了一种包括指令的非临时性计算机可读存储介质,例如包括指令的存储器204,上述指令可由装置200的处理器220执行以完成上述肤色识别方法。例如,非临时性计算机可读存储介质可以是ROM、随机存取存储器(英文:Random Access Memory;简称:RAM)、激光唱片只读存储器(英文:Compact Disk Read-Only Memory;简称:CD-ROM)、磁带、软盘和光数据存储设备等。
一种非临时性计算机可读存储介质,当存储介质中的指令由装置200的处理器执行时,使得装置200能够执行一种肤色识别方法,该方法包括:
获取人脸图像;
确定人脸图像中的每个像素的目标色域差,每个像素的目标色域差为每个像素中指定的两个颜色分量的强度值之差;
根据人脸图像中的所有像素的目标色域差,确定人脸图像中的人脸的肤色属于目标肤色的肤色置信度,肤色置信度反映人脸图像中的人脸的肤色为目标肤色的概率。
综上所述,本申请实施例提供的肤色识别装置,终端获取人脸图像后,确定人脸图像中的每个像素的目标色域差,根据人脸图像中的所有像素的目标色域差,确定人脸图像中的人脸的肤色属于目标肤色的肤色置信度。由于目标色域差可以消除人脸图像的亮度因素,避免光照对人脸图像产生影响,因此,可以解决终端确定目标肤色的准确性较低的问题,提高终端确定目标肤色的准确性。
本申请实施例还提供了一种肤色识别装置,该肤色识别装置包括:
处理器;
用于存储处理器的可执行指令的存储器;
其中,处理器被配置为:
获取人脸图像;
确定人脸图像中的每个像素的目标色域差,每个像素的目标色域差为每个像素中指定的两个颜色分量的强度值之差;
根据人脸图像中的所有像素的目标色域差,确定人脸图像中的人脸的肤色属于目标肤色的肤色置信度,肤色置信度反映人脸图像中的人脸的肤色为目标肤色的概率。
本申请实施例还提供了一种计算机可读存储介质,该计算机可读存储介质中存储有指令,当该指令在处理组件上运行时,使得处理组件执行图1或图2所示的肤色识别方法。
本领域技术人员在考虑说明书及实践这里公开的发明后,将容易想到本申请的其它实施方案。本申请旨在涵盖本申请的任何变型、用途或者适应性变化,这些变型、用途或者适应性变化遵循本申请的一般性原理并包括本申请未公开的本技术领域中的公知常识或惯用技术手段。说明书和实施例仅被视为示例性的,本申请的真正范围和精神由下面的权利要求指出。
以上所述仅为本申请的较佳实施例,并不用以限制本申请,凡在本申请的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (16)

  1. 一种肤色识别方法,其特征在于,所述方法包括:
    获取人脸图像;
    确定所述人脸图像中的每个像素的目标色域差,所述每个像素的目标色域差为所述每个像素中指定的两个颜色分量的强度值之差;
    根据所述人脸图像中的所有像素的目标色域差,确定所述人脸图像中的人脸的肤色属于目标肤色的肤色置信度,所述肤色置信度反映所述人脸图像中的人脸的肤色为所述目标肤色的概率。
  2. 根据权利要求1所述的方法,其特征在于,所述确定所述人脸图像中的每个像素的目标色域差,包括:
    确定所述每个像素的各个色度分量的色度值;
    根据所述每个像素的各个色度分量的色度值,确定所述每个像素的目标色域差。
  3. 根据权利要求2所述的方法,其特征在于,所述每个像素的色度分量包括蓝色色度分量和红色色度分量,所述根据所述每个像素的各个色度分量的色度值,确定所述每个像素的目标色域差,包括:
    根据所述每个像素的蓝色色度分量的色度值和红色色度分量的色度值,采用色域差公式计算得到所述每个像素的目标色域差;
    其中,所述色域差公式为C=a×(U-128)+b×(V-128),所述C表示所述每个像素的目标色域差,所述U表示所述每个像素的蓝色色度分量,所述V表示所述每个像素的红色色度分量,所述a和所述b均为常数。
  4. 根据权利要求3所述的方法,其特征在于,所述根据所述人脸图像中的所有像素的目标色域差,确定所述人脸图像中的人脸的肤色属于目标肤色的肤色置信度,包括:
    根据所述每个像素的目标色域差,确定所述每个像素的颜色置信度;
    根据所述人脸图像中的所有像素的颜色置信度,确定所述人脸图像中的人脸的肤色属于目标肤色的肤色置信度。
  5. 根据权利要求4所述的方法,其特征在于,所述根据所述每个像素的目标色域差,确定所述每个像素的颜色置信度,包括:
    根据所述每个像素的目标色域差,采用颜色置信度公式计算得到所述每个像素的颜色置信度;
    其中,所述颜色置信度公式为:
    P=0，当C<C_min时；
    P=(C-C_min)/(C_max-C_min)，当C_min≤C≤C_max时；
    P=1，当C>C_max时。
    所述P表示所述每个像素的颜色置信度，所述C表示所述每个像素的目标色域差，所述C_min表示所述目标色域差的最小值，所述C_max表示所述目标色域差的最大值。
  6. 根据权利要求1至5任一所述的方法，其特征在于，所述确定所述人脸图像中的每个像素的目标色域差，包括：
    从所述人脸图像中确定目标图像区域;
    确定所述目标图像区域中的每个像素的目标色域差。
  7. 根据权利要求1至5任一所述的方法,其特征在于,所述指定的两个颜色分量包括红色颜色分量和绿色颜色分量,所述目标肤色为黑色肤色。
  8. 一种肤色识别装置,其特征在于,所述装置包括:
    获取模块,用于获取人脸图像;
    第一确定模块,用于确定所述人脸图像中的每个像素的目标色域差,所述每个像素的目标色域差为所述每个像素中指定的两个颜色分量的强度值之差;
    第二确定模块,用于根据所述人脸图像中的所有像素的目标色域差,确定所述人脸图像中的人脸的肤色属于目标肤色的肤色置信度,所述肤色置信度反映所述人脸图像中的人脸的肤色为所述目标肤色的概率。
  9. 根据权利要求8所述的装置,其特征在于,所述第一确定模块,包括:
    第一确定子模块,用于确定所述每个像素的各个色度分量的色度值;
    第二确定子模块,用于根据所述每个像素的各个色度分量的色度值,确定所述每个像素的目标色域差。
  10. 根据权利要求9所述的装置,其特征在于,所述第二确定子模块,用于根据所述每个像素的蓝色色度分量的色度值和红色色度分量的色度值,采用色域差公式计算得到所述每个像素的目标色域差;
    其中,所述色域差公式为C=a×(U-128)+b×(V-128),所述C表示所述每个像素的目标色域差,所述U表示所述每个像素的蓝色色度分量,所述V表示所述每个像素的红色色度分量,所述a和所述b均为常数。
  11. 根据权利要求10所述的装置,其特征在于,所述第二确定模块,包括:
    第三确定子模块,用于根据所述每个像素的目标色域差,确定所述每个像素的颜色置信度;
    第四确定子模块,用于根据所述人脸图像中的所有像素的颜色置信度,确定所述人脸图像中的人脸的肤色属于目标肤色的肤色置信度。
  12. 根据权利要求11所述的装置,其特征在于,所述第四确定子模块,用于根据所述每个像素的目标色域差,采用颜色置信度公式计算得到所述每个像素的颜色置信度;
    其中,所述颜色置信度公式为:
    P=0，当C<C_min时；
    P=(C-C_min)/(C_max-C_min)，当C_min≤C≤C_max时；
    P=1，当C>C_max时。
    所述P表示所述每个像素的颜色置信度，所述C表示所述每个像素的目标色域差，所述C_min表示所述目标色域差的最小值，所述C_max表示所述目标色域差的最大值。
  13. 根据权利要求8至12任一所述的装置,其特征在于,所述第一确定模块包括:
    第五确定子模块,用于从所述人脸图像中确定目标图像区域;
    第六确定子模块,用于确定所述目标图像区域中的每个像素的目标色域差。
  14. 根据权利要求8至12任一所述的装置,其特征在于,所述指定的两个颜色分量包括红色颜色分量和绿色颜色分量,所述目标肤色为黑色肤色。
  15. 一种肤色识别装置,其特征在于,包括:
    处理器;
    其内存储有所述处理器可执行指令的存储器;
    其中,所述处理器被配置为在执行上述可执行指令时实现以下步骤:
    获取人脸图像;
    确定所述人脸图像中的每个像素的目标色域差,所述每个像素的目标色域差为所述每个像素中指定的两个颜色分量的强度值之差;
    根据所述人脸图像中的所有像素的目标色域差,确定所述人脸图像中的人脸的肤色属于目标肤色的肤色置信度,所述肤色置信度反映所述人脸图像中的人脸的肤色为所述目标肤色的概率。
  16. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中存储有指令,当所述指令在处理组件上运行时,使得所述处理组件执行权利要求1至7任一所述的肤色识别方法。
PCT/CN2018/105103 2017-09-14 2018-09-11 肤色识别方法及装置、存储介质 WO2019052449A1 (zh)
