WO2019100282A1 - Method and device for facial skin color recognition, and smart terminal - Google Patents


Info

Publication number
WO2019100282A1
WO2019100282A1 (application PCT/CN2017/112533, CN2017112533W)
Authority
WO
WIPO (PCT)
Prior art keywords
color
image
skin color
face
avgcr
Prior art date
Application number
PCT/CN2017/112533
Other languages
English (en)
Chinese (zh)
Inventor
Lin Limei (林丽梅)
Original Assignee
Shenzhen H&T Intelligent Control Co., Ltd. (深圳和而泰智能控制股份有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen H&T Intelligent Control Co., Ltd. (深圳和而泰智能控制股份有限公司)
Priority to PCT/CN2017/112533 priority Critical patent/WO2019100282A1/fr
Priority to CN201780009028.3A priority patent/CN108701217A/zh
Publication of WO2019100282A1 publication Critical patent/WO2019100282A1/fr

Classifications

    • G06F 18/22 — Pattern recognition; analysing; matching criteria, e.g. proximity measures
    • G06V 10/56 — Extraction of image or video features relating to colour
    • G06V 10/751 — Image or video pattern matching; comparing pixel values or feature values having positional relevance, e.g. template matching
    • G06V 40/162 — Human face detection, localisation or normalisation using pixel segmentation or colour matching
    • G06V 40/168 — Human face feature extraction; face representation
    • G06V 40/172 — Human face classification, e.g. identification
    • G06V 40/179 — Metadata-assisted face recognition

Definitions

  • The present application relates to the field of face recognition technology and, in particular, to a facial skin color recognition method, apparatus, and smart terminal.
  • Face recognition technology identifies and compares facial visual feature information. Its research areas include identity recognition, expression recognition, gender recognition, nationality recognition, and skin care.
  • Mainstream skin color detection methods include detection based on a fixed skin color distribution and joint detection combining a skin color probability distribution with Bayesian decision-making.
  • However, these methods can only determine which areas of an image belong to skin; they cannot accurately identify the specific color of the facial skin, making it difficult to provide an effective reference for personal image design.
  • the embodiment of the present application provides a method, a device, and a smart terminal for recognizing facial skin color, which can solve the problem of how to accurately identify the specific color of the human face skin.
  • An embodiment of the present application provides a method for recognizing facial skin color, including: acquiring a face image; intercepting the area image to be detected from the face image; and obtaining the mean vector [avgY, avgCr, avgCb] of the area image in the YCrCb color space, where:
  • avgY represents the mean of the area image in the Y color channel;
  • avgCr represents the mean of the area image in the Cr color channel;
  • avgCb represents the mean of the area image in the Cb color channel;
  • and determining the facial skin color of the face image based on the mean vector [avgY, avgCr, avgCb] and a preset skin color template, wherein the skin color template includes a plurality of skin color patches and the facial skin color is one of the plurality of skin color patches.
  • In some embodiments, before the step of acquiring the mean vector [avgY, avgCr, avgCb] of the area image in the YCrCb color space, the method further includes:
  • converting the color space of the area image to the YCrCb color space.
  • In some embodiments, before the step of converting the color space of the area image to the YCrCb color space, the method further includes: eliminating the color shift of the area image.
  • In some embodiments, there are n area images, n being a positive integer, and obtaining the mean vector [avgY, avgCr, avgCb] of the area images in the YCrCb color space includes:
  • obtaining the mean vector [avgY, avgCr, avgCb] of the n area images in the YCrCb color space by the following formula (each channel sum divided by the total area):
    avgY = (sumY_1 + sumY_2 + … + sumY_n) / (S_1 + S_2 + … + S_n)
    avgCr = (sumCr_1 + sumCr_2 + … + sumCr_n) / (S_1 + S_2 + … + S_n)
    avgCb = (sumCb_1 + sumCb_2 + … + sumCb_n) / (S_1 + S_2 + … + S_n)
  • where sumY_i represents the sum of the pixel values of the i-th area image in the Y color channel;
  • sumCr_i represents the sum of the pixel values of the i-th area image in the Cr color channel;
  • sumCb_i represents the sum of the pixel values of the i-th area image in the Cb color channel;
  • S_i represents the area (pixel count) of the i-th area image.
  • In some embodiments, determining the face skin color of the face image according to the mean vector [avgY, avgCr, avgCb] and the preset skin color template includes: acquiring the standard vector [avgY_j, avgCr_j, avgCb_j] of each skin color patch in the preset skin color template, and selecting the skin color patch whose standard vector has the smallest Euclidean distance to the mean vector [avgY, avgCr, avgCb] as the face skin color, where:
  • avgY_j represents the mean value of the jth skin color patch in the skin color template in the Y color channel;
  • avgCr_j represents the mean value of the jth skin color patch in the skin color template in the Cr color channel;
  • avgCb_j represents the mean value of the jth skin color patch in the skin color template in the Cb color channel.
  • the area image includes any one or more of a left cheek area image, a nose area image, and a right cheek area image.
  • an embodiment of the present application provides a face color recognition device, including:
  • a face image obtaining unit configured to acquire a face image
  • An intercepting unit configured to intercept an image of the area to be detected from the face image
  • a data processing unit configured to obtain the mean vector [avgY, avgCr, avgCb] of the region image in the YCrCb color space, wherein avgY represents the mean of the region image in the Y color channel, avgCr represents the mean of the region image in the Cr color channel, and avgCb represents the mean of the region image in the Cb color channel;
  • an analyzing unit configured to determine the face skin color of the face image based on the mean vector [avgY, avgCr, avgCb] and a preset skin color template, wherein the skin color template includes a plurality of skin color patches,
  • and the face skin color is one of the plurality of skin color patches.
  • In some embodiments, there are n area images, n being a positive integer greater than 0, and the data processing unit is specifically configured to:
  • obtain the mean vector [avgY, avgCr, avgCb] of the n region images in the YCrCb color space by the following formula:
    avgY = (sumY_1 + … + sumY_n) / (S_1 + … + S_n), and similarly for avgCr and avgCb,
  • where sumY_i represents the sum of the pixel values of the i-th region image in the Y color channel;
  • sumCr_i represents the sum of the pixel values of the i-th region image in the Cr color channel;
  • sumCb_i represents the sum of the pixel values of the i-th region image in the Cb color channel;
  • S_i represents the area of the i-th region image.
  • the analyzing unit is specifically configured to:
  • avgY_j represents the mean value of the jth skin color patch in the skin color template in the Y color channel
  • avgCr_j represents the mean value of the jth skin color patch in the Cr color channel in the skin color template
  • avgCb_j represents the mean value of the jth skin color patch in the Cb color channel in the skin color template.
  • the area image includes any one or more of a left cheek area image, a nose area image, and a right cheek area image.
  • An embodiment of the present application provides an intelligent terminal, including:
  • at least one processor; and a memory communicably connected to the at least one processor, wherein
  • the memory stores instructions executable by the at least one processor, the instructions, when executed by the at least one processor, enabling the at least one processor to perform the facial skin color recognition method described above.
  • An embodiment of the present application provides a storage medium storing executable instructions which, when executed by a smart terminal, cause the smart terminal to perform the facial skin color recognition method described above.
  • An embodiment of the present application further provides a program product, including a program stored on a storage medium, the program including program instructions which, when executed by a smart terminal, cause the smart terminal to perform the facial skin color recognition method described above.
  • The beneficial effects of the embodiments of the present application are as follows. In the facial skin color recognition method, device, and smart terminal provided herein, when a face image is acquired, the area image to be detected is intercepted from the face image; the color space of the area image is then converted to the YCrCb color space and the mean vector [avgY, avgCr, avgCb] of the area image is acquired; finally, the skin color patch matching the mean vector [avgY, avgCr, avgCb] is selected from a preset skin color template containing a plurality of skin color patches
  • and taken as the face skin color of the face image. The specific color of the facial skin can thus be accurately recognized, which provides an effective reference for personal image design.
  • Moreover, the method determines the face skin color based only on the mean vector [avgY, avgCr, avgCb] and the preset skin color template; it does not require a large amount of training data, and the recognition process is simple and feasible.
  • FIG. 1 is a schematic flowchart diagram of a face color recognition method according to an embodiment of the present application
  • FIG. 2 is a schematic flowchart of a method for obtaining an average vector [avgY, avgCr, avgCb] of a region image in a YCrCb color space according to an embodiment of the present application;
  • FIG. 3 is a diagram showing an example of gray scale of a skin color template according to an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of a method for determining a face color of a face image based on a mean vector [avgY, avgCr, avgCb] and a preset skin color template according to an embodiment of the present application;
  • FIG. 5 is a schematic flowchart diagram of another method for recognizing facial skin color according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a face color recognition device according to an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of hardware of an intelligent terminal according to an embodiment of the present application.
  • Embodiments of the present application provide a method, an apparatus, a smart terminal, and a storage medium for recognizing facial skin color.
  • The facial skin color recognition method is a recognition scheme that matches the face skin color against a preset skin color template: when a face image is acquired, the area image to be detected is intercepted from the face image; the mean vector [avgY, avgCr, avgCb] of the area image in the YCrCb color space is then obtained; finally, the skin color patch in the preset skin color template (which contains a plurality of skin color patches) that matches the mean vector [avgY, avgCr, avgCb] is selected as the face skin color of the face image. The specific color of the facial skin can thus be accurately identified, providing an effective reference for personal image design.
  • The method determines the face skin color based on the mean vector [avgY, avgCr, avgCb] and the preset skin color template; it does not require a large amount of training data, and the recognition process is simple and feasible.
  • the face color recognition method, the smart terminal, and the storage medium provided by the embodiments of the present application can be applied to any technical field related to face recognition, for example, portrait nationality recognition, and the like, and is particularly suitable for the fields of beauty application, personal image design, and the like.
  • For example, a beauty application can be developed based on the inventive concept of the facial skin color recognition method provided by the embodiments of the present application: when the user inputs a face image, the application automatically recognizes the face skin color and, based on it, recommends to the user a suitable foundation color, makeup, accessories, skin care products, and the like.
  • The facial skin color recognition method provided by the embodiments of the present application may be performed by any type of smart terminal having an image processing function, and the smart terminal may include any suitable type of storage medium for storing data, such as a magnetic disk, a compact disc read-only memory (CD-ROM), a read-only memory, or a random access memory.
  • The smart terminal may also include one or more logical computing modules that perform any suitable type of function or operation, such as querying a database or image processing, in a single thread or in multiple threads in parallel.
  • the logic operation module may be any suitable type of electronic circuit or chip-type electronic device capable of performing logical operation operations, such as a single core processor, a multi-core processor, a graphics processing unit (GPU), or the like.
  • The smart terminal may include, but is not limited to, a beauty detection instrument, a personal computer, a tablet computer, a smartphone, a server, and the like.
  • FIG. 1 is a schematic flowchart of a face color recognition method according to an embodiment of the present application. Referring to FIG. 1 , the method includes but is not limited to the following steps:
  • Step 110 Acquire a face image.
  • The "face image" refers to an image including the face of the detected person, from which the facial features of the detected person can be acquired.
  • The face image may be acquired in two main ways: collecting the face image of the detected person in real time, or directly retrieving an existing image containing the detected person's face stored locally on the smart terminal or in the cloud. Different acquisition methods may be chosen for different application scenarios or detected persons. For example, suppose a smart terminal for recommending suitable cosmetics is provided in a cosmetics store; in order to promptly recommend colors of cosmetics such as foundation, concealer, and lipstick based on the user's skin color,
  • the manner of acquiring the face image may be that the face image of the detected person is collected in real time by the camera device.
  • In another scenario, the user wants to design suitable makeup through a personal smart terminal such as a smartphone. Since such a terminal generally stores personal face images, the face image may instead be obtained by directly retrieving an existing image containing the detected person's face from the terminal or from the cloud.
  • The manner of obtaining the face image is not limited to the above description, and it is not specifically limited in the embodiments of the present application.
  • Step 120 Intercept the image of the area to be detected from the face image.
  • The "area image" refers to an image used as the reference area for detecting the skin color of the face; in this embodiment, the skin color presented by the area image represents the skin color of the entire face.
  • The area image may be an image of any one or more areas located within the facial contour of the face image, such as a forehead area image, a nose area image, a left cheek area image, a right cheek area image, a chin area image, and so on. Considering that some areas of the face may contain noise (for example, the forehead area may be covered by bangs, and the chin area may have a beard),
  • the images corresponding to the three relatively "clean" areas of the left cheek, the right cheek, and the nose may be used as the area images to be detected. That is, in this embodiment, the area image includes any one or more of the left cheek area image, the nose area image, and the right cheek area image. When multiple area images are used, the reliability of the recognition result is enhanced.
  • a specific implementation manner of extracting an image of the area to be detected from the face image may be: when acquiring a face image, first performing a face key point on the face image. Positioning, for example, applying a third-party toolkit dlib or face++ to perform face key point positioning on the face image; and then, based on the position of the positioned face key point, the image of the area to be detected is intercepted, for example, based on the positioned The coordinates of the face key point intercept the left cheek region image in the face image, and/or the nose region image, and/or the right cheek region image.
  • Step 130 Acquire an average vector [avgY, avgCr, avgCb] of the area image in the YCrCb color space.
  • Generally, the color space of the acquired face image, and hence of the area image, is the RGB color space.
  • The RGB color space is neither intuitive nor perceptually uniform.
  • Moreover, changes in the illumination environment easily cause the RGB values to change, which in turn introduces a large error into the skin color recognition result. Therefore, in this embodiment, to reduce the influence of the illumination environment on the recognition result, the color presented by the intercepted area image is represented by its mean vector in the YCrCb color space.
  • In the YCrCb color space, the Y color channel characterizes the brightness (gray scale value) of a pixel, obtained by superimposing specific weighted portions of the RGB signal; the Cr and Cb color channels characterize the chromaticity of a pixel, describing its hue and saturation and specifying its color.
  • Cr reflects the difference between the red portion of the RGB input signal and the luminance value Y of the RGB signal.
  • Cb reflects the difference between the blue portion of the RGB input signal and the luminance value Y of the RGB signal, and is used to characterize the saturation of the color.
  • The "mean vector" is composed of the means avgY, avgCr, and avgCb of the Y, Cr, and Cb color channels, respectively, is denoted [avgY, avgCr, avgCb], and represents the color of the skin presented in the area image,
  • where avgY represents the mean of the area image in the Y color channel, avgCr represents the mean of the area image in the Cr color channel, and avgCb represents the mean of the area image in the Cb color channel.
  • The area image may include n images, where n is any positive integer greater than 0. After the n area images to be detected are intercepted, the mean vector may be acquired by the method shown in FIG. 2.
  • the method may include, but is not limited to, the following steps:
  • Step 131 Calculate the sum of the pixel values sumY_i of the Y-color channel of each area image, the sum of the pixel values sumCr_i of the Cr color channel, the sum of the pixel values sumCb_i of the Cb color channel, and the area S_i of each area image.
  • Specifically, Y, Cr, and Cb color channel division is performed on each area image, that is, the pixel points of each area image are divided into the three color channels Y, Cr, and Cb; then the sum of the pixel values of each area image in the Y color channel (sumY_i), the Cr color channel (sumCr_i), and the Cb color channel (sumCb_i) is calculated.
  • sumY_i represents the sum of the pixel values of the i-th area image in the Y color channel
  • sumCr_i represents the sum of the pixel values of the i-th area image in the Cr color channel
  • sumCb_i represents the sum of the pixel values of the i-th area image in the Cb color channel
  • S_i represents the area of the i-th area image.
  • Here, the "area" refers to area in image space, i.e., the total number of pixels of a single color channel in the area image.
  • For example, assume the area images intercepted from the acquired face image include a left cheek area image, a nose area image, and a right cheek area image.
  • Calculate the sums of the pixel values of the left cheek area image in the Y, Cr, and Cb color channels (sumY_1, sumCr_1, and sumCb_1) and its area S_1; calculate the sums of the pixel values of the nose area image in the Y, Cr, and Cb color channels (sumY_2, sumCr_2, and sumCb_2) and its area S_2; and calculate the sums of the pixel values of the right cheek area image in the Y, Cr, and Cb color channels
  • (sumY_3, sumCr_3, and sumCb_3) and its area S_3, thereby obtaining the parameters sumY_1, sumY_2, sumY_3, sumCr_1, sumCr_2, sumCr_3, sumCb_1, sumCb_2, sumCb_3, S_1, S_2, and S_3.
  • Step 132 Acquire an average vector [avgY, avgCr, avgCb] of the n area images in the YCrCb color space.
  • The mean vector [avgY, avgCr, avgCb] of the n area images in the YCrCb color space may be obtained by the following formula (each channel sum divided by the total area):
    avgY = (sumY_1 + sumY_2 + … + sumY_n) / (S_1 + S_2 + … + S_n)
    avgCr = (sumCr_1 + sumCr_2 + … + sumCr_n) / (S_1 + S_2 + … + S_n)
    avgCb = (sumCb_1 + sumCb_2 + … + sumCb_n) / (S_1 + S_2 + … + S_n)
  • For the example above, the mean vector of the three area images can be obtained according to this formula.
  • In practice, the mean vector [avgY, avgCr, avgCb] of the n area images may also be acquired in other ways.
  • For example, first obtain the mean vector [avgY_i, avgCr_i, avgCb_i] of each area image in the YCrCb color space, where avgY_i represents the mean of the i-th area image in the Y color channel, avgCr_i the mean in the Cr color channel, and avgCb_i the mean in the Cb color channel; then take the average (or a weighted average) of the n mean vectors [avgY_i, avgCr_i, avgCb_i] to obtain the mean vector [avgY, avgCr, avgCb].
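The area-weighted computation of steps 131 and 132 can be sketched as follows. The pixel values and region contents here are made up for illustration; each region image is represented as a flat list of (Y, Cr, Cb) pixels.

```python
def mean_vector(region_images):
    """Compute [avgY, avgCr, avgCb] over n region images: each channel's
    pixel-value sum over all regions, divided by the total area S_1 + ... + S_n."""
    sums = [0.0, 0.0, 0.0]  # sumY, sumCr, sumCb accumulated over all regions
    total_area = 0          # S_1 + ... + S_n (total pixel count)
    for region in region_images:
        for y, cr, cb in region:
            sums[0] += y
            sums[1] += cr
            sums[2] += cb
        total_area += len(region)  # S_i: pixel count of the i-th region
    return [s / total_area for s in sums]

# Two toy regions of different sizes: the larger region dominates the mean.
left_cheek = [(150, 140, 110)] * 3
nose = [(160, 150, 120)] * 1
avg = mean_vector([left_cheek, nose])  # area-weighted, not a mean of means
```

Note that weighting by area is what distinguishes this formula from simply averaging the per-region mean vectors, the alternative mentioned above.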
  • Before the step of acquiring the mean vector, the method further includes converting the color space of the area image to the YCrCb color space.
  • Generally, the color space of the acquired face image is the RGB color space, so the color space of the area image intercepted from it is also the RGB color space; the area image is therefore converted to the YCrCb color space according to a conversion algorithm between the RGB and YCrCb color spaces. The specific conversion algorithm is not limited here.
  • If the acquired face image is in another color space, such as the HSV or CMY color space, the color space of the area image may likewise be converted to the YCrCb color space by a corresponding conversion algorithm.
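Since the patent leaves the conversion algorithm open, one common choice is the full-range ITU-R BT.601 mapping (used, for example, by JPEG); a minimal sketch:

```python
def rgb_to_ycrcb(r, g, b):
    """Full-range ITU-R BT.601 RGB -> YCrCb conversion. This is one common
    choice; the patent itself does not mandate a specific conversion algorithm."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b            # luminance
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b  # red-difference chroma
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b  # blue-difference chroma
    return y, cr, cb

# Example: a typical skin-tone RGB pixel (values are illustrative).
y, cr, cb = rgb_to_ycrcb(200, 150, 130)
```

Applying this per pixel to a region image yields the Y, Cr, and Cb channels from which the sums and means of steps 131-132 are computed.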
  • Step 140 Determine a face color of the face image based on the mean vector [avgY, avgCr, avgCb] and a preset skin color template.
  • The "preset skin color template" may be any skin color template commonly used in daily life; the template includes a plurality of skin color patches, each of which represents a face skin color.
  • FIG. 3 shows a grayscale example of a skin color template provided by an embodiment of the present application; this template includes 66 skin color patches.
  • In daily life, a skin color template is generally set in correspondence with foundation color numbers (each skin color patch on the template has a corresponding foundation color number), so that a salesperson can compare the customer's skin color against the template to determine a suitable foundation color number. Therefore, in this embodiment, the mean vector [avgY, avgCr, avgCb] representing the actually acquired face skin color and the preset
  • skin color template are used to determine the face skin color of the face image; that is, the skin color patch that best matches the mean vector [avgY, avgCr, avgCb] is selected from the plurality of skin color patches of the template as the face skin color of the face image. Thus, face skin color recognition can be performed simply by selecting an appropriate skin color template, without training on a large amount of sample data, saving the time and cost of recognition.
  • the face skin color of the face image can be determined based on the mean vector [avgY, avgCr, avgCb] and a preset skin color template by the method as shown in FIG. 4.
  • the method may include but is not limited to the following steps:
  • Step 141 Acquire a standard vector [avgY_j, avgCr_j, avgCb_j] of each skin color patch in the preset skin color template.
  • The "standard vector" refers to the mean vector of a skin color patch in the skin color template; it is the standard against which the skin color of the actually acquired face image is matched, and each skin color patch corresponds to one standard vector.
  • avgY_j represents the mean value of the jth skin color patch in the skin color template in the Y color channel
  • avgCr_j represents the mean value of the jth skin color patch in the skin color template in the Cr color channel
  • avgCb_j represents the mean value of the jth skin color patch in the Cb color channel in the skin color template.
  • The standard vector [avgY_j, avgCr_j, avgCb_j] of each skin color patch in the preset skin color template may be obtained as follows: first convert the color space of each skin color patch to the YCrCb color space, then divide it into the three color channels Y, Cr, and Cb, and obtain the mean value of each channel, thereby calculating the standard vector [avgY_j, avgCr_j, avgCb_j] corresponding to each patch.
  • In practice, the standard vectors [avgY_j, avgCr_j, avgCb_j] corresponding to each skin color patch of a known skin color template may be pre-computed and stored on the smart terminal, so that they can be retrieved directly when performing face skin color recognition, which saves time and reduces data throughput.
  • Step 142 Select, as the face skin color of the face image, the skin color patch in the skin color template whose standard vector [avgY_j, avgCr_j, avgCb_j] has the smallest Euclidean distance to the mean vector [avgY, avgCr, avgCb].
  • The Euclidean distance between the mean vector [avgY, avgCr, avgCb] and a standard vector [avgY_j, avgCr_j, avgCb_j] represents the degree of similarity between the skin color of the actually acquired face image and the corresponding skin color patch in the template: the smaller the Euclidean distance, the greater the similarity.
  • Therefore, the skin color patch whose standard vector [avgY_j, avgCr_j, avgCb_j] has the smallest Euclidean distance to the mean vector [avgY, avgCr, avgCb] is selected from the skin color template as the face skin color of the face image.
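The nearest-patch selection of steps 141-142 can be sketched as follows. The template entries and their channel values below are invented for illustration; a real template (e.g. one keyed to foundation color numbers) would supply its own standard vectors.

```python
import math

def match_patch(mean_vec, template):
    """Return the name of the patch whose standard vector [avgY_j, avgCr_j,
    avgCb_j] has the smallest Euclidean distance to the measured mean vector.

    template: list of (patch_name, [avgY_j, avgCr_j, avgCb_j])."""
    def dist(std_vec):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(mean_vec, std_vec)))
    return min(template, key=lambda patch: dist(patch[1]))[0]

# Hypothetical template patches (values are illustrative, not from the patent).
template = [
    ("fair",   [200.0, 142.0, 115.0]),
    ("medium", [170.0, 148.0, 112.0]),
    ("deep",   [120.0, 152.0, 118.0]),
]
best = match_patch([165.0, 147.0, 113.0], template)
```

Because the Y channel carries brightness, differences in illumination mostly move avgY; this is one reason the second embodiment removes color shift before matching.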
  • In practical applications, the facial skin color recognition method provided by the embodiments of the present application may be extended according to the actual application scenario; for example, after the face skin color is determined, cosmetic color numbers, accessories, and the like matching the facial skin color may be recommended to the user. These are not listed one by one here.
  • In summary, the facial skin color recognition method of the embodiments of the present application intercepts the area image to be detected from the face image once the face image is acquired; converts the color space of the area image to the YCrCb color space and acquires the mean vector [avgY, avgCr, avgCb] of the area image; and finally selects, from a preset skin color template containing a plurality of skin color patches,
  • the skin color patch matching the mean vector [avgY, avgCr, avgCb] as the face skin color of the face image. The method can accurately recognize the specific facial skin color and conveniently provides an effective reference for personal image design.
  • Moreover, because the face skin color is determined based on the mean vector [avgY, avgCr, avgCb] and a preset skin color template, no large amount of training data is required, and the recognition process is simple and feasible.
  • In practice, the color of a face image is greatly affected by the illumination environment at acquisition time: face images acquired under different illumination environments, and especially under light sources of different colors, exhibit color shifts of different degrees, and such color shifts introduce a large error into the final skin color recognition result.
  • Therefore, Embodiment 2 of the present application proposes another facial skin color recognition method on the basis of Embodiment 1.
  • The difference from Embodiment 1 is that, before the color space of the intercepted area images is converted, the color shift of these area images is first eliminated.
  • FIG. 5 is a schematic flowchart of another facial skin color recognition method according to an embodiment of the present application.
  • the method for recognizing facial skin color includes but is not limited to the following steps:
  • Step 210: Acquire a face image.
  • Step 220: Intercept the region image to be detected from the face image.
  • Step 230: Eliminate the color shift of the region image.
  • That is, in this embodiment, before the color space conversion of the region image to be detected is performed, the color shift of that region image is first eliminated.
  • In practice, any color equalization method, such as the Gray World algorithm or the White Patch Retinex algorithm, may be used to eliminate the color shift of the region image.
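As a sketch of one of the named options, a minimal Gray World implementation for an RGB image stored as a NumPy array might look like the following; the text leaves the actual choice of equalization method open:

```python
import numpy as np

def gray_world(image):
    """Gray World color balance: scale each RGB channel so that its
    mean equals the global mean, removing a uniform color cast.
    `image` is an H x W x 3 array with values in [0, 255]."""
    img = image.astype(float)
    channel_means = img.reshape(-1, 3).mean(axis=0)  # mean of R, G, B
    gray = channel_means.mean()                      # target gray level
    gains = gray / channel_means                     # per-channel gain
    return np.clip(img * gains, 0, 255)
```

The Gray World assumption (the average scene color is achromatic) holds reasonably well for a cheek or nose patch under a colored light source, which is why it is a common pick for this kind of pre-processing.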
  • Step 240: Convert the color space of the region image after color shift removal to the YCrCb color space, and obtain the mean vector [avgY', avgCr', avgCb'] of the color-corrected region image.
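The exact conversion formulas for Step 240 are not spelled out in the text; a sketch assuming the common BT.601 full-range RGB-to-YCrCb definition (the convention OpenCV uses for COLOR_RGB2YCrCb) would be:

```python
import numpy as np

def rgb_to_ycrcb(image):
    """Convert an H x W x 3 RGB array (values 0..255) to YCrCb using
    the BT.601 full-range formulas. This specific variant is an
    assumption; the patent does not fix one."""
    img = image.astype(float)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma
    cr = (r - y) * 0.713 + 128.0            # red-difference chroma
    cb = (b - y) * 0.564 + 128.0            # blue-difference chroma
    return np.stack([y, cr, cb], axis=-1)
```

Separating luma (Y) from chroma (Cr, Cb) is the reason this space is favored for skin color work: skin tones cluster tightly in the Cr/Cb plane regardless of brightness.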
  • Step 250: Determine the face skin color of the face image based on the mean vector [avgY', avgCr', avgCb'] and a preset skin color template.
  • Steps 210, 220, 240, and 250 share the same or similar technical features as steps 110, 120, 130, and 140 described in the first embodiment; for their specific implementations, refer to the corresponding descriptions of steps 110, 120, 130, and 140 above, which are not repeated in this embodiment.
  • In this embodiment, the region image to be detected is first intercepted from the face image and the color shift of the region image is then removed, in order to reduce the amount of data the system must process.
  • Alternatively, in other embodiments, the color shift of the face image may be removed first when the face image is acquired, and the region image to be detected may then be intercepted from the color-corrected face image.
  • In summary, before converting the color space of the intercepted region image to the YCrCb color space, the face skin color recognition method provided by this embodiment first removes the color shift of the region image. This reduces the influence of the illumination environment on the color of the face image, thereby further improving the accuracy of face skin color recognition.
  • Referring to FIG. 6, the face skin color recognition device 6 includes, but is not limited to, a face image acquisition unit 61, an intercepting unit 62, a data processing unit 63, and an analysis unit 64.
  • the face image obtaining unit 61 is configured to acquire a face image.
  • The intercepting unit 62 is configured to intercept the region image to be detected from the face image, wherein, in some embodiments, the region image includes any one or more of a left cheek region image, a nose region image, and a right cheek region image.
  • The data processing unit 63 is configured to acquire the mean vector [avgY, avgCr, avgCb] of the region image in the YCrCb color space, wherein avgY represents the mean of the region image in the Y color channel, avgCr represents the mean of the region image in the Cr color channel, and avgCb represents the mean of the region image in the Cb color channel.
  • The analysis unit 64 is configured to determine the face skin color of the face image based on the mean vector [avgY, avgCr, avgCb] and a preset skin color template, wherein the skin color template includes a plurality of skin color patches, and the face skin color is one of the plurality of skin color patches.
  • In this embodiment, after the face image acquisition unit 61 acquires the face image, the intercepting unit 62 intercepts the region image to be detected from the face image; the data processing unit 63 then acquires the mean vector [avgY, avgCr, avgCb] of the region image in the YCrCb color space; and the analysis unit 64 determines the face skin color of the face image based on the mean vector [avgY, avgCr, avgCb] and a preset skin color template, wherein the skin color template includes a plurality of skin color patches and the face skin color is one of the plurality of skin color patches.
  • the facial skin color recognition device further includes:
  • the converting unit 65 is configured to convert the color space of the area image into the YCrCb color space.
  • In some embodiments, there are n region images, where n is a positive integer greater than 0.
  • The data processing unit 63 is specifically configured to: calculate, for each region image, the sum of its pixel values in the Y color channel sumY_i, the sum of its pixel values in the Cr color channel sumCr_i, the sum of its pixel values in the Cb color channel sumCb_i, and its area S_i; and obtain the mean vector [avgY, avgCr, avgCb] of the n region images in the YCrCb color space by the following formulas:
    avgY = (sumY_1 + ... + sumY_n) / (S_1 + ... + S_n)
    avgCr = (sumCr_1 + ... + sumCr_n) / (S_1 + ... + S_n)
    avgCb = (sumCb_1 + ... + sumCb_n) / (S_1 + ... + S_n)
  • Here, sumY_i represents the sum of the pixel values of the i-th region image in the Y color channel, sumCr_i represents the sum of the pixel values of the i-th region image in the Cr color channel, sumCb_i represents the sum of the pixel values of the i-th region image in the Cb color channel, and S_i represents the area of the i-th region image.
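The pooled mean over the n region images can be sketched as follows, assuming each region image is an H x W x 3 NumPy array already in the YCrCb color space:

```python
import numpy as np

def mean_vector(region_images):
    """Pooled mean [avgY, avgCr, avgCb] over n region images.
    Per-channel pixel sums (sumY_i, sumCr_i, sumCb_i) of all regions
    are divided by the total area (sum of S_i), so larger regions
    contribute proportionally more to the mean."""
    total = np.zeros(3)
    area = 0
    for region in region_images:
        total += region.reshape(-1, 3).sum(axis=0)  # sumY_i, sumCr_i, sumCb_i
        area += region.shape[0] * region.shape[1]   # S_i
    return total / area
```

Note this is an area-weighted mean, not a mean of per-region means; a small nose patch therefore cannot outweigh a large cheek patch.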
  • The analysis unit 64 is specifically configured to: acquire the standard vector [avgY_j, avgCr_j, avgCb_j] of each skin color patch in the preset skin color template; and select, as the face skin color of the face image, the skin color patch in the skin color template whose standard vector matches the mean vector [avgY, avgCr, avgCb]. Here, avgY_j represents the mean of the j-th skin color patch in the skin color template in the Y color channel, avgCr_j represents the mean of the j-th skin color patch in the Cr color channel, and avgCb_j represents the mean of the j-th skin color patch in the Cb color channel.
  • In some embodiments, the face skin color recognition device 6 further includes an image pre-processing unit 66. The image pre-processing unit 66 eliminates the color shift of the region image, which reduces the influence of the illumination environment on the color of the face image and thereby further improves the accuracy of face skin color recognition.
  • In summary, the face skin color recognition device provided by the embodiments of the present application acquires the face image through the face image acquisition unit 61; intercepts the region image to be detected from the face image through the intercepting unit 62; then acquires the mean vector [avgY, avgCr, avgCb] of the region image in the YCrCb color space through the data processing unit 63; and finally determines the face skin color of the face image through the analysis unit 64 based on the mean vector [avgY, avgCr, avgCb] and a preset skin color template comprising a plurality of skin color patches. No large amount of training data is required, and the recognition process is simple and feasible.
  • FIG. 7 is a schematic structural diagram of an intelligent terminal according to an embodiment of the present application.
  • The smart terminal 700 may be any type of smart terminal, such as a mobile phone, a tablet computer, or a beauty verification apparatus, and can execute the face skin color recognition method provided by any of the embodiments of the present application.
  • the smart terminal 700 includes:
  • at least one processor 701 and a memory 702, with one processor 701 taken as an example in FIG. 7.
  • The processor 701 and the memory 702 may be connected by a bus or in other ways; connection by a bus is taken as an example in FIG. 7.
  • The memory 702, as a non-transitory computer readable storage medium, can be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as the program instructions/modules corresponding to the face skin color recognition method in the embodiments of the present application (for example, the face image acquisition unit 61, the intercepting unit 62, the data processing unit 63, the analysis unit 64, the converting unit 65, and the image pre-processing unit 66 shown in FIG. 6).
  • The processor 701 executes the various functional applications and data processing of the face skin color recognition device by running the non-transitory software programs, instructions, and modules stored in the memory 702, that is, implements the face skin color recognition method of any of the above method embodiments.
  • The memory 702 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application required for at least one function, and the data storage area may store data created according to the use of the smart terminal 700, and the like.
  • memory 702 can include high speed random access memory, and can also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device.
  • memory 702 can optionally include memory remotely located relative to processor 701 that can be connected to smart terminal 700 over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • The one or more modules are stored in the memory 702 and, when executed by the one or more processors 701, perform the face skin color recognition method of any of the above method embodiments, for example, performing method steps 110 to 140 in FIG. 1, method steps 131 to 132 in FIG. 2, method steps 141 to 142 in FIG. 4, and method steps 210 to 250 in FIG. 5 described above, and implementing the functions of units 61 to 66 in FIG. 6.
  • The embodiments of the present application further provide a storage medium storing executable instructions which, when executed by one or more processors (for example, the one processor 701 in FIG. 7), cause the one or more processors to perform the face skin color recognition method of any of the above method embodiments, for example, performing method steps 110 to 140 in FIG. 1, method steps 131 to 132 in FIG. 2, method steps 141 to 142 in FIG. 4, and method steps 210 to 250 in FIG. 5 described above, and implementing the functions of units 61 to 66 in FIG. 6.
  • The device embodiments described above are merely illustrative, wherein the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • the various embodiments can be implemented by means of software plus a general hardware platform, and of course, by hardware.
  • A person skilled in the art can understand that all or part of the processes of the above method embodiments may be implemented by a computer program instructing related hardware. The program may be stored in a non-transitory computer readable storage medium and, when executed, may include the flows of the embodiments of the methods described above, where the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).


Abstract

An embodiment of the present invention provides a face skin color recognition method and device, and a smart terminal. The face skin color recognition method comprises: acquiring a face image; intercepting, from the face image, a region image to be detected; acquiring a mean vector [avgY, avgCr, avgCb] of the region image in a YCrCb color space; and determining, on the basis of the mean vector [avgY, avgCr, avgCb] and a preset skin color template, a face skin color of the face image, the skin color template comprising multiple skin color patches, and the face skin color being one of the multiple skin color patches. By means of the described technical solution, the embodiment of the present invention can accurately recognize a face skin color, thereby providing an effective reference for personal image design.
PCT/CN2017/112533 2017-11-23 2017-11-23 Procédé et dispositif de reconnaissance de couleur de peau de visage, et terminal intelligent WO2019100282A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2017/112533 WO2019100282A1 (fr) 2017-11-23 2017-11-23 Procédé et dispositif de reconnaissance de couleur de peau de visage, et terminal intelligent
CN201780009028.3A CN108701217A (zh) 2017-11-23 2017-11-23 一种人脸肤色识别方法、装置和智能终端

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/112533 WO2019100282A1 (fr) 2017-11-23 2017-11-23 Procédé et dispositif de reconnaissance de couleur de peau de visage, et terminal intelligent

Publications (1)

Publication Number Publication Date
WO2019100282A1 true WO2019100282A1 (fr) 2019-05-31

Family

ID=63844123

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/112533 WO2019100282A1 (fr) 2017-11-23 2017-11-23 Procédé et dispositif de reconnaissance de couleur de peau de visage, et terminal intelligent

Country Status (2)

Country Link
CN (1) CN108701217A (fr)
WO (1) WO2019100282A1 (fr)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110599554A (zh) * 2019-09-16 2019-12-20 腾讯科技(深圳)有限公司 人脸肤色的识别方法和装置、存储介质及电子装置
CN111815651A (zh) * 2020-07-08 2020-10-23 深圳市梦网视讯有限公司 一种人脸与身体肤色区域的分割方法、系统及设备
CN111815653A (zh) * 2020-07-08 2020-10-23 深圳市梦网视讯有限公司 一种人脸与身体肤色区域的分割方法、系统和设备
CN111950390A (zh) * 2020-07-22 2020-11-17 深圳数联天下智能科技有限公司 皮肤敏感度的确定方法及装置、存储介质及设备
CN112102154A (zh) * 2020-08-20 2020-12-18 北京百度网讯科技有限公司 图像处理方法、装置、电子设备和存储介质
CN113505674A (zh) * 2021-06-30 2021-10-15 上海商汤临港智能科技有限公司 人脸图像处理方法及装置、电子设备和存储介质
CN113749642A (zh) * 2021-07-07 2021-12-07 上海耐欣科技有限公司 量化皮肤潮红反应程度的方法、系统、介质及终端
CN113762010A (zh) * 2020-11-18 2021-12-07 北京沃东天骏信息技术有限公司 图像处理方法、装置、设备和存储介质
CN113938672A (zh) * 2021-09-16 2022-01-14 青岛信芯微电子科技股份有限公司 信号源的信号识别方法及终端设备

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109413508A (zh) * 2018-10-26 2019-03-01 广州虎牙信息科技有限公司 图像混合的方法、装置、设备、推流方法及直播系统
CN109712090A (zh) * 2018-12-18 2019-05-03 维沃移动通信有限公司 一种图像处理方法、装置和移动终端
CN109934092A (zh) * 2019-01-18 2019-06-25 深圳壹账通智能科技有限公司 识别色号方法、装置、计算机设备及存储介质
CN114174783A (zh) * 2019-04-09 2022-03-11 资生堂株式会社 用于创建具有改进的图像捕获的局部制剂的系统和方法
CN110245590B (zh) * 2019-05-29 2023-04-28 广东技术师范大学 一种基于皮肤图像检测的产品推荐方法及系统
CN111507944B (zh) * 2020-03-31 2023-07-04 北京百度网讯科技有限公司 皮肤光滑度的确定方法、装置和电子设备
CN113642358B (zh) * 2020-04-27 2023-10-10 华为技术有限公司 肤色检测方法、装置、终端和存储介质
CN111881789A (zh) * 2020-07-14 2020-11-03 深圳数联天下智能科技有限公司 肤色识别方法、装置、计算设备及计算机存储介质
CN112102349B (zh) * 2020-08-21 2023-12-08 深圳数联天下智能科技有限公司 肤色识别的方法、装置及计算机可读存储介质
CN113115085A (zh) * 2021-04-16 2021-07-13 海信电子科技(武汉)有限公司 一种视频播放方法及显示设备
CN113128416A (zh) * 2021-04-23 2021-07-16 领途智造科技(北京)有限公司 一种能识别肤色的面部识别方法及装置
CN113933293A (zh) * 2021-11-08 2022-01-14 中国联合网络通信集团有限公司 浓度检测方法及装置

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101706874A (zh) * 2009-12-25 2010-05-12 青岛朗讯科技通讯设备有限公司 基于肤色特征的人脸检测方法
CN104050455A (zh) * 2014-06-24 2014-09-17 深圳先进技术研究院 一种肤色检测方法及系统
CN104732200A (zh) * 2015-01-28 2015-06-24 广州远信网络科技发展有限公司 一种皮肤类型和皮肤问题的识别方法
CN105496414A (zh) * 2014-10-13 2016-04-20 株式会社爱茉莉太平洋 通过肤色定制的妆色诊断方法及通过肤色定制的妆色诊断装置

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7190829B2 (en) * 2003-06-30 2007-03-13 Microsoft Corporation Speedup of face detection in digital images
CN104156915A (zh) * 2014-07-23 2014-11-19 小米科技有限责任公司 肤色调整方法和装置


Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110599554A (zh) * 2019-09-16 2019-12-20 腾讯科技(深圳)有限公司 人脸肤色的识别方法和装置、存储介质及电子装置
CN111815653B (zh) * 2020-07-08 2024-01-30 深圳市梦网视讯有限公司 一种人脸与身体肤色区域的分割方法、系统和设备
CN111815651A (zh) * 2020-07-08 2020-10-23 深圳市梦网视讯有限公司 一种人脸与身体肤色区域的分割方法、系统及设备
CN111815653A (zh) * 2020-07-08 2020-10-23 深圳市梦网视讯有限公司 一种人脸与身体肤色区域的分割方法、系统和设备
CN111815651B (zh) * 2020-07-08 2024-01-30 深圳市梦网视讯有限公司 一种人脸与身体肤色区域的分割方法、系统及设备
CN111950390A (zh) * 2020-07-22 2020-11-17 深圳数联天下智能科技有限公司 皮肤敏感度的确定方法及装置、存储介质及设备
CN111950390B (zh) * 2020-07-22 2024-04-26 深圳数联天下智能科技有限公司 皮肤敏感度的确定方法及装置、存储介质及设备
CN112102154A (zh) * 2020-08-20 2020-12-18 北京百度网讯科技有限公司 图像处理方法、装置、电子设备和存储介质
CN112102154B (zh) * 2020-08-20 2024-04-26 北京百度网讯科技有限公司 图像处理方法、装置、电子设备和存储介质
CN113762010A (zh) * 2020-11-18 2021-12-07 北京沃东天骏信息技术有限公司 图像处理方法、装置、设备和存储介质
CN113505674B (zh) * 2021-06-30 2023-04-18 上海商汤临港智能科技有限公司 人脸图像处理方法及装置、电子设备和存储介质
CN113505674A (zh) * 2021-06-30 2021-10-15 上海商汤临港智能科技有限公司 人脸图像处理方法及装置、电子设备和存储介质
CN113749642A (zh) * 2021-07-07 2021-12-07 上海耐欣科技有限公司 量化皮肤潮红反应程度的方法、系统、介质及终端
CN113938672A (zh) * 2021-09-16 2022-01-14 青岛信芯微电子科技股份有限公司 信号源的信号识别方法及终端设备
CN113938672B (zh) * 2021-09-16 2024-05-10 青岛信芯微电子科技股份有限公司 信号源的信号识别方法及终端设备

Also Published As

Publication number Publication date
CN108701217A (zh) 2018-10-23

Similar Documents

Publication Publication Date Title
WO2019100282A1 (fr) Procédé et dispositif de reconnaissance de couleur de peau de visage, et terminal intelligent
JP7413400B2 (ja) 肌質測定方法、肌質等級分類方法、肌質測定装置、電子機器及び記憶媒体
CN108701216B (zh) 一种人脸脸型识别方法、装置和智能终端
Kumar et al. Face detection in still images under occlusion and non-uniform illumination
Marciniak et al. Influence of low resolution of images on reliability of face detection and recognition
CN111881913A (zh) 图像识别方法及装置、存储介质和处理器
Aiping et al. Face detection technology based on skin color segmentation and template matching
CN107507144B (zh) 肤色增强的处理方法、装置及图像处理装置
US11010894B1 (en) Deriving a skin profile from an image
Emeršič et al. Pixel-wise ear detection with convolutional encoder-decoder networks
KR20190076288A (ko) 중요도 맵을 이용한 지능형 주관적 화질 평가 시스템, 방법, 및 상기 방법을 실행시키기 위한 컴퓨터 판독 가능한 프로그램을 기록한 기록 매체
Gritzman et al. Comparison of colour transforms used in lip segmentation algorithms
CN110598574A (zh) 智能人脸监控识别方法及系统
Paul et al. PCA based geometric modeling for automatic face detection
WO2023273247A1 (fr) Procédé et dispositif de traitement d'image de visage, support de stockage lisible par ordinateur, terminal
Yadav et al. A novel approach for face detection using hybrid skin color model
Rahman et al. An automatic face detection and gender classification from color images using support vector machine
US20190347469A1 (en) Method of improving image analysis
CN111814738A (zh) 基于人工智能的人脸识别方法、装置、计算机设备及介质
Gangopadhyay et al. FACE DETECTION AND RECOGNITION USING HAAR CLASSIFIER AND LBP HISTOGRAM.
CN112102348A (zh) 图像处理设备
Shih et al. Multiskin color segmentation through morphological model refinement
CN113298753A (zh) 敏感肌的检测方法、图像处理方法、装置及设备
Hsiao et al. An intelligent skin‐color capture method based on fuzzy C‐means with applications
Kryszczuk et al. Color correction for face detection based on human visual perception metaphor

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17933104

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22.09.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17933104

Country of ref document: EP

Kind code of ref document: A1