WO2017071219A1 - Method for detecting skin region and device for detecting skin region - Google Patents

Method for detecting skin region and device for detecting skin region

Info

Publication number
WO2017071219A1
WO2017071219A1 PCT/CN2016/084528 CN2016084528W
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
probability
skin
preset
target image
Prior art date
Application number
PCT/CN2016/084528
Other languages
English (en)
French (fr)
Inventor
谭国富
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 filed Critical 腾讯科技(深圳)有限公司
Priority to EP16858663.4A priority Critical patent/EP3370204B1/en
Publication of WO2017071219A1 publication Critical patent/WO2017071219A1/zh
Priority to US15/705,102 priority patent/US10489635B2/en
Priority to US16/666,075 priority patent/US10783353B2/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/254Fusion techniques of classification results, e.g. of results related to same input data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/143Segmentation; Edge detection involving probabilistic approaches, e.g. Markov random field [MRF] modelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/40Analysis of texture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/809Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/162Detection; Localisation; Normalisation using pixel segmentation or colour matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20076Probabilistic image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Definitions

  • the present invention relates to the field of computer technology, and in particular, to a method for detecting a skin region and a device for detecting a skin region.
  • Determining the skin area is a difficult problem during image processing.
  • One method currently used is to select a skin area by a professional using a special selection tool in image processing software such as Photoshop.
  • However, this method has a high learning cost and is cumbersome to operate, making it difficult for ordinary users to master.
  • Embodiments of the present invention provide a method for detecting a skin region and a device for detecting a skin region, which are capable of automatically detecting a skin region in a picture.
  • Embodiments of the present invention provide a method for detecting a skin area, including:
  • acquiring a target image; acquiring a preset mean and a preset standard deviation of skin in the RGB color space, and a preset mean and a preset standard deviation in the YUV color space; calculating a first probability of each pixel in the target image according to the preset mean and preset standard deviation in the RGB color space, the first probability being the probability that the pixel is skin in the RGB color space; calculating a second probability of each pixel in the target image according to the preset mean and preset standard deviation in the YUV color space, the second probability being the probability that the pixel is skin in the YUV color space; and determining the probability that each pixel in the target image is skin, where the probability that a pixel is skin is the arithmetic mean of the first probability and the second probability of the pixel.
  • Optionally, the preset mean of skin in the RGB color space includes a preset mean m_r of the red light component, a preset mean m_g of the green light component, and a preset mean m_b of the blue light component, and the preset standard deviation includes a preset standard deviation n_r of the red light component, a preset standard deviation n_g of the green light component, and a preset standard deviation n_b of the blue light component;
  • Calculating the first probability of each pixel in the target image according to the preset mean and the preset standard deviation in the RGB color space includes: calculating a first parameter i of the pixel according to the formula i = 1 - |r - m_r|/(3n_r), where r is the luminance value of the red light component of the pixel, i is taken as 0 when 1 - |r - m_r|/(3n_r) is less than 0, and i is taken as 1 when it is greater than 1; calculating a second parameter j according to j = 1 - |g - m_g|/(3n_g), where g is the luminance value of the green light component, clamped to [0, 1] in the same way; calculating a third parameter k according to k = 1 - |b - m_b|/(3n_b), where b is the luminance value of the blue light component, clamped to [0, 1] in the same way; and calculating the first probability of the pixel as P1 = i^(1-i) × j^(1-j) × k^(1-k).
  • Optionally, the preset mean and preset standard deviation in the YUV color space include preset means m_u and m_v of chrominance, and the preset standard deviation includes preset standard deviations n_u and n_v of chrominance;
  • Calculating the second probability of each pixel in the target image according to the preset mean and the preset standard deviation in the YUV color space includes: calculating a fourth parameter p according to p = 1 - |u - m_u|/(3n_u), where u is the tone value of the pixel; calculating a fifth parameter q according to q = 1 - |v - m_v|/(3n_v), where v is the saturation value of the pixel; and calculating the second probability of the pixel as P2 = p^(1-p) × q^(1-q).
  • Optionally, after determining the probability that each pixel in the target image is skin, the method further includes: calculating the gray value of each pixel in the target image, where the gray value of a pixel is the product of the probability that the pixel is skin and 255; and processing the gray values of the pixels in the target image with a contour finding algorithm and, when a specific contour is obtained in the target image, setting the gray value and the skin probability of each pixel within the specific contour to 0, where a specific contour is a contour whose length and/or width is less than 5 pixels.
  • Optionally, after determining the probability that each pixel in the target image is skin, the method further includes: calculating the gray value of each pixel in the target image, where the gray value of a pixel is the product of the probability that the pixel is skin and 255; performing Gaussian blurring on the gray value of each pixel in the target image; and determining the probability that each pixel in the target image is skin, where the probability that a pixel is skin is the gray value of the pixel after Gaussian blurring divided by 255.
  • An embodiment of the present invention further provides an apparatus for detecting a skin area, including:
  • a first acquiring module configured to acquire a target image
  • a second obtaining module configured to acquire a preset mean value and a preset standard deviation of the skin in the RGB color space, and a preset mean value and a preset standard deviation in the YUV color space;
  • a first calculating module configured to calculate a first probability of each pixel in the target image according to the preset mean and the preset standard deviation in the RGB color space, where the first probability is the probability that the pixel is skin in the RGB color space;
  • a second calculating module configured to calculate a second probability of each pixel in the target image according to the preset mean and the preset standard deviation in the YUV color space, where the second probability is the probability that the pixel is skin in the YUV color space;
  • a first determining module configured to determine the probability that each pixel in the target image is skin, where the probability that a pixel is skin is the arithmetic mean of the first probability and the second probability of the pixel.
  • Optionally, the preset mean of skin in the RGB color space includes a preset mean m_r of the red light component, a preset mean m_g of the green light component, and a preset mean m_b of the blue light component, and the preset standard deviation includes a preset standard deviation n_r of the red light component, a preset standard deviation n_g of the green light component, and a preset standard deviation n_b of the blue light component;
  • the first calculating module is specifically configured to calculate the parameters i, j, and k and the first probability P1 = i^(1-i) × j^(1-j) × k^(1-k) of each pixel according to the formulas given above.
  • Optionally, the preset mean and preset standard deviation in the YUV color space include preset means m_u and m_v of chrominance, and the preset standard deviation includes preset standard deviations n_u and n_v of chrominance;
  • the second calculating module is specifically configured to calculate the parameters p and q and the second probability P2 = p^(1-p) × q^(1-q) of each pixel according to the formulas given above.
  • the device for detecting a skin region further includes:
  • a third calculating module configured to calculate, after the first determining module determines the probability that each pixel in the target image is skin, the gray value of each pixel in the target image, where the gray value of a pixel is the product of the probability that the pixel is skin and 255;
  • a fourth calculating module configured to process the gray values of the pixels in the target image with a contour finding algorithm and, when a specific contour is obtained in the target image, set the gray value and the skin probability of each pixel within the specific contour to 0, where a specific contour is a contour whose length and/or width is less than 5 pixels.
  • the device for detecting a skin region further includes:
  • a fifth calculating module configured to calculate, after the first determining module determines the probability that each pixel in the target image is skin, the gray value of each pixel in the target image, where the gray value of a pixel is the product of the probability that the pixel is skin and 255;
  • a sixth calculating module configured to perform Gaussian blurring on the gray value of each pixel in the target image;
  • a second determining module configured to determine the probability that each pixel in the target image is skin, where the probability that a pixel is skin is the gray value of the pixel after Gaussian blurring divided by 255.
  • In the present invention, after the preset mean and preset standard deviation of skin in the RGB color space and the preset mean and preset standard deviation in the YUV color space are obtained, each pixel in the target image is compared with the preset mean and preset standard deviation of skin in the RGB color space and with the preset mean and preset standard deviation of skin in the YUV color space, so that the probability that each pixel is skin can be calculated accurately. Subsequent operations for processing the target image can then take this probability into account, making the processing better targeted at the skin region of the target image and avoiding the need to manually determine the skin region before processing.
  • FIG. 1 is a flowchart of an embodiment of a method for detecting a skin region according to the present invention;
  • FIG. 2 is a flowchart of an embodiment of a method for detecting a skin region according to the present invention;
  • FIG. 3 is a flowchart of an embodiment of a method for detecting a skin region according to the present invention;
  • FIG. 4 is a schematic structural diagram of an embodiment of an apparatus for detecting a skin region according to the present invention;
  • FIG. 5 is a schematic structural diagram of an embodiment of an apparatus for detecting a skin region according to the present invention;
  • FIG. 6 is a schematic structural diagram of an embodiment of an apparatus for detecting a skin region according to the present invention;
  • FIG. 7 is a schematic structural diagram of an embodiment of an apparatus for detecting a skin region according to the present invention;
  • FIG. 8 is a schematic structural diagram of an embodiment of an apparatus for detecting a skin region according to the present invention.
  • Embodiments of the present invention provide a method for detecting a skin region and a device for detecting a skin region, which are capable of automatically detecting a skin region in a picture.
  • a method for detecting a skin region in an embodiment of the present invention includes the following steps.
  • In this embodiment, the target image is composed of a plurality of pixels, where each pixel includes, in the RGB color space, a luminance value r of the red light component, a luminance value g of the green light component, and a luminance value b of the blue light component, and includes, in the YUV color space, a tone value u and a saturation value v.
  • the target image in this embodiment may be an original entire image, or may be a partial image selected from an original entire image, which is not limited herein.
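  • For illustration only: the tone value u and saturation value v mentioned above can be obtained from the RGB components with a standard RGB-to-YUV conversion. The patent does not prescribe a particular conversion, so the BT.601 matrix and the helper below are assumptions, not part of the disclosed method.

```python
import numpy as np

def rgb_to_yuv(rgb):
    """Convert an (..., 3) array of r, g, b values in [0, 255] to (..., 3) YUV.

    The embodiment only states that each pixel has a tone value u and a
    saturation value v in the YUV colour space; the BT.601 matrix used here
    is an assumed choice, not taken from the patent text.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.147 * r - 0.289 * g + 0.436 * b   # chrominance U (tone value u)
    v = 0.615 * r - 0.515 * g - 0.100 * b    # chrominance V (saturation value v)
    return np.stack([y, u, v], axis=-1)
```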
  • The preset mean and preset standard deviation of skin in the RGB color space, and the preset mean and preset standard deviation in the YUV color space, may be stored in memory in advance and retrieved from memory when a calculation is needed. Of course, they can also be obtained in other ways, for example by receiving, in real time, input of the preset mean and preset standard deviation of skin in the RGB color space and of the preset mean and preset standard deviation in the YUV color space; no limitation is imposed here.
  • In one implementation, the preset mean and preset standard deviation of skin in the RGB color space, and the preset mean and preset standard deviation in the YUV color space, can be calculated from a large number of skin image samples.
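  • As a rough sketch of how such preset statistics might be derived (the embodiment only states that they can be calculated from a large number of skin image samples, so the helper below, including its name and inputs, is illustrative):

```python
import numpy as np

def preset_skin_stats(skin_rgb, skin_yuv):
    """Estimate the preset means and standard deviations from skin samples.

    skin_rgb: (N, 3) array of r, g, b values sampled from known skin regions.
    skin_yuv: (N, 3) array of y, u, v values for the same sample pixels.
    Returns ((m_r, m_g, m_b), (n_r, n_g, n_b), (m_u, m_v), (n_u, n_v)).
    """
    m_rgb = skin_rgb.mean(axis=0)            # m_r, m_g, m_b
    n_rgb = skin_rgb.std(axis=0)             # n_r, n_g, n_b
    m_uv = skin_yuv[:, 1:].mean(axis=0)      # m_u, m_v (chrominance only)
    n_uv = skin_yuv[:, 1:].std(axis=0)       # n_u, n_v
    return tuple(m_rgb), tuple(n_rgb), tuple(m_uv), tuple(n_uv)
```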
  • the preset mean and preset standard deviation of the skin in the RGB color space, as well as the preset mean and preset standard deviation in the YUV color space, can also be obtained by other methods.
  • For a given pixel in the target image, the probability that the pixel is skin in the RGB color space can be determined by comparing the specific parameters of the pixel with the acquired preset mean and preset standard deviation of skin in the RGB color space. There are various ways of making this comparison; one of them is described below by way of example.
  • In this embodiment, the preset mean of skin in the RGB color space includes a preset mean m_r of the red light component, a preset mean m_g of the green light component, and a preset mean m_b of the blue light component, and the preset standard deviation includes a preset standard deviation n_r of the red light component, a preset standard deviation n_g of the green light component, and a preset standard deviation n_b of the blue light component. The first parameter i of the pixel is calculated as i = 1 - |r - m_r|/(3n_r), taking i = 0 when this value is less than 0 and i = 1 when it is greater than 1; the second parameter j is calculated as j = 1 - |g - m_g|/(3n_g) with the same clamping; the third parameter k is calculated as k = 1 - |b - m_b|/(3n_b) with the same clamping; and the first probability of the pixel is calculated as P1 = i^(1-i) × j^(1-j) × k^(1-k).
  • Here, the closer r is to m_r, the closer i is to 1; the closer g is to m_g, the closer j is to 1; and the closer b is to m_b, the closer k is to 1. Therefore, in the formula for P1, the parameters i, j, and k are transformed using i^(1-i), j^(1-j), and k^(1-k), respectively, so that the closer i, j, and k are to 1, the larger the value of P1, that is, the greater the probability that the pixel is skin.
  • The above describes one comparison method in detail. In practical applications, the coefficient 3 in the formulas i = 1 - |r - m_r|/(3n_r), j = 1 - |g - m_g|/(3n_g) and k = 1 - |b - m_b|/(3n_b) may also take other values, which is not limited here. Likewise, the formula for calculating the first probability P1 from i, j, and k may be a different formula, which is also not limited here.
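  • A minimal NumPy sketch of the first-probability calculation above; the symbols m_r, n_r, etc. follow the embodiment, while the vectorised implementation itself is an illustrative assumption rather than the patent's own code.

```python
import numpy as np

def first_probability(rgb, m_rgb, n_rgb):
    """P1 for every pixel of an (H, W, 3) float image with channels r, g, b.

    m_rgb = (m_r, m_g, m_b) and n_rgb = (n_r, n_g, n_b) are the preset mean
    and preset standard deviation of skin in the RGB colour space.
    """
    m = np.asarray(m_rgb, dtype=np.float64)
    n = np.asarray(n_rgb, dtype=np.float64)
    # i, j, k = 1 - |c - m_c| / (3 n_c), clamped to [0, 1] as in the embodiment
    ijk = np.clip(1.0 - np.abs(rgb - m) / (3.0 * n), 0.0, 1.0)
    i, j, k = ijk[..., 0], ijk[..., 1], ijk[..., 2]
    # P1 = i^(1-i) * j^(1-j) * k^(1-k)
    return i ** (1.0 - i) * j ** (1.0 - j) * k ** (1.0 - k)
```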
  • Similarly, for a given pixel in the target image, the probability that the pixel is skin in the YUV color space can be determined by comparing the specific parameters of the pixel with the acquired preset mean and preset standard deviation of skin in the YUV color space. There are various ways of making this comparison; one of them is described below by way of example.
  • In this embodiment, the preset mean and preset standard deviation in the YUV color space include preset means m_u and m_v of chrominance, and the preset standard deviation includes preset standard deviations n_u and n_v of chrominance. The fourth parameter p of the pixel is calculated as p = 1 - |u - m_u|/(3n_u), where u is the tone value of the pixel; the fifth parameter q is calculated as q = 1 - |v - m_v|/(3n_v), where v is the saturation value of the pixel; and the second probability of the pixel is calculated as P2 = p^(1-p) × q^(1-q).
  • Since the closer u is to m_u, the closer p is to 1, and the closer v is to m_v, the closer q is to 1, the parameters p and q are transformed in the formula for P2 using p^(1-p) and q^(1-q), respectively, so that the closer p and q are to 1, the larger the value of P2, that is, the greater the probability that the pixel is skin.
  • The above describes one comparison method in detail. In practical applications, the coefficient 3 in the formulas p = 1 - |u - m_u|/(3n_u) and q = 1 - |v - m_v|/(3n_v) may also take other values, which is not limited here. Likewise, the formula for calculating the second probability P2 from p and q may be a different formula, which is also not limited here.
  • The probability that each pixel in the target image is skin is then determined, where the probability that a pixel is skin is the arithmetic mean of the first probability and the second probability of the pixel.
  • For a given pixel in the target image, after the probability P1 that the pixel is skin in the RGB color space and the probability P2 that it is skin in the YUV color space have been calculated, the average of the two probabilities, i.e. (P1 + P2)/2, can be taken as the probability P that the pixel is skin.
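  • Continuing the sketch, the second probability P2 and the arithmetic mean of step 105 can be computed in the same way; this is again an illustrative implementation, and clamping p and q to [0, 1] is an added assumption that keeps the fractional powers real.

```python
import numpy as np

def second_probability(yuv, m_uv, n_uv):
    """P2 for every pixel, using the u and v channels of an (H, W, 3) YUV image.

    Clamping p and q to [0, 1] is an added assumption (the embodiment states
    clamping only for i, j, k); it keeps the fractional powers real-valued.
    """
    p = np.clip(1.0 - np.abs(yuv[..., 1] - m_uv[0]) / (3.0 * n_uv[0]), 0.0, 1.0)
    q = np.clip(1.0 - np.abs(yuv[..., 2] - m_uv[1]) / (3.0 * n_uv[1]), 0.0, 1.0)
    return p ** (1.0 - p) * q ** (1.0 - q)

def skin_probability(p1, p2):
    """Step 105: the skin probability of a pixel is the arithmetic mean of P1 and P2."""
    return (p1 + p2) / 2.0
```

  • For example, P = skin_probability(first_probability(rgb, m_rgb, n_rgb), second_probability(rgb_to_yuv(rgb), m_uv, n_uv)) would give the per-pixel probability map that the post-processing examples below operate on.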
  • After the probability that each pixel in the target image is skin has been obtained, image processing may be performed on the pixels whose skin probability is greater than a certain threshold. The processing may include, for example, brightening those pixels.
  • In this embodiment, after the preset mean and preset standard deviation of skin in the RGB color space and the preset mean and preset standard deviation in the YUV color space are obtained, each pixel in the target image is compared with the preset mean and preset standard deviation of skin in the RGB color space and with the preset mean and preset standard deviation of skin in the YUV color space, so that the probability that each pixel is skin can be calculated accurately. Subsequent operations for processing the target image can then take this probability into account, making the processing better targeted at the skin region of the target image and avoiding the need to manually determine the skin region before processing.
  • In practical applications, other objects on the skin region of the target image (such as clothes or hair) or other factors may interfere with the calculated probability that a pixel is skin. Therefore, preferably, in this embodiment the target image is further processed after step 105 to optimize the probability value of each pixel being skin. There are various ways of doing this; two of them are described below by way of example.
  • Example 1: as shown in FIG. 2, after step 105 the method for detecting a skin region in this embodiment further includes: 201. Calculating the gray value of each pixel in the target image, where the gray value of a pixel is the product of the probability that the pixel is skin and 255.
  • In this embodiment, in order to determine the non-skin regions, each probability value is first converted into a gray value according to the probability that each pixel in the target image is skin. Specifically, the probability that each pixel in the target image is skin is multiplied by the maximum gray value (i.e. 255), which yields a new image in which regions with larger gray values are more likely to be skin.
  • 202. Processing the gray values of the pixels in the target image with a contour finding algorithm and, when a specific contour is obtained in the target image, setting the gray value and the skin probability of each pixel within the specific contour to 0, where a specific contour is a contour whose length and/or width is less than 5 pixels.
  • Since skin regions generally cover a relatively large area, a contour finding algorithm can be used to determine the contours that appear in the target image. The contour finding algorithm is prior art and is not described here.
  • When a contour is small, the possibility that the area enclosed by the contour is skin can be ruled out. Therefore, in this embodiment, when a contour whose length and/or width is less than 5 pixels (referred to as a specific contour for convenience of description) is found in the target image, the possibility that the region within that contour is a skin region is ruled out, and the probability that each pixel within the specific contour is skin is set to 0.
  • In this way, contour noise on the skin region of the target image can be removed and the skin region can be determined more accurately.
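  • As an illustration of this filtering step, the sketch below uses OpenCV's contour finding; the choice of OpenCV, the binarisation threshold, and the use of bounding boxes to measure contour length and width are assumptions, since the embodiment only refers to a prior-art contour finding algorithm.

```python
import cv2
import numpy as np

def suppress_small_contours(prob, min_size=5):
    """Zero the gray value and skin probability of pixels inside "specific
    contours", i.e. contours whose length and/or width is below min_size px."""
    gray = (prob * 255.0).astype(np.uint8)               # gray value = probability * 255
    # Binarise before contour finding; the threshold of 0 is an assumption.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY)
    # OpenCV >= 4 returns (contours, hierarchy).
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    small = np.zeros_like(gray)
    for contour in contours:
        _, _, w, h = cv2.boundingRect(contour)            # bounding box width/height
        if w < min_size or h < min_size:
            cv2.drawContours(small, [contour], -1, 255, thickness=cv2.FILLED)
    prob = prob.copy()
    prob[small == 255] = 0.0
    gray[small == 255] = 0
    return prob, gray
```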
  • Example 2: as shown in FIG. 3, after step 105 the method for detecting a skin region in this embodiment further includes: 301. Calculating the gray value of each pixel in the target image, where the gray value of a pixel is the product of the probability that the pixel is skin and 255.
  • In practical applications, the edges of the skin region in the target image may change discontinuously. Therefore, in this embodiment, Gaussian blurring is applied to the target image to ensure that the edges of the skin region change continuously. Specifically, each probability value is first converted into a gray value according to the probability that each pixel in the target image is skin: the probability that each pixel is skin is multiplied by the maximum gray value (i.e. 255), which yields a new image in which regions with larger gray values are more likely to be skin.
  • 302. Performing Gaussian blurring on the gray value of each pixel in the target image; this is prior art and is not described here. Specifically, a 3×3 Gaussian blur can be used. Of course, this is merely an example and is not a limitation.
  • 303. Determining the probability that each pixel in the target image is skin. After the target image has been Gaussian blurred, the gray value of each pixel in the target image changes; dividing the gray value of each pixel of the blurred image by 255 gives the probability that the pixel is skin.
  • In this example, the edges of the skin region in the target image are thus guaranteed to change continuously.
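  • A sketch of Example 2 (steps 301-303): the probabilities are converted to gray values, blurred with a 3×3 Gaussian kernel, and divided by 255. Using cv2.GaussianBlur here is an assumed implementation choice, not prescribed by the embodiment.

```python
import cv2
import numpy as np

def smooth_skin_probability(prob):
    """Steps 301-303: probabilities -> gray values -> 3x3 Gaussian blur -> /255."""
    gray = (prob * 255.0).astype(np.uint8)       # step 301: gray value = probability * 255
    blurred = cv2.GaussianBlur(gray, (3, 3), 0)  # step 302: 3x3 Gaussian blur
    return blurred.astype(np.float64) / 255.0    # step 303: back to probabilities
```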
  • Figure 4 is a schematic view showing the structure of an embodiment of the apparatus for detecting a skin area of the present invention.
  • the apparatus for detecting a skin area in this embodiment can be used to perform the method of detecting a skin area in the embodiment shown in Fig. 1.
  • the apparatus 400 for detecting a skin area in the embodiment of the present invention includes:
  • a first obtaining module 401 configured to acquire a target image
  • a second obtaining module 402 configured to acquire a preset mean value and a preset standard deviation of the skin in the RGB color space, and a preset mean value and a preset standard deviation in the YUV color space;
  • a first calculating module 403 configured to calculate a first probability of each pixel in the target image according to the preset mean and the preset standard deviation in the RGB color space, where the first probability is the probability that the pixel is skin in the RGB color space;
  • a second calculating module 404 configured to calculate a second probability of each pixel in the target image according to the preset mean and the preset standard deviation in the YUV color space, where the second probability is the probability that the pixel is skin in the YUV color space;
  • a first determining module 405 configured to determine the probability that each pixel in the target image is skin, where the probability that a pixel is skin is the arithmetic mean of the first probability and the second probability of the pixel.
  • After the probability that each pixel in the target image is skin has been obtained, image processing may be performed on the pixels whose skin probability is greater than a certain threshold. The processing may include, for example, brightening those pixels.
  • In this embodiment, after the preset mean and preset standard deviation of skin in the RGB color space and the preset mean and preset standard deviation in the YUV color space are obtained, each pixel in the target image is compared with the preset mean and preset standard deviation of skin in the RGB color space and with the preset mean and preset standard deviation of skin in the YUV color space, so that the probability that each pixel is skin can be calculated accurately. Subsequent operations for processing the target image can then take this probability into account, making the processing better targeted at the skin region of the target image and avoiding the need to manually determine the skin region before processing.
  • Optionally, the preset mean of skin in the RGB color space includes a preset mean m_r of the red light component, a preset mean m_g of the green light component, and a preset mean m_b of the blue light component, and the preset standard deviation includes a preset standard deviation n_r of the red light component, a preset standard deviation n_g of the green light component, and a preset standard deviation n_b of the blue light component;
  • the first calculating module 403 is specifically configured to calculate the parameters i, j, and k and the first probability P1 = i^(1-i) × j^(1-j) × k^(1-k) of each pixel according to the formulas given above.
  • Optionally, the preset mean and preset standard deviation in the YUV color space include preset means m_u and m_v of chrominance, and the preset standard deviation includes preset standard deviations n_u and n_v of chrominance;
  • the second calculating module 404 is specifically configured to calculate the parameters p and q and the second probability P2 = p^(1-p) × q^(1-q) of each pixel according to the formulas given above.
  • the device for detecting a skin region further includes:
  • a third calculating module 501 configured to calculate, after the first determining module determines the probability that each pixel in the target image is skin, the gray value of each pixel in the target image, where the gray value of a pixel is the product of the probability that the pixel is skin and 255;
  • a fourth calculating module 502 configured to process the gray values of the pixels in the target image with a contour finding algorithm and, when a specific contour is obtained in the target image, set the gray value and the skin probability of each pixel within the specific contour to 0, where a specific contour is a contour whose length and/or width is less than 5 pixels.
  • the apparatus for detecting a skin area further includes:
  • a fifth calculating module 601 configured to calculate, after the first determining module determines the probability that each pixel in the target image is skin, the gray value of each pixel in the target image, where the gray value of a pixel is the product of the probability that the pixel is skin and 255;
  • a sixth calculating module 602 configured to perform Gaussian blurring on the gray value of each pixel in the target image;
  • a second determining module 603 configured to determine the probability that each pixel in the target image is skin, where the probability that a pixel is skin is the gray value of the pixel after Gaussian blurring divided by 255.
  • the device for detecting a skin region in the embodiment of the present invention is described above from the perspective of a unitized functional entity.
  • the device for detecting a skin region in the embodiment of the present invention is described below from the perspective of hardware processing.
  • FIG. 7 is a schematic structural view of an embodiment of a device for detecting a skin region according to the present invention.
  • In this embodiment, the apparatus 700 for detecting a skin region includes a processor 701 and a memory 702 coupled to the processor 701, where the processor 701 reads a computer program stored in the memory 702 to perform the following operations: acquiring a target image; acquiring a preset mean and a preset standard deviation of skin in the RGB color space, and a preset mean and a preset standard deviation in the YUV color space; calculating a first probability of each pixel in the target image according to the preset mean and preset standard deviation in the RGB color space, the first probability being the probability that the pixel is skin in the RGB color space; calculating a second probability of each pixel in the target image according to the preset mean and preset standard deviation in the YUV color space, the second probability being the probability that the pixel is skin in the YUV color space; and determining the probability that each pixel in the target image is skin, where the probability that a pixel is skin is the arithmetic mean of the first probability and the second probability of the pixel.
  • In a first possible implementation, the preset mean of skin in the RGB color space includes a preset mean m_r of the red light component, a preset mean m_g of the green light component, and a preset mean m_b of the blue light component, and the preset standard deviation includes a preset standard deviation n_r of the red light component, a preset standard deviation n_g of the green light component, and a preset standard deviation n_b of the blue light component;
  • calculating the first probability of each pixel in the target image according to the preset mean and the preset standard deviation in the RGB color space includes calculating the parameters i, j, and k and the first probability P1 = i^(1-i) × j^(1-j) × k^(1-k) according to the formulas given above.
  • In a second possible implementation, the preset mean and preset standard deviation in the YUV color space include preset means m_u and m_v of chrominance, and the preset standard deviation includes preset standard deviations n_u and n_v of chrominance;
  • calculating the second probability of each pixel in the target image according to the preset mean and the preset standard deviation in the YUV color space includes calculating the parameters p and q and the second probability P2 = p^(1-p) × q^(1-q) according to the formulas given above.
  • In a third possible implementation, after determining the probability that each pixel in the target image is skin, the processor 701 further performs the following steps: calculating the gray value of each pixel in the target image, where the gray value of a pixel is the product of the probability that the pixel is skin and 255; and processing the gray values of the pixels in the target image with a contour finding algorithm and, when a specific contour is obtained in the target image, setting the gray value and the skin probability of each pixel within the specific contour to 0, where a specific contour is a contour whose length and/or width is less than 5 pixels.
  • In a fourth possible implementation, after determining the probability that each pixel in the target image is skin, the processor 701 further performs the following steps: calculating the gray value of each pixel in the target image, where the gray value of a pixel is the product of the probability that the pixel is skin and 255; performing Gaussian blurring on the gray value of each pixel in the target image; and determining the probability that each pixel in the target image is skin, where the probability that a pixel is skin is the gray value of the pixel after Gaussian blurring divided by 255.
  • the embodiment of the present invention further provides another device for detecting a skin region.
  • As shown in FIG. 8, for convenience of description only the parts related to the embodiment of the present invention are shown; for specific technical details that are not disclosed, refer to the method part of the embodiments of the present invention.
  • the terminal can be any terminal device such as a mobile phone or a computer. The following uses a mobile phone as an example for explanation.
  • FIG. 8 is a block diagram showing a partial structure of a mobile phone related to a terminal provided by an embodiment of the present invention.
  • The mobile phone includes components such as a radio frequency (RF) circuit 810, a memory 820, an input unit 830, a display unit 840, a sensor 850, an audio circuit 860, a wireless fidelity (WiFi) module 870, a processor 880, and a power supply 890. Those skilled in the art will understand that the structure shown in FIG. 8 does not constitute a limitation of the mobile phone, which may include more or fewer components than shown, combine certain components, or use a different arrangement of components.
  • The RF circuit 810 can be used for receiving and transmitting signals during information transmission and reception or during a call; in particular, after downlink information from a base station is received, it is passed to the processor 880 for processing, and uplink data is sent to the base station.
  • RF circuit 810 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like.
  • RF circuitry 810 can also communicate with the network and other devices via wireless communication.
  • The above wireless communication may use any communication standard or protocol, including but not limited to Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
  • the memory 820 can be used to store software programs and modules, and the processor 880 executes various functional applications and data processing of the mobile phone by running software programs and modules stored in the memory 820.
  • The memory 820 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and the applications required for at least one function (such as a sound playing function and an image playing function), and the data storage area may store data created according to the use of the mobile phone (such as audio data and a phone book).
  • memory 820 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
  • the input unit 830 can be configured to receive input numeric or character information and to generate key signal inputs related to user settings and function controls of the handset.
  • the input unit 830 may include a touch panel 831 and other input devices 832.
  • The touch panel 831, also referred to as a touch screen, can collect touch operations by the user on or near it (such as operations performed by the user on or near the touch panel 831 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program.
  • the touch panel 831 can include two parts: a touch detection device and a touch controller.
  • The touch detection device detects the touch orientation of the user, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 880, and can receive and execute commands sent by the processor 880.
  • the touch panel 831 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
  • the input unit 830 may also include other input devices 832.
  • other input devices 832 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, joysticks, and the like.
  • the display unit 840 can be used to display information input by the user or information provided to the user as well as various menus of the mobile phone.
  • the display unit 840 can include a display panel 841.
  • the display panel 841 can be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
  • Further, the touch panel 831 can cover the display panel 841. When the touch panel 831 detects a touch operation on or near it, it passes the operation to the processor 880 to determine the type of the touch event, and the processor 880 then provides a corresponding visual output on the display panel 841 according to the type of the touch event.
  • Although in FIG. 8 the touch panel 831 and the display panel 841 are implemented as two independent components to provide the input and output functions of the mobile phone, in some embodiments the touch panel 831 can be integrated with the display panel 841 to implement the input and output functions of the phone.
  • the handset can also include at least one type of sensor 850, such as a light sensor, motion sensor, and other sensors.
  • Specifically, the light sensor may include an ambient light sensor and a proximity sensor, where the ambient light sensor may adjust the brightness of the display panel 841 according to the brightness of the ambient light, and the proximity sensor may turn off the display panel 841 and/or the backlight when the mobile phone is moved to the ear.
  • As one type of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes) and, when stationary, can detect the magnitude and direction of gravity; it can be used in applications that recognize the posture of the mobile phone (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and in vibration-recognition functions (such as a pedometer and tap detection). Other sensors that may also be configured on the mobile phone, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, are not described here.
  • An audio circuit 860, a speaker 861, and a microphone 862 can provide an audio interface between the user and the handset.
  • The audio circuit 860 can convert received audio data into an electrical signal and transmit it to the speaker 861, which converts it into a sound signal for output; on the other hand, the microphone 862 converts a collected sound signal into an electrical signal, which the audio circuit 860 receives and converts into audio data; after being processed by the processor 880, the audio data is sent, for example, to another mobile phone via the RF circuit 810, or output to the memory 820 for further processing.
  • WiFi is a short-range wireless transmission technology
  • the mobile phone can help users to send and receive emails, browse web pages, and access streaming media through the WiFi module 870, which provides users with wireless broadband Internet access.
  • Although FIG. 8 shows the WiFi module 870, it can be understood that it is not an essential component of the mobile phone and can be omitted as needed without changing the essence of the invention.
  • The processor 880 is the control center of the mobile phone; it connects the various parts of the entire phone using various interfaces and lines, and performs the phone's various functions and processes data by running or executing the software programs and/or modules stored in the memory 820 and invoking the data stored in the memory 820, thereby monitoring the phone as a whole.
  • the processor 880 may include one or more processing units; preferably, the processor 880 may integrate an application processor and a modem processor, where the application processor mainly processes an operating system, a user interface, an application, and the like.
  • the modem processor primarily handles wireless communications. It will be appreciated that the above described modem processor may also not be integrated into the processor 880.
  • The handset also includes a power supply 890 (such as a battery) that supplies power to the various components. Preferably, the power supply can be logically coupled to the processor 880 through a power management system, so that charging, discharging, and power-consumption management functions are handled through the power management system.
  • the handset may also include a camera, a Bluetooth module, and the like.
  • The camera can be used by the user to take photographs; the processor then performs skin detection on the photograph taken by the user according to the steps of the method of the present invention, detects the region where the skin is located, and processes that region.
  • the processor 880 included in the terminal has a function of executing the above method flow.
  • the disclosed system, apparatus, and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • The division of units is merely a logical functional division; in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • The integrated unit, if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer-readable storage medium. Based on such an understanding, the part of the technical solution of the present invention that contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention.
  • The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

A method and a device for detecting a skin region. The method includes: acquiring a target image (101); acquiring a preset mean and a preset standard deviation of skin in the RGB color space, and a preset mean and a preset standard deviation in the YUV color space (102); calculating a first probability of each pixel in the target image according to the preset mean and preset standard deviation in the RGB color space, the first probability being the probability that the pixel is skin in the RGB color space (103); calculating a second probability of each pixel in the target image according to the preset mean and preset standard deviation in the YUV color space, the second probability being the probability that the pixel is skin in the YUV color space (104); and determining the probability that each pixel in the target image is skin, where the probability that a pixel is skin is the arithmetic mean of the first probability and the second probability of the pixel (105). The method and device can automatically detect the skin region in a picture.

Description

检测皮肤区域的方法和检测皮肤区域的装置
本申请要求于2015年10月26日提交中国专利局、申请号为201510700287.3、发明名称为“检测皮肤区域的方法和检测皮肤区域的装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本发明涉及计算机技术领域,尤其涉及一种检测皮肤区域的方法和检测皮肤区域的装置。
背景技术
随着数码相机、手机摄像的普及,拍摄的图片的数量越来越多。然而,由于光线、摄像器材、个人相貌、拍摄角度、拍摄姿势、镜头畸变等等原因,一些拍摄出来的图片效果(特别是皮肤部分)往往不尽如人意。所以许多用户需要对照片中的皮肤区域进行处理。
在图片处理过程中,确定皮肤区域是一个困难的问题。目前的一种方法是由专业人士采用Photoshop等图像处理软件中专用的选取工具来选取出皮肤区域。但这种方法的学习成本高,操作也比较麻烦,一般用户难以掌握。
发明内容
本发明实施例提供了一种检测皮肤区域的方法和检测皮肤区域的装置,能够自动检测到图片中的皮肤区域。
本发明实施例提供一种检测皮肤区域的方法,包括:
获取目标图像;
获取皮肤在RGB颜色空间中的预置均值和预置标准差,以及在YUV颜色空间中的预置均值和预置标准差;
根据所述RGB颜色空间中的预置均值和预置标准差计算所述目标图像中各像素的第一概率,所述第一概率为所述像素在所述RGB颜色空间中为皮肤的概率;
根据所述YUV颜色空间中的预置均值和预置标准差计算所述目标图像中各像素的第二概率,所述第二概率为所述像素在所述YUV颜色空间中为皮肤的概率;
确定所述目标图像中各像素为皮肤的概率,其中,所述像素为皮肤的概率为所述像素的第一概率和第二概率的算数平均值。
可选的,所述皮肤在RGB颜色空间中的预置均值包括红光成分的预置均值mr、绿光成分的预置均值mg和蓝光成分的预置均值mb,预置标准差包括红光成分的预置标准差nr、绿光成分的预置标准差ng和蓝光成分的预置标准差nb
所述根据所述RGB颜色空间中的预置均值和预置标准差计算所述目标图像中各像素的第一概率,包括:
根据公式i=1-|r-mr|/3nr计算所述像素的第一参数i,其中,r为所述像素的红光成分的亮度值,当1-|r-mr|/3nr小于0时,取i=0,当1-|r-mr|/3nr大于1时,取i=1;
根据公式j=1-|g-mg|/3ng计算所述像素的第二参数j,其中,g为所述像素的绿光成分的亮度值,当1-|g-mg|/3ng小于0时,取j=0,当1-|g-mg|/3ng大于1时,取j=1;
根据公式k=1-|b-mb|/3nb计算所述像素的第三参数k,其中,b为所述像素的蓝光成分的亮度值,当1-|b-mb|/3nb小于0时,取i=0,当1-|b-mb|/3nb 大于1时,取k=1;
根据公式P1=i1-i×j1-j×k1-k计算所述像素的第一概率P1。
可选的,所述在YUV颜色空间中的预置均值和预置标准差包括色度的预置均值mu和mv,预置标准差包括色度的预置标准差nu和nv
所述根据所述YUV颜色空间中的预置均值和预置标准差计算所述目标图像中各像素的第二概率,包括:
根据公式p=1-|u-mu|/3nu计算所述像素的第四参数p,其中u为所述像素的色调值;
根据公式q=1-|v-mv|/3nv计算所述像素的第五参数q,其中v为所述像素的饱和度值;
根据公式P2=p1-p×q1-q计算所述像素的第二概率P2。
可选的,所述确定所述目标图像中各像素为皮肤的概率之后,还包括:
计算所述目标图像中各像素的灰度值,其中,所述像素的灰度值为所述像素为皮肤的概率与255的乘积;
采用轮廓查找算法对所述目标图像中各像素的灰度值进行计算,当在所述目标图像中获取到特定轮廓时,将所述特定轮廓中的各像素的灰度值以及为皮肤的概率置为0,所述特定轮廓为长和/或宽小于5个像素的轮廓。
可选的,所述确定所述目标图像中各像素为皮肤的概率之后,还包括:
计算所述目标图像中各像素的灰度值,其中,所述像素的灰度值为所述像素为皮肤的概率与255的乘积;
对所述目标图像中各像素的灰度值进行高斯模糊;
确定所述目标图形中各像素为皮肤的概率,其中,所述像素为皮肤 的概率为高斯模糊后的像素的灰度值除以255。
本发明实施例还提供一种检测皮肤区域的装置,包括:
第一获取模块,用于获取目标图像;
第二获取模块,用于获取皮肤在RGB颜色空间中的预置均值和预置标准差,以及在YUV颜色空间中的预置均值和预置标准差;
第一计算模块,用于根据所述RGB颜色空间中的预置均值和预置标准差计算所述目标图像中各像素的第一概率,所述第一概率为所述像素在所述RGB颜色空间中为皮肤的概率;
第二计算模块,用于根据所述YUV颜色空间中的预置均值和预置标准差计算所述目标图像中各像素的第二概率,所述第二概率为所述像素在所述YUV颜色空间中为皮肤的概率;
第一确定模块,用于确定所述目标图像中各像素为皮肤的概率,其中,所述像素为皮肤的概率为所述像素的第一概率和第二概率的算数平均值。
可选的,所述皮肤在RGB颜色空间中的预置均值包括红光成分的预置均值mr、绿光成分的预置均值mg和蓝光成分的预置均值mb,预置标准差包括红光成分的预置标准差nr、绿光成分的预置标准差ng和蓝光成分的预置标准差nb
所述第一计算模块具体用于:
根据公式i=1-|r-mr|/3nr计算所述像素的第一参数i,其中,r为所述像素的红光成分的亮度值,当1-|r-mr|/3nr小于0时,取i=0,当1-|r-mr|/3nr大于1时,取i=1;
根据公式j=1-|g-mg|/3ng计算所述像素的第二参数j,其中,g为所述像素的绿光成分的亮度值,当1-|g-mg|/3ng小于0时,取j=0,当1-|g-mg|/3ng大于1时,取j=1;
根据公式k=1-|b-mb|/3nb计算所述像素的第三参数k,其中,b为所述像素的蓝光成分的亮度值,当1-|b-mb|/3nb小于0时,取i=0,当1-|b-mb|/3nb大于1时,取k=1;
根据公式P1=i1-i×j1-j×k1-k计算所述像素的第一概率P1。
可选的,所述在YUV颜色空间中的预置均值和预置标准差包括色度的预置均值mu和mv,预置标准差包括色度的预置标准差nu和nv
所述第二计算模块具体用于:
根据公式p=1-|u-mu|/3nu计算所述像素的第四参数p,其中u为所述像素的色调值;
根据公式q=1-|v-mv|/3nv计算所述像素的第五参数q,其中v为所述像素的饱和度值;
根据公式P2=p1-p×q1-q计算所述像素的第二概率P2。
可选的,所述检测皮肤区域的装置还包括:
第三计算模块,用于在所述第一确定模块确定所述目标图像中各像素为皮肤的概率之后,计算所述目标图像中各像素的灰度值,其中,所述像素的灰度值为所述像素为皮肤的概率与255的乘积;
第四计算模块,用于采用轮廓查找算法对所述目标图像中各像素的灰度值进行计算,当在所述目标图像中获取到特定轮廓时,将所述特定轮廓中的各像素的灰度值以及为皮肤的概率置为0,所述特定轮廓为长和/或宽小于5个像素的轮廓。
可选的,所述检测皮肤区域的装置还包括:
第五计算模块,用于在所述第一确定模块确定所述目标图像中各像素为皮肤的概率之后,计算所述目标图像中各像素的灰度值,其中,所述像素的灰度值为所述像素为皮肤的概率与255的乘积;
第六计算模块,用于对所述目标图像中各像素的灰度值进行高斯模 糊;
第二确定模块,用于确定所述目标图形中各像素为皮肤的概率,其中,所述像素为皮肤的概率为高斯模糊后的像素的灰度值除以255。
本发明中,获取皮肤在RGB颜色空间中的预置均值和预置标准差,以及在YUV颜色空间中的预置均值和预置标准差后,通过将目标图像中各像素分别与皮肤在RGB颜色空间中的预置均值和预置标准差,以及与皮肤在YUV颜色空间中的预置均值和预置标准差进行比较,能够准确地计算出各像素为皮肤的概率,以便后续在对该目标图像进行处理时的运算能够结合该概率,使得该处理更加针对该目标图像中的皮肤区域,避免了在处理前需手动确定出目标图像中的皮肤区域。
附图说明
图1为本发明检测皮肤区域的方法的一个实施例的流程图;
图2为本发明检测皮肤区域的方法的一个实施例的流程图;
图3为本发明检测皮肤区域的方法的一个实施例的流程图;
图4为本发明检测皮肤区域的装置的一个实施例的结构示意图;
图5为本发明检测皮肤区域的装置的一个实施例的结构示意图;
图6为本发明检测皮肤区域的装置的一个实施例的结构示意图;
图7为本发明检测皮肤区域的装置的一个实施例的结构示意图;
图8为本发明检测皮肤区域的装置的一个实施例的结构示意图。
具体实施方式
本发明实施例提供了一种检测皮肤区域的方法和检测皮肤区域的装置,能够自动检测到图片中的皮肤区域。
为了使本技术领域的人员更好地理解本发明方案,下面将结合本发 明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明一部分的实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都应当属于本发明保护的范围。
本发明的说明书和权利要求书及上述附图中的术语“包括”和“具有”以及它们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、系统、产品或设备固有的其它步骤或单元。
请参阅图1,本发明实施例中检测皮肤区域的方法包括以下步骤。
101、获取目标图像。
本实施例中,目标图像由多个像素组成,其中每个像素在RGB颜色空间内包括红光成分的亮度值r、绿光成分的亮度值g和蓝光成分的亮度值b,在YUV颜色空间内包括色调值u和饱和度值v。
需注意的是,本实施例中的目标图像可以是原始的一整幅图像,也可以是从原始的一整幅图中选取的部分图像,在此不作限制。
102、获取皮肤在RGB颜色空间中的预置均值和预置标准差,以及在YUV颜色空间中的预置均值和预置标准差。
皮肤在RGB颜色空间中的预置均值和预置标准差,以及在YUV颜色空间中的预置均值和预置标准差可预先存储在存储器中,在需要计算时,从存储器中获取。当然,也可以通过其他方法获取,例如,通过实时接收对皮肤在RGB颜色空间中的预置均值和预置标准差,以及在YUV颜色空间中的预置均值和预置标准差的输入来获取,在此不作限制。
在一种实现方式中,皮肤在RGB颜色空间中的预置均值和预置标准差,以及在YUV颜色空间中的预置均值和预置标准差可通过大量皮肤图片 的样本计算出来。皮肤在RGB颜色空间中的预置均值和预置标准差,以及在YUV颜色空间中的预置均值和预置标准差也可通过其他方法获得。
103、根据所述RGB颜色空间中的预置均值和预置标准差计算所述目标图像中各像素的第一概率,所述第一概率为所述像素在所述RGB颜色空间中为皮肤的概率。
对目标图像中的某一个像素,可通过将该像素的具体参数与所获取到的皮肤在RGB颜色空间中的预置均值和预置标准差进行比较,来判断该像素在RGB颜色空间内为皮肤的概率。比较的方法有多种,下面对其中的一种进行举例描述。
本实施例中,所述皮肤在RGB颜色空间中的预置均值包括红光成分的预置均值mr、绿光成分的预置均值mg和蓝光成分的预置均值mb,预置标准差包括红光成分的预置标准差nr、绿光成分的预置标准差ng和蓝光成分的预置标准差nb
根据公式i=1-|r-mr|/3nr计算所述像素的第一参数i,其中,当1-|r-mr|/3nr小于0时,取i=0,当1-|r-mr|/3nr大于1时,取i=1。
根据公式j=1-|g-mg|/3ng计算所述像素的第二参数j,其中,当1-|g-mg|/3ng小于0时,取j=0,当1-|g-mg|/3ng大于1时,取j=1。
根据公式k=1-|b-mb|/3nb计算所述像素的第三参数k,其中,当1-|b-mb|/3nb小于0时,取i=0,当1-|b-mb|/3nb大于1时,取k=1。
根据公式P1=i1-i×j1-j×k1-k计算所述像素的第一概率P1。
其中,由于r越接近mr时,i越接近1;g越接近mg时,j越接近1;b越接近mb时,j越接近1。因此,在计算P1的公式中,对参数i、j、k分别采用公式i1-i、j1-j、k1-k进行变换,使得当i、j、k分别越接近1时,P1的数值也越大,也即该像素为皮肤的概率越大。
上面对其中的一种比较方法进行了详细描述。实际应用中,公式i=1-|r -mr|/3nr、j=1-|g-mg|/3ng和k=1-|b-mb|/3nb中的系数3也可以为其他数值,在此不作限制。根据i、j、k计算第一概率P1的公式也可以是其他公式,在此不作限制。
104、根据所述YUV颜色空间中的预置均值和预置标准差计算所述目标图像中各像素的第二概率,所述第二概率为所述像素在所述YUV颜色空间中为皮肤的概率。
对目标图像中的某一个像素,可通过将该像素的具体参数与所获取到的皮肤在YUV颜色空间中的预置均值和预置标准差进行比较,来判断该像素在YUV颜色空间内为皮肤的概率。比较的方法有多种,下面对其中的一种进行举例描述。
本实施例中,所述在YUV颜色空间中的预置均值和预置标准差包括色度的预置均值mu和mv,预置标准差包括色度的预置标准差nu和nv
根据公式p=1-|u-mu|/3nu计算所述像素的第四参数p,其中u为所述像素的色调值。
根据公式q=1-|v-mv|/3nv计算所述像素的第五参数q,其中v为所述像素的饱和度值。
根据公式P2=p1-p×q1-q计算所述像素的第二概率P2。
其中,由于u越接近mu时,p越接近1;v越接近mv时,q越接近1。因此,在计算P2的公式中,对参数p、q分别采用公式p1-p、q1-q进行变换,使得当p、q分别越接近1时,P2的数值也越大,也即该像素为皮肤的概率越大。
上面对其中的一种比较方法进行了详细描述。实际应用中,公式p=1-|u-mu|/3nu、q=1-|v-mv|/3nv中的系数3也可以为其他数值,在此不作限制。根据p、q计算第一概率P2的公式也可以是其他公式,在此不作限制。
105、确定所述目标图像中各像素为皮肤的概率,其中,所述像素为 皮肤的概率为所述像素的第一概率和第二概率的算数平均值。
对目标图像中的某一个像素,当计算出该像素在RGB颜色空间内为皮肤的概率P1以及在YUV颜色空间内为皮肤的概率P2后,可将该两个概率的平均值(也即(P1+P2)/2)作为该像素为皮肤的概率P。
在获取到目标图像中各像素为皮肤的概率之后,可以针对像素为皮肤的概率大于特定阈值的像素进行图像处理。该处理可包括对该像素进行提亮等处理。
本实施例中,获取皮肤在RGB颜色空间中的预置均值和预置标准差,以及在YUV颜色空间中的预置均值和预置标准差后,将目标图像中各像素分别与皮肤在RGB颜色空间中的预置均值和预置标准差,以及与皮肤在YUV颜色空间中的预置均值和预置标准差进行比较,能够准确地计算出各像素为皮肤的概率,以便后续在对该目标图像进行处理时的运算能够结合该概率,使得该处理更加针对该目标图像中的皮肤区域,避免了在处理前需手动确定出目标图像中的皮肤区域。
实际应用中,目标图像中的皮肤区域上可能由于出现其他东西(如衣服、头发等)或者其他原因而对计算像素为皮肤的概率造成干扰,因此,优选的,本实施例中,在步骤105之后还对目标图像进行处理,以优化各像素为皮肤的概率值。处理的方法有多种,下面对其中的两种进行举例描述。
举例一、如图2所示,本实施例的检测皮肤区域的方法中,在步骤105之后还包括:
201、计算所述目标图像中各像素的灰度值,其中,所述像素的灰度值为所述像素为皮肤的概率与255的乘积。
本实施例中,为了确定出非皮肤区域,首先根据目标图像中各像素为皮肤的概率将各概率值转换为灰度值。具体的,将目标图像中各像素为 皮肤的概率与灰度值的最大值(也即255)相乘,那么可得到一副新的图像。其中,该新的图像中灰度值越大的区域为皮肤的可能性越大。
202、采用轮廓查找算法对所述目标图像中各像素的灰度值进行计算,当在所述目标图像中获取到特定轮廓时,将所述特定轮廓中的各像素的灰度值以及为皮肤的概率置为0,所述特定轮廓为长和/或宽小于5个像素的轮廓。
由于皮肤区域的面积一般较大,因此可通过轮廓查找算法来确定目标图形中出现的轮廓。该轮廓查找算法为现有技术,在此不再赘述。
当该轮廓较小时,可排除该轮廓所圈出的区域为皮肤的可能性。因此,本实施例中,当确定出目标图像中出现长和/或宽小于5个像素的轮廓(为描述方便,称为特定轮廓)时,可排除该特定轮廓内的区域为皮肤区域的可能性,因此将该特定轮廓内的各像素为皮肤的概率置为0。
这样,可以去掉目标图像中皮肤区域上的轮廓噪声,能够更加精确的确定出皮肤区域。
举例二、如图3所示,本实施例的检测皮肤区域的方法中,在步骤105之后还包括:
301、计算所述目标图像中各像素的灰度值,其中,所述像素的灰度值为所述像素为皮肤的概率与255的乘积。
实际应用中,可能出现目标图形中的皮肤区域的边缘变化不连续。因此,本实施例中,通过对目标图像进行高斯模糊,以保证皮肤区域的边缘的变化连续性。具体的,首先根据目标图像中各像素为皮肤的概率将各概率值转换为灰度值。本实施例中,具体将目标图像中各像素为皮肤的概率与灰度值的最大值(也即255)相乘,那么可得到一副新的图像。其中,该新的图像中灰度值越大的区域为皮肤的可能性越大。
302、对所述目标图像中各像素的灰度值进行高斯模糊。
对目标图像中各像素的灰度值进行高斯模糊为现有技术,在此不再赘述。具体的,可以进行3×3的高斯模糊。当然,上述仅为举例,并不作限制。
303、确定所述目标图形中各像素为皮肤的概率,其中,所述像素为皮肤的概率为高斯模糊后的像素的灰度值除以255。
由于对目标图像进行高斯模糊后,目标图像中各像素的灰度值发生改变。将高斯模糊后的目标图像的各像素的灰度值除以255,即可得到各像素为皮肤的概率。
本例中,可以保证目标图像中的皮肤区域的边缘的变化连续性。
上面对举例一和举例二进行了描述,实际应用中,还可以同时采用举例一和举例二的方法,在此不作限制。
上面对本发明实施例中的检测皮肤区域的方法进行了描述。下面对本发明实施例中的检测皮肤区域的装置进行描述。
请参阅图4。图4为本发明的检测皮肤区域的装置的一个实施例的结构示意图。本实施例中的检测皮肤区域的装置可以用于执行图1所示实施例中的检测皮肤区域的方法。本发明实施例中检测皮肤区域的装置400包括:
第一获取模块401,用于获取目标图像;
第二获取模块402,用于获取皮肤在RGB颜色空间中的预置均值和预置标准差,以及在YUV颜色空间中的预置均值和预置标准差;
第一计算模块403,用于根据所述RGB颜色空间中的预置均值和预置标准差计算所述目标图像中各像素的第一概率,所述第一概率为所述像素在所述RGB颜色空间中为皮肤的概率;
第二计算模块404,用于根据所述YUV颜色空间中的预置均值和预置标准差计算所述目标图像中各像素的第二概率,所述第二概率为所述像 素在所述YUV颜色空间中为皮肤的概率;
第一确定模块405,用于确定所述目标图像中各像素为皮肤的概率,其中,所述像素为皮肤的概率为所述像素的第一概率和第二概率的算数平均值。
在获取到目标图像中各像素为皮肤的概率之后,可以针对像素为皮肤的概率大于特定阈值的像素进行图像处理。该处理可包括对该像素进行提亮等处理。
本实施例中,获取皮肤在RGB颜色空间中的预置均值和预置标准差,以及在YUV颜色空间中的预置均值和预置标准差后,将目标图像中各像素分别与皮肤在RGB颜色空间中的预置均值和预置标准差,以及与皮肤在YUV颜色空间中的预置均值和预置标准差进行比较,能够准确地计算出各像素为皮肤的概率,以便后续在对该目标图像进行处理时的运算能够结合该概率,使得该处理更加针对该目标图像中的皮肤区域,避免了在处理前需手动确定出目标图像中的皮肤区域。
可选的,所述皮肤在RGB颜色空间中的预置均值包括红光成分的预置均值mr、绿光成分的预置均值mg和蓝光成分的预置均值mb,预置标准差包括红光成分的预置标准差nr、绿光成分的预置标准差ng和蓝光成分的预置标准差nb
所述第一计算模块403具体用于:
根据公式i=1-|r-mr|/3nr计算所述像素的第一参数i,其中,r为所述像素的红光成分的亮度值,当1-|r-mr|/3nr小于0时,取i=0,当1-|r-mr|/3nr大于1时,取i=1;
根据公式j=1-|g-mg|/3ng计算所述像素的第二参数j,其中,g为所述像素的绿光成分的亮度值,1-|g-mg|/3ng小于0时,取j=0,当1-|g-mg|/3ng大于1时,取j=1;
根据公式k=1-|b-mb|/3nb计算所述像素的第三参数k,其中,b为所述像素的蓝光成分的亮度值,当1-|b-mb|/3nb小于0时,取i=0,当1-|b-mb|/3nb大于1时,取k=1;
根据公式P1=i1-i×j1-j×k1-k计算所述像素的第一概率P1。
可选的,所述在YUV颜色空间中的预置均值和预置标准差包括色度的预置均值mu和mv,预置标准差包括色度的预置标准差nu和nv
所述第二计算模块404具体用于:
根据公式p=1-|u-mu|/3nu计算所述像素的第四参数p,其中u为所述像素的色调值;
根据公式q=1-|v-mv|/3nv计算所述像素的第五参数q,其中v为所述像素的饱和度值;
根据公式P2=p1-p×q1-q计算所述像素的第二概率P2。
可选的,如图5所示,所述检测皮肤区域的装置还包括:
第三计算模块501,用于在所述第一确定模块确定所述目标图像中各像素为皮肤的概率之后,计算所述目标图像中各像素的灰度值,其中,所述像素的灰度值为所述像素为皮肤的概率与255的乘积;
第四计算模块502,用于采用轮廓查找算法对所述目标图像中各像素的灰度值进行计算,当在所述目标图像中获取到特定轮廓时,将所述特定轮廓中的各像素的灰度值以及为皮肤的概率置为0,所述特定轮廓为长和/或宽小于5个像素的轮廓。
可选的,如图6所示,所述检测皮肤区域的装置还包括:
第五计算模块601,用于在所述第一确定模块确定所述目标图像中各像素为皮肤的概率之后,计算所述目标图像中各像素的灰度值,其中,所述像素的灰度值为所述像素为皮肤的概率与255的乘积;
第六计算模块602,用于对所述目标图像中各像素的灰度值进行高斯 模糊;
第二确定模块603,用于确定所述目标图形中各像素为皮肤的概率,其中,所述像素为皮肤的概率为高斯模糊后的像素的灰度值除以255。
上面从单元化功能实体的角度对本发明实施例中的检测皮肤区域的装置进行了描述,下面从硬件处理的角度对本发明实施例中的检测皮肤区域的装置进行描述。
请参阅图7,图7为本发明的检测皮肤区域的装置的一个实施例的结构示意图。本实施例中,检测皮肤区域的装置700包括:
处理器701,以及耦合到所述处理器701的存储器702;其中,所述处理器701读取所述存储器702中存储的计算机程序用于执行以下操作:
获取目标图像;
获取皮肤在RGB颜色空间中的预置均值和预置标准差,以及在YUV颜色空间中的预置均值和预置标准差;
根据所述RGB颜色空间中的预置均值和预置标准差计算所述目标图像中各像素的第一概率,所述第一概率为所述像素在所述RGB颜色空间中为皮肤的概率;
根据所述YUV颜色空间中的预置均值和预置标准差计算所述目标图像中各像素的第二概率,所述第二概率为所述像素在所述YUV颜色空间中为皮肤的概率;
确定所述目标图像中各像素为皮肤的概率,其中,所述像素为皮肤的概率为所述像素的第一概率和第二概率的算数平均值。
在本发明的第一个可能的实施方式中,所述皮肤在RGB颜色空间中的预置均值包括红光成分的预置均值mr、绿光成分的预置均值mg和蓝光成分的预置均值mb,预置标准差包括红光成分的预置标准差nr、绿光成分的预置标准差ng和蓝光成分的预置标准差nb
所述根据所述RGB颜色空间中的预置均值和预置标准差计算所述目标图像中各像素的第一概率,包括:
根据公式i=1-|r-mr|/3nr计算所述像素的第一参数i,其中,r为所述像素的红光成分的亮度值,当1-|r-mr|/3nr小于0时,取i=0,当1-|r-mr|/3nr大于1时,取i=1;
根据公式j=1-|g-mg|/3ng计算所述像素的第二参数j,其中,g为所述像素的绿光成分的亮度值,当1-|g-mg|/3ng小于0时,取j=0,当1-|g-mg|/3ng大于1时,取j=1;
根据公式k=1-|b-mb|/3nb计算所述像素的第三参数k,其中,b为所述像素的蓝光成分的亮度值,当1-|b-mb|/3nb小于0时,取i=0,当1-|b-mb|/3nb大于1时,取k=1;
根据公式P1=i1-i×j1-j×k1-k计算所述像素的第一概率P1。
在本发明的第二个可能的实施方式中,所述在YUV颜色空间中的预置均值和预置标准差包括色度的预置均值mu和mv,预置标准差包括色度的预置标准差nu和nv
所述根据所述YUV颜色空间中的预置均值和预置标准差计算所述目标图像中各像素的第二概率,包括:
根据公式p=1-|u-mu|/3nu计算所述像素的第四参数p,其中u为所述像素的色调值;
根据公式q=1-|v-mv|/3nv计算所述像素的第五参数q,其中v为所述像素的饱和度值;
根据公式P2=p1-p×q1-q计算所述像素的第二概率P2。
在本发明的第三个可能的实施方式中,所述确定所述目标图像中各像素为皮肤的概率之后,所述处理器701还执行以下步骤:
计算所述目标图像中各像素的灰度值,其中,所述像素的灰度值为 所述像素为皮肤的概率与255的乘积;
采用轮廓查找算法对所述目标图像中各像素的灰度值进行计算,当在所述目标图像中获取到特定轮廓时,将所述特定轮廓中的各像素的灰度值以及为皮肤的概率置为0,所述特定轮廓为长和/或宽小于5个像素的轮廓。
在本发明的第四个可能的实施方式中,所述确定所述目标图像中各像素为皮肤的概率之后,所述处理器701还执行以下步骤:
计算所述目标图像中各像素的灰度值,其中,所述像素的灰度值为所述像素为皮肤的概率与255的乘积;
对所述目标图像中各像素的灰度值进行高斯模糊;
确定所述目标图形中各像素为皮肤的概率,其中,所述像素为皮肤的概率为高斯模糊后的像素的灰度值除以255。
本发明实施例还提供了另一种检测皮肤区域的装置,如图8所示,为了便于说明,仅示出了与本发明实施例相关的部分,具体技术细节未揭示的,请参照本发明实施例方法部分。该终端可以为手机、电脑等任意终端设备。下面以手机为例进行说明。
图8示出的是与本发明实施例提供的终端相关的手机的部分结构的框图。参考图8,手机包括:射频(Radio Frequency,RF)电路810、存储器820、输入单元830、显示单元840、传感器850、音频电路860、无线保真(wireless fidelity,WiFi)模块870、处理器880、以及电源890等部件。本领域技术人员可以理解,图8中示出的手机结构并不构成对手机的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
下面结合图8对手机的各个构成部件进行具体的介绍:
RF电路810可用于收发信息或通话过程中,信号的接收和发送,特 别地,将基站的下行信息接收后,给处理器880处理;另外,将设计上行的数据发送给基站。通常,RF电路810包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器(Low Noise Amplifier,LNA)、双工器等。此外,RF电路810还可以通过无线通信与网络和其他设备通信。上述无线通信可以使用任一通信标准或协议,包括但不限于全球移动通讯系统(Global System of Mobile communication,GSM)、通用分组无线服务(General Packet Radio Service,GPRS)、码分多址(Code Division Multiple Access,CDMA)、宽带码分多址(Wideband Code Division Multiple Access,WCDMA)、长期演进(Long Term Evolution,LTE)、电子邮件、短消息服务(Short Messaging Service,SMS)等。
存储器820可用于存储软件程序以及模块,处理器880通过运行存储在存储器820的软件程序以及模块,从而执行手机的各种功能应用以及数据处理。存储器820可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据手机的使用所创建的数据(比如音频数据、电话本等)等。此外,存储器820可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。
输入单元830可用于接收输入的数字或字符信息,以及产生与手机的用户设置以及功能控制有关的键信号输入。具体地,输入单元830可包括触控面板831以及其他输入设备832。触控面板831,也称为触摸屏,可收集用户在其上或附近的触摸操作(比如用户使用手指、触笔等任何适合的物体或附件在触控面板831上或在触控面板831附近的操作),并根据预先设定的程式驱动相应的连接装置。可选的,触控面板831可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位, 并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给处理器880,并能接收处理器880发来的命令并加以执行。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触控面板831。除了触控面板831,输入单元830还可以包括其他输入设备832。具体地,其他输入设备832可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆等中的一种或多种。
显示单元840可用于显示由用户输入的信息或提供给用户的信息以及手机的各种菜单。显示单元840可包括显示面板841,可选的,可以采用液晶显示器(Liquid Crystal Display,LCD)、有机发光二极管(Organic Light-Emitting Diode,OLED)等形式来配置显示面板841。进一步的,触控面板831可覆盖显示面板841,当触控面板831检测到在其上或附近的触摸操作后,传送给处理器880以确定触摸事件的类型,随后处理器880根据触摸事件的类型在显示面板841上提供相应的视觉输出。虽然在图8中,触控面板831与显示面板841是作为两个独立的部件来实现手机的输入和输入功能,但是在某些实施例中,可以将触控面板831与显示面板841集成而实现手机的输入和输出功能。
手机还可包括至少一种传感器850,比如光传感器、运动传感器以及其他传感器。具体地,光传感器可包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节显示面板841的亮度,接近传感器可在手机移动到耳边时,关闭显示面板841和/或背光。作为运动传感器的一种,加速计传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别手机姿态的应用(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;至于手机还可配置的陀螺仪、气压计、湿度计、温度计、红外线 传感器等其他传感器,在此不再赘述。
音频电路860、扬声器861,传声器862可提供用户与手机之间的音频接口。音频电路860可将接收到的音频数据转换后的电信号,传输到扬声器861,由扬声器861转换为声音信号输出;另一方面,传声器862将收集的声音信号转换为电信号,由音频电路860接收后转换为音频数据,再将音频数据输出处理器880处理后,经RF电路810以发送给比如另一手机,或者将音频数据输出至存储器820以便进一步处理。
WiFi属于短距离无线传输技术,手机通过WiFi模块870可以帮助用户收发电子邮件、浏览网页和访问流式媒体等,它为用户提供了无线的宽带互联网访问。虽然图8示出了WiFi模块870,但是可以理解的是,其并不属于手机的必须构成,完全可以根据需要在不改变发明的本质的范围内而省略。
处理器880是手机的控制中心,利用各种接口和线路连接整个手机的各个部分,通过运行或执行存储在存储器820内的软件程序和/或模块,以及调用存储在存储器820内的数据,执行手机的各种功能和处理数据,从而对手机进行整体监控。可选的,处理器880可包括一个或多个处理单元;优选的,处理器880可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器880中。
手机还包括给各个部件供电的电源890(比如电池),优选的,电源可以通过电源管理系统与处理器880逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。
尽管未示出,手机还可以包括摄像头、蓝牙模块等。摄像头可用于用户拍照,之后处理器根据本发明方法的步骤,对用户的拍出的照片进行 皮肤检测,检测出皮肤所在的区域,之后对皮肤所在的区域进行处理。
在本发明实施例中,该终端所包括的处理器880具有执行以上方法流程的功能。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品 存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本发明各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,以上实施例仅用以说明本发明的技术方案,而非对其限制;尽管参照前述实施例对本发明进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本发明各实施例技术方案的精神和范围。

Claims (15)

  1. A method for detecting a skin region, comprising:
    obtaining a target image;
    obtaining a preset mean and a preset standard deviation of skin in an RGB color space, and a preset mean and a preset standard deviation of skin in a YUV color space;
    calculating a first probability of each pixel in the target image according to the preset mean and the preset standard deviation in the RGB color space, the first probability being the probability that the pixel is skin in the RGB color space;
    calculating a second probability of each pixel in the target image according to the preset mean and the preset standard deviation in the YUV color space, the second probability being the probability that the pixel is skin in the YUV color space; and
    determining the probability that each pixel in the target image is skin, where the probability that the pixel is skin is the arithmetic mean of the first probability and the second probability of the pixel.
  2. The method for detecting a skin region according to claim 1, wherein the preset mean of skin in the RGB color space includes a preset mean mr of a red component, a preset mean mg of a green component, and a preset mean mb of a blue component, and the preset standard deviation includes a preset standard deviation nr of the red component, a preset standard deviation ng of the green component, and a preset standard deviation nb of the blue component; and
    the calculating a first probability of each pixel in the target image according to the preset mean and the preset standard deviation in the RGB color space comprises:
    calculating a first parameter i of the pixel according to the formula i = 1 - |r - mr|/(3nr), where r is the luminance value of the red component of the pixel, i = 0 is taken when 1 - |r - mr|/(3nr) is less than 0, and i = 1 is taken when 1 - |r - mr|/(3nr) is greater than 1;
    calculating a second parameter j of the pixel according to the formula j = 1 - |g - mg|/(3ng), where g is the luminance value of the green component of the pixel, j = 0 is taken when 1 - |g - mg|/(3ng) is less than 0, and j = 1 is taken when 1 - |g - mg|/(3ng) is greater than 1;
    calculating a third parameter k of the pixel according to the formula k = 1 - |b - mb|/(3nb), where b is the luminance value of the blue component of the pixel, k = 0 is taken when 1 - |b - mb|/(3nb) is less than 0, and k = 1 is taken when 1 - |b - mb|/(3nb) is greater than 1; and
    calculating the first probability P1 of the pixel according to the formula P1 = i^(1-i) × j^(1-j) × k^(1-k).
  3. The method for detecting a skin region according to claim 1, wherein the preset mean in the YUV color space includes preset chrominance means mu and mv, and the preset standard deviation includes preset chrominance standard deviations nu and nv; and
    the calculating a second probability of each pixel in the target image according to the preset mean and the preset standard deviation in the YUV color space comprises:
    calculating a fourth parameter p of the pixel according to the formula p = 1 - |u - mu|/(3nu), where u is the hue value of the pixel;
    calculating a fifth parameter q of the pixel according to the formula q = 1 - |v - mv|/(3nv), where v is the saturation value of the pixel; and
    calculating the second probability P2 of the pixel according to the formula P2 = p^(1-p) × q^(1-q).
  4. The method for detecting a skin region according to claim 1, wherein after the determining the probability that each pixel in the target image is skin, the method further comprises:
    calculating a grayscale value of each pixel in the target image, where the grayscale value of the pixel is the product of the probability that the pixel is skin and 255; and
    performing calculation on the grayscale values of the pixels in the target image by using a contour-finding algorithm, and when a specific contour is obtained in the target image, setting the grayscale value and the skin probability of each pixel within the specific contour to 0, the specific contour being a contour whose length and/or width is less than 5 pixels.
  5. The method for detecting a skin region according to claim 1, wherein after the determining the probability that each pixel in the target image is skin, the method further comprises:
    calculating a grayscale value of each pixel in the target image, where the grayscale value of the pixel is the product of the probability that the pixel is skin and 255;
    performing Gaussian blur on the grayscale values of the pixels in the target image; and
    determining the probability that each pixel in the target image is skin, where the probability that the pixel is skin is the grayscale value of the pixel after the Gaussian blur divided by 255.
  6. An apparatus for detecting a skin region, comprising:
    a first obtaining module, configured to obtain a target image;
    a second obtaining module, configured to obtain a preset mean and a preset standard deviation of skin in an RGB color space, and a preset mean and a preset standard deviation of skin in a YUV color space;
    a first calculation module, configured to calculate a first probability of each pixel in the target image according to the preset mean and the preset standard deviation in the RGB color space, the first probability being the probability that the pixel is skin in the RGB color space;
    a second calculation module, configured to calculate a second probability of each pixel in the target image according to the preset mean and the preset standard deviation in the YUV color space, the second probability being the probability that the pixel is skin in the YUV color space; and
    a first determining module, configured to determine the probability that each pixel in the target image is skin, where the probability that the pixel is skin is the arithmetic mean of the first probability and the second probability of the pixel.
  7. The apparatus for detecting a skin region according to claim 6, wherein the preset mean of skin in the RGB color space includes a preset mean mr of a red component, a preset mean mg of a green component, and a preset mean mb of a blue component, and the preset standard deviation includes a preset standard deviation nr of the red component, a preset standard deviation ng of the green component, and a preset standard deviation nb of the blue component; and
    the first calculation module is specifically configured to:
    calculate a first parameter i of the pixel according to the formula i = 1 - |r - mr|/(3nr), where r is the luminance value of the red component of the pixel, i = 0 is taken when 1 - |r - mr|/(3nr) is less than 0, and i = 1 is taken when 1 - |r - mr|/(3nr) is greater than 1;
    calculate a second parameter j of the pixel according to the formula j = 1 - |g - mg|/(3ng), where g is the luminance value of the green component of the pixel, j = 0 is taken when 1 - |g - mg|/(3ng) is less than 0, and j = 1 is taken when 1 - |g - mg|/(3ng) is greater than 1;
    calculate a third parameter k of the pixel according to the formula k = 1 - |b - mb|/(3nb), where b is the luminance value of the blue component of the pixel, k = 0 is taken when 1 - |b - mb|/(3nb) is less than 0, and k = 1 is taken when 1 - |b - mb|/(3nb) is greater than 1; and
    calculate the first probability P1 of the pixel according to the formula P1 = i^(1-i) × j^(1-j) × k^(1-k).
  8. The apparatus for detecting a skin region according to claim 6, wherein the preset mean in the YUV color space includes preset chrominance means mu and mv, and the preset standard deviation includes preset chrominance standard deviations nu and nv; and
    the second calculation module is specifically configured to:
    calculate a fourth parameter p of the pixel according to the formula p = 1 - |u - mu|/(3nu), where u is the hue value of the pixel;
    calculate a fifth parameter q of the pixel according to the formula q = 1 - |v - mv|/(3nv), where v is the saturation value of the pixel; and
    calculate the second probability P2 of the pixel according to the formula P2 = p^(1-p) × q^(1-q).
  9. The apparatus for detecting a skin region according to claim 6, wherein the apparatus further comprises:
    a third calculation module, configured to: after the first determining module determines the probability that each pixel in the target image is skin, calculate a grayscale value of each pixel in the target image, where the grayscale value of the pixel is the product of the probability that the pixel is skin and 255; and
    a fourth calculation module, configured to perform calculation on the grayscale values of the pixels in the target image by using a contour-finding algorithm, and when a specific contour is obtained in the target image, set the grayscale value and the skin probability of each pixel within the specific contour to 0, the specific contour being a contour whose length and/or width is less than 5 pixels.
  10. The apparatus for detecting a skin region according to claim 6, wherein the apparatus further comprises:
    a fifth calculation module, configured to: after the first determining module determines the probability that each pixel in the target image is skin, calculate a grayscale value of each pixel in the target image, where the grayscale value of the pixel is the product of the probability that the pixel is skin and 255;
    a sixth calculation module, configured to perform Gaussian blur on the grayscale values of the pixels in the target image; and
    a second determining module, configured to determine the probability that each pixel in the target image is skin, where the probability that the pixel is skin is the grayscale value of the pixel after the Gaussian blur divided by 255.
  11. A computer storage medium for detecting a skin region, the computer storage medium storing a set of instructions that, when executed by one or more processors, performs the following steps:
    obtaining a target image;
    obtaining a preset mean and a preset standard deviation of skin in an RGB color space, and a preset mean and a preset standard deviation of skin in a YUV color space;
    calculating a first probability of each pixel in the target image according to the preset mean and the preset standard deviation in the RGB color space, the first probability being the probability that the pixel is skin in the RGB color space;
    calculating a second probability of each pixel in the target image according to the preset mean and the preset standard deviation in the YUV color space, the second probability being the probability that the pixel is skin in the YUV color space; and
    determining the probability that each pixel in the target image is skin, where the probability that the pixel is skin is the arithmetic mean of the first probability and the second probability of the pixel.
  12. The computer storage medium for detecting a skin region according to claim 11, wherein the preset mean of skin in the RGB color space includes a preset mean mr of a red component, a preset mean mg of a green component, and a preset mean mb of a blue component, and the preset standard deviation includes a preset standard deviation nr of the red component, a preset standard deviation ng of the green component, and a preset standard deviation nb of the blue component; and
    the calculating a first probability of each pixel in the target image according to the preset mean and the preset standard deviation in the RGB color space comprises:
    calculating a first parameter i of the pixel according to the formula i = 1 - |r - mr|/(3nr), where r is the luminance value of the red component of the pixel, i = 0 is taken when 1 - |r - mr|/(3nr) is less than 0, and i = 1 is taken when 1 - |r - mr|/(3nr) is greater than 1;
    calculating a second parameter j of the pixel according to the formula j = 1 - |g - mg|/(3ng), where g is the luminance value of the green component of the pixel, j = 0 is taken when 1 - |g - mg|/(3ng) is less than 0, and j = 1 is taken when 1 - |g - mg|/(3ng) is greater than 1;
    calculating a third parameter k of the pixel according to the formula k = 1 - |b - mb|/(3nb), where b is the luminance value of the blue component of the pixel, k = 0 is taken when 1 - |b - mb|/(3nb) is less than 0, and k = 1 is taken when 1 - |b - mb|/(3nb) is greater than 1; and
    calculating the first probability P1 of the pixel according to the formula P1 = i^(1-i) × j^(1-j) × k^(1-k).
  13. The computer storage medium for detecting a skin region according to claim 11, wherein the preset mean in the YUV color space includes preset chrominance means mu and mv, and the preset standard deviation includes preset chrominance standard deviations nu and nv; and
    the calculating a second probability of each pixel in the target image according to the preset mean and the preset standard deviation in the YUV color space comprises:
    calculating a fourth parameter p of the pixel according to the formula p = 1 - |u - mu|/(3nu), where u is the hue value of the pixel;
    calculating a fifth parameter q of the pixel according to the formula q = 1 - |v - mv|/(3nv), where v is the saturation value of the pixel; and
    calculating the second probability P2 of the pixel according to the formula P2 = p^(1-p) × q^(1-q).
  14. The computer storage medium for detecting a skin region according to claim 11, wherein after the determining the probability that each pixel in the target image is skin, the steps further comprise:
    calculating a grayscale value of each pixel in the target image, where the grayscale value of the pixel is the product of the probability that the pixel is skin and 255; and
    performing calculation on the grayscale values of the pixels in the target image by using a contour-finding algorithm, and when a specific contour is obtained in the target image, setting the grayscale value and the skin probability of each pixel within the specific contour to 0, the specific contour being a contour whose length and/or width is less than 5 pixels.
  15. The computer storage medium for detecting a skin region according to claim 11, wherein after the determining the probability that each pixel in the target image is skin, the steps further comprise:
    calculating a grayscale value of each pixel in the target image, where the grayscale value of the pixel is the product of the probability that the pixel is skin and 255;
    performing Gaussian blur on the grayscale values of the pixels in the target image; and
    determining the probability that each pixel in the target image is skin, where the probability that the pixel is skin is the grayscale value of the pixel after the Gaussian blur divided by 255.
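By way of illustration, the per-pixel computation recited in claims 1 to 3 can be sketched in Python with NumPy as follows. The function name skin_probability, the array layout, and the clamping of p and q to [0, 1] (stated explicitly only for i, j, and k in claim 2) are assumptions of this sketch, as are any concrete preset means and standard deviations a caller would supply, since no values are given in this section.

    import numpy as np

    def skin_probability(rgb, yuv, m_rgb, n_rgb, m_yuv, n_yuv):
        """Per-pixel skin probability of a target image.

        rgb: float array (H, W, 3) with the R, G, B luminance values of each pixel.
        yuv: float array (H, W, 2) with the U and V values of each pixel.
        m_rgb, n_rgb: preset means (mr, mg, mb) and standard deviations (nr, ng, nb)
                      of skin in the RGB color space.
        m_yuv, n_yuv: preset means (mu, mv) and standard deviations (nu, nv)
                      of skin in the YUV color space.
        """
        m_rgb, n_rgb = np.asarray(m_rgb, float), np.asarray(n_rgb, float)
        m_yuv, n_yuv = np.asarray(m_yuv, float), np.asarray(n_yuv, float)

        # Claim 2: i, j, k = 1 - |c - m|/(3n), clamped to [0, 1],
        # then P1 = i^(1-i) * j^(1-j) * k^(1-k).
        ijk = np.clip(1.0 - np.abs(rgb - m_rgb) / (3.0 * n_rgb), 0.0, 1.0)
        p1 = np.prod(ijk ** (1.0 - ijk), axis=-1)

        # Claim 3: p, q = 1 - |c - m|/(3n) (clamping assumed by analogy with claim 2),
        # then P2 = p^(1-p) * q^(1-q).
        pq = np.clip(1.0 - np.abs(yuv - m_yuv) / (3.0 * n_yuv), 0.0, 1.0)
        p2 = np.prod(pq ** (1.0 - pq), axis=-1)

        # Claim 1: the probability that a pixel is skin is the arithmetic mean of P1 and P2.
        return (p1 + p2) / 2.0

A caller would be expected to derive the U and V planes from the same target image before invoking the function; how that conversion is done is not prescribed by the claims.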
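Claim 4 does not name a particular contour-finding algorithm. One possible reading, sketched below under that assumption, uses OpenCV's findContours (4.x return signature); binarizing the grayscale map at zero and testing the bounding box for "length and/or width less than 5 pixels" are likewise assumptions of this sketch rather than requirements of the claim.

    import cv2
    import numpy as np

    def remove_small_skin_contours(prob):
        """Set the skin probability (and hence the grayscale value) to 0 for every
        pixel inside a contour whose bounding box is under 5 pixels long or wide."""
        gray = (prob * 255).astype(np.uint8)      # grayscale value = skin probability x 255
        binary = (gray > 0).astype(np.uint8)      # assumption: any non-zero pixel is a candidate
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        mask = np.zeros_like(gray)
        for contour in contours:
            _, _, w, h = cv2.boundingRect(contour)
            if w < 5 or h < 5:                    # "length and/or width less than 5 pixels"
                cv2.drawContours(mask, [contour], -1, 255, cv2.FILLED)
        out = prob.copy()
        out[mask > 0] = 0.0                       # probability and grayscale value set to 0
        return out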
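Claim 5 only specifies that the grayscale map (probability × 255) is Gaussian-blurred and then divided by 255; the kernel size in the sketch below, and the use of OpenCV's GaussianBlur, are assumptions made for illustration.

    import cv2
    import numpy as np

    def smooth_skin_probability(prob, kernel_size=5):
        """Gaussian-blur the grayscale map (probability x 255) and divide the
        blurred values by 255 to obtain the smoothed skin probability."""
        gray = (prob * 255).astype(np.uint8)
        blurred = cv2.GaussianBlur(gray, (kernel_size, kernel_size), 0)  # sigma derived from kernel size
        return blurred.astype(np.float32) / 255.0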
PCT/CN2016/084528 2015-10-26 2016-06-02 检测皮肤区域的方法和检测皮肤区域的装置 WO2017071219A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP16858663.4A EP3370204B1 (en) 2015-10-26 2016-06-02 Method for detecting skin region and device for detecting skin region
US15/705,102 US10489635B2 (en) 2015-10-26 2017-09-14 Method for detecting skin region and apparatus for detecting skin region
US16/666,075 US10783353B2 (en) 2015-10-26 2019-10-28 Method for detecting skin region and apparatus for detecting skin region

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510700287.3 2015-10-26
CN201510700287.3A CN106611429B (zh) 2015-10-26 2015-10-26 检测皮肤区域的方法和检测皮肤区域的装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/705,102 Continuation-In-Part US10489635B2 (en) 2015-10-26 2017-09-14 Method for detecting skin region and apparatus for detecting skin region

Publications (1)

Publication Number Publication Date
WO2017071219A1 true WO2017071219A1 (zh) 2017-05-04

Family

ID=58613443

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/084528 WO2017071219A1 (zh) 2015-10-26 2016-06-02 检测皮肤区域的方法和检测皮肤区域的装置

Country Status (4)

Country Link
US (2) US10489635B2 (zh)
EP (1) EP3370204B1 (zh)
CN (1) CN106611429B (zh)
WO (1) WO2017071219A1 (zh)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10491895B2 (en) * 2016-05-23 2019-11-26 Intel Corporation Fast and robust human skin tone region detection for improved video coding
US10628700B2 (en) 2016-05-23 2020-04-21 Intel Corporation Fast and robust face detection, region extraction, and tracking for improved video coding
US11669724B2 (en) 2018-05-17 2023-06-06 Raytheon Company Machine learning using informed pseudolabels
CN110827204B (zh) * 2018-08-14 2022-10-04 阿里巴巴集团控股有限公司 图像处理方法、装置及电子设备
CN109522839A (zh) * 2018-11-15 2019-03-26 北京达佳互联信息技术有限公司 一种人脸皮肤区域确定方法、装置、终端设备及存储介质
CN109815874A (zh) * 2019-01-17 2019-05-28 苏州科达科技股份有限公司 一种人员身份识别方法、装置、设备及可读存储介质
CN110443747B (zh) * 2019-07-30 2023-04-18 Oppo广东移动通信有限公司 图像处理方法、装置、终端及计算机可读存储介质
US11068747B2 (en) * 2019-09-27 2021-07-20 Raytheon Company Computer architecture for object detection using point-wise labels
CN113223039B (zh) * 2020-01-21 2023-04-07 海信集团有限公司 显示设备、服装图像提取方法和存储介质
US11676391B2 (en) 2020-04-16 2023-06-13 Raytheon Company Robust correlation of vehicle extents and locations when given noisy detections and limited field-of-view image frames
CN111627076B (zh) * 2020-04-28 2023-09-19 广州方硅信息技术有限公司 换脸方法、装置及电子设备
US11461993B2 (en) * 2021-01-05 2022-10-04 Applied Research Associates, Inc. System and method for determining the geographic location in an image
US11562184B2 (en) 2021-02-22 2023-01-24 Raytheon Company Image-based vehicle classification
CN113111710B (zh) * 2021-03-11 2023-08-18 广州大学 基于皮肤镜的毛发图像识别方法、装置和存储介质
CN113888543B (zh) * 2021-08-20 2024-03-19 北京达佳互联信息技术有限公司 肤色分割方法、装置、电子设备及存储介质
CN113947568B (zh) * 2021-09-26 2024-03-29 北京达佳互联信息技术有限公司 一种图像处理方法、装置、电子设备及存储介质
CN113947606B (zh) * 2021-09-26 2024-03-26 北京达佳互联信息技术有限公司 图像处理方法、装置、电子设备及存储介质
CN113989884B (zh) * 2021-10-21 2024-05-14 武汉博视电子有限公司 基于脸部肌肤图像紫外深层及浅层色斑的识别方法
CN115457381B (zh) * 2022-08-18 2023-09-05 广州从埔高速有限公司 一种高速公路违法用地检测方法、系统、装置及存储介质

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7609908B2 (en) * 2003-04-30 2009-10-27 Eastman Kodak Company Method for adjusting the brightness of a digital image utilizing belief values
DE60330471D1 (de) * 2003-12-09 2010-01-21 Mitsubishi Electric Corp Verfahren und Vorrichtung zum Trennen von Inhalten in Bildern
US7580563B1 (en) * 2005-09-02 2009-08-25 Adobe Systems Incorporated Detection of objects in an image using color analysis
KR101247147B1 (ko) * 2007-03-05 2013-03-29 디지털옵틱스 코포레이션 유럽 리미티드 디지털 영상 획득 장치에서의 얼굴 탐색 및 검출
US8031961B2 (en) * 2007-05-29 2011-10-04 Hewlett-Packard Development Company, L.P. Face and skin sensitive image enhancement
CN101251898B (zh) * 2008-03-25 2010-09-15 腾讯科技(深圳)有限公司 一种肤色检测方法及装置
US8385638B2 (en) * 2009-01-05 2013-02-26 Apple Inc. Detecting skin tone in images
US8358812B2 (en) * 2010-01-25 2013-01-22 Apple Inc. Image Preprocessing
US8638993B2 (en) * 2010-04-05 2014-01-28 Flashfoto, Inc. Segmenting human hairs and faces
EP2453383A1 (en) * 2010-11-12 2012-05-16 ST-Ericsson SA Facial features detection
CN102096823A (zh) * 2011-02-12 2011-06-15 厦门大学 基于高斯模型和最小均方差的人脸检测方法
US8705853B2 (en) * 2012-04-20 2014-04-22 Apple Inc. Detecting skin tone
CN103971344B (zh) * 2014-05-27 2016-09-07 广州商景网络科技有限公司 一种证件图像的肤色偏色校正方法及系统
CN104156915A (zh) * 2014-07-23 2014-11-19 小米科技有限责任公司 肤色调整方法和装置
US9390478B2 (en) * 2014-09-19 2016-07-12 Intel Corporation Real time skin smoothing image enhancement filter
CN104318558B (zh) * 2014-10-17 2017-06-23 浙江大学 复杂场景下基于多信息融合的手势分割方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101251890A (zh) * 2008-03-13 2008-08-27 西安交通大学 基于多色域选择性形态学处理的视频图像肤色检测方法
CN101344922A (zh) * 2008-08-27 2009-01-14 华为技术有限公司 一种人脸检测方法及其装置
CN101882223A (zh) * 2009-05-04 2010-11-10 青岛海信数字多媒体技术国家重点实验室有限公司 人体肤色的测评方法
CN102324020A (zh) * 2011-09-02 2012-01-18 北京新媒传信科技有限公司 人体肤色区域的识别方法和装置
CN103927719A (zh) * 2014-04-04 2014-07-16 北京金山网络科技有限公司 图片处理方法及装置
CN104732206A (zh) * 2015-03-12 2015-06-24 苏州阔地网络科技有限公司 一种人脸检测方法及装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3370204A4 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109215599A (zh) * 2018-10-22 2019-01-15 深圳市华星光电技术有限公司 一种改善有色人种肤色视角表现的8畴设计方法及系统

Also Published As

Publication number Publication date
CN106611429A (zh) 2017-05-03
EP3370204A1 (en) 2018-09-05
US10783353B2 (en) 2020-09-22
EP3370204A4 (en) 2019-07-10
US10489635B2 (en) 2019-11-26
US20200065561A1 (en) 2020-02-27
US20180018505A1 (en) 2018-01-18
EP3370204B1 (en) 2021-08-18
CN106611429B (zh) 2019-02-05

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16858663

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE