WO2017088365A1 - Skin color detection method and apparatus (一种肤色检测方法及装置) - Google Patents

Skin color detection method and apparatus (一种肤色检测方法及装置)

Info

Publication number
WO2017088365A1
WO2017088365A1 (international application PCT/CN2016/082540)
Authority
WO
WIPO (PCT)
Prior art keywords
skin
pixel
gaussian model
probability density
probability
Prior art date
Application number
PCT/CN2016/082540
Other languages
English (en)
French (fr)
Inventor
李艳杰
Original Assignee
乐视控股(北京)有限公司
乐视致新电子科技(天津)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 乐视控股(北京)有限公司 and 乐视致新电子科技(天津)有限公司
Priority to US 15/247,488 (published as US20170154238A1)
Publication of WO2017088365A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/143Segmentation; Edge detection involving probabilistic approaches, e.g. Markov random field [MRF] modelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46Colour picture communication systems
    • H04N1/56Processing of colour picture signals
    • H04N1/60Colour correction or control
    • H04N1/6016Conversion to subtractive colour signals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20076Probabilistic image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30088Skin; Dermal
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46Colour picture communication systems
    • H04N1/56Processing of colour picture signals
    • H04N1/60Colour correction or control
    • H04N1/62Retouching, i.e. modification of isolated colours only or in isolated picture areas only
    • H04N1/628Memory colours, e.g. skin or sky

Definitions

  • Embodiments of the present invention relate to the field of computer vision, and in particular, to a skin color detecting method and apparatus.
  • depending on whether the imaging process is involved, skin color detection methods are divided into two basic types: statistics-based methods and physics-based methods.
  • the statistics-based skin color detection method mainly builds a statistical skin color model to detect skin color and mainly includes two steps: color space transformation and skin color modeling; the physics-based method introduces the interaction between illumination and skin into skin color detection, and detects skin color by studying the skin color reflectance model and its spectral characteristics.
  • histogram-based skin color detection is the simplest, fastest and most effective of the skin color detection methods, but it requires a large number of samples to be collected before a good segmentation result is obtained.
  • the present invention uses the following technical solutions.
  • an embodiment of the present invention provides a skin color detecting method, including:
  • the pixel point is attributed to the skin region.
  • an embodiment of the present invention provides a skin color detecting device, including:
  • An image conversion module configured to read an RGB image, and convert the RGB image from an RGB color space to an r-g color space to obtain an image to be detected;
  • a probability calculation module configured to traverse and read each pixel in the image to be detected and to calculate, according to a pre-established mixed Gaussian model, a first probability density of the pixel under the skin mixed Gaussian model and a second probability density of the pixel under the non-skin mixed Gaussian model; and configured to calculate a posterior probability that the pixel belongs to the skin region according to the first probability density and the second probability density of the pixel;
  • the skin color region determining module is configured to attribute the pixel point to the skin region when determining that the posterior probability is greater than a preset posterior probability threshold.
  • the skin color detecting method and apparatus provided by the embodiments of the present invention limit, to some extent, the influence of illumination on skin color detection by converting the RGB image into an r-g image; at the same time, by establishing a skin mixed Gaussian model and a non-skin mixed Gaussian model and determining, for each pixel in the image to be detected, the posterior probability that it belongs to the skin region, the embodiments of the present invention also achieve a good skin color detection effect when the number of samples is small, thereby improving the efficiency of skin color detection.
  • FIG. 1 is a technical flowchart of Embodiment 1 of the present invention.
  • FIG. 2 is a technical flowchart of Embodiment 2 of the present invention.
  • FIG. 3 is a schematic structural diagram of a device according to Embodiment 3 of the present invention.
  • the main idea of the present invention is to convert the RGB image into an r-g image and, by establishing a skin mixed Gaussian model and a non-skin mixed Gaussian model, to determine for each pixel in the image to be detected the posterior probability that it belongs to a skin region.
  • the technical solutions proposed by the embodiments can be used in any technical field that requires skin detection or skin color segmentation, such as face detection, gesture recognition, intelligent pornographic image screening, and the like.
  • the embodiments do not exist separately, and may complement or combine with each other.
  • Embodiment 1 is an example of using the mixed Gaussian models for skin color detection, and Embodiment 2 is an example of the process of establishing the mixed Gaussian models; the combination of the two embodiments is a more detailed description of the embodiments of the present invention.
  • a skin color detecting method mainly includes the following steps:
  • Step 110 Read an RGB image, and convert the RGB image from an RGB color space to an r-g color space to obtain an image to be detected.
  • the RGB image is converted from the RGB color space to the r-g color space by using the following formula:
  • R is the red value of the pixel
  • G is the green value of the pixel
  • B is the blue value of the pixel
  • r, g, b are the color values corresponding to the pixel after conversion.
  • the RGB color space here refers to obtaining a wide variety of colors by varying the three color channels red (R), green (G) and blue (B) and superimposing them on each other.
  • RGB has 256 levels of brightness, expressed as numbers from 0, 1, 2... up to 255.
  • An RGB color value specifies the relative brightness of the three primary colors of red, green, and blue, producing a specific color for display, that is, any one color can be recorded and expressed by a set of RGB values.
  • the RGB value corresponding to a pixel is (149, 123, 98), and the color of this pixel is a superposition of different brightnesses of the three colors of RGB.
  • the RGB value corresponding to each pixel in the picture can be obtained directly using OpenCV (see the code in the description below); channels 0, 1 and 2 correspond to the brightness values of blue, green and red, respectively;
  • converting the color space from RGB to r-g is in fact a normalization of the RGB colors.
  • in this normalization process, when a pixel is affected by light or shadow and its R, G and B channel values change, the numerator and the denominator of the normalization formula change simultaneously, so the normalized value actually moves very little; this transformation removes the illumination information from the image and can therefore reduce the effect of lighting.
  • for example, the pixel value of pixel A at time T1 before normalization is RGB(30, 60, 90); at time T2, the values of the three RGB color channels have changed under the influence of illumination, and the pixel value of pixel A becomes RGB(60, 120, 180).
  • after normalization, the pixel value of pixel A at time T1 is rgb(1/6, 1/3, 1/2) and the pixel value of pixel A at time T2 is rgb(1/6, 1/3, 1/2); it can be seen that the normalized RGB values at times T1 and T2 do not change.
  • Step 120: traverse and read each pixel in the image to be detected and calculate, according to the pre-established mixed Gaussian models, a first probability density of the pixel under the skin mixed Gaussian model and a second probability density of the pixel under the non-skin mixed Gaussian model;
  • the mixed Gaussian model GMM, also known as MOG, is an extension of the single Gaussian model; it uses m (typically 3 to 10) Gaussian models to characterize the features of the individual pixels in the image.
  • x belongs to the d-dimensional Euclidean space
  • a is the mean vector of the single Gaussian model
  • S is the covariance matrix of the single Gaussian model
  • ()^T represents the transpose operation of the matrix
  • ()^(-1) represents the inverse operation of the matrix.
  • the formula of the mixed Gaussian model is the weighted sum of m single Gaussian models and is expressed by the following formula:
  • π_k is the weight of the k-th Gaussian model
  • m is the number of preset Gaussian models
  • p_k(x) is the k-th single Gaussian model.
  • x belongs to the d-dimensional Euclidean space
  • m is the number of preset Gaussian models
  • p_k(x) is the probability density of the k-th Gaussian model
  • a_k is the mean vector of the k-th Gaussian model
  • S_k is the covariance matrix of the k-th Gaussian model
  • π_k is the weight of the k-th Gaussian model;
  • a mixed Gaussian model is established for the skin pixels and for the non-skin pixels respectively; the formulas of the two models are the same, and the difference lies only in the parameters of the models, i.e. the mean vectors a_k and the covariance matrices S_k.
  • for each pixel in the image to be detected, the embodiment of the present invention calculates its first probability density under the skin mixed Gaussian model and its second probability density under the non-skin mixed Gaussian model, until all pixels have been traversed.
  • the traversal may proceed row by row and column by column, or a pixel may be selected at random and judged as to whether it is a skin-region pixel; if it is, the pixels within a neighborhood of a certain size around it are traversed first; the present invention is not limited in this respect.
  • the mean vectors of the skin mixed Gaussian model are a_k1
  • the covariance matrices are S_k1
  • the weights of the plurality of single Gaussian models correspond to π_k1, respectively
  • the mean vectors of the non-skin mixed Gaussian model are a_k2
  • the covariance matrices are S_k2
  • the weights corresponding to the multiple single Gaussian models are π_k2, respectively,
  • Step 130: calculate a posterior probability that the pixel belongs to a skin region according to the first probability density and the second probability density of the pixel;
  • the calculation formula of the posterior probability is as follows:
  • P is the value of the posterior probability
  • p_skin is the first probability density
  • p_non-skin is the second probability density
  • Step 140 When the posterior probability is determined to be greater than a preset posterior probability threshold, the pixel point is attributed to the skin region.
  • the embodiment of the present invention sets the posterior probability threshold to 0.5; that is, when the value of the posterior probability exceeds 0.5, the pixel corresponding to this posterior probability is determined to belong to the skin region.
  • the posterior probability threshold of 0.5 is an empirical value: a large number of experiments show that if the posterior probability that a pixel is a skin pixel exceeds 0.5, the pixel belongs to the skin region of the image.
  • the posterior probability threshold may also be dynamically adjusted according to different picture samples, and the present invention is not limited thereto.
  • converting the RGB image into an r-g image controls, to some extent, the influence of illumination on skin color detection; at the same time, it addresses the defect of the prior art that histogram-based skin color detection requires a large number of samples.
  • the embodiment of the invention combines the mixed Gaussian models and calculates the posterior probability that each pixel belongs to the skin region; it has a good skin color detection effect when the number of samples is small, thereby improving the efficiency of skin color detection.
  • FIG. 2 is a technical flowchart of Embodiment 2 of the present invention.
  • the establishment of a mixed Gaussian model mainly includes the following steps.
  • Step 210 Mark a skin pixel area and a non-skin pixel area of the RGB sample picture to obtain a skin pixel sample and a non-skin pixel sample;
  • the RGB sample picture is first marked, which may be done manually, to distinguish the skin regions and the non-skin regions in the picture, i.e. to obtain the skin pixel samples and the non-skin pixel samples. Classifying the samples in advance helps to improve the efficiency of the subsequent EM algorithm in computing the parameters of the mixed Gaussian models and how close the parameters are to the actual models.
  • Step 220 Convert the skin pixel sample and the non-skin pixel sample from an RGB color space to an r-g color space;
  • R is the red value of the pixel
  • G is the green value of the pixel
  • B is the blue value of the pixel
  • r, g, b are the color values corresponding to the pixel after conversion .
  • Step 230: use an expectation maximization algorithm to calculate the parameters of the skin pixel mixed Gaussian model and of the non-skin pixel mixed Gaussian model from the color-space-converted skin pixel samples and non-skin pixel samples, respectively.
  • the parameters include a_k, S_k and π_k.
  • the mixed Gaussian model is a superposition of multiple single Gaussian models.
  • the weight of each single Gaussian model is different, that is, the data in the mixed Gaussian model is generated from several single Gaussian models.
  • the number m of the single Gaussian model needs to be set in advance, and ⁇ k is the weight of each single Gaussian model.
  • the Expectation Maximization (EM) algorithm is an algorithm for finding a parameter maximum likelihood estimate or a maximum a posteriori estimate in a probabilistic model, where the probability model relies on an unobservable hidden variable.
  • the EM algorithm provides an efficient iterative procedure to calculate the maximum likelihood estimate for these data.
  • each iteration is divided into two steps, the Expectation step and the Maximization step, hence the name EM algorithm.
  • the EM algorithm is a very mature algorithm and the derivation process is complicated, which is not described in detail in the embodiments of the present invention.
  • Step 240 Establish a mixed Gaussian model according to the mixed Gaussian model formula.
  • from the marked skin pixel samples, combined with the EM algorithm, the mean vectors a_k1 of the skin mixed Gaussian model, the covariance matrices S_k1 and the weights π_k1 corresponding to the multiple single Gaussian models can be calculated and substituted into the mixed Gaussian model formula to obtain the skin mixed Gaussian model.
  • from the marked non-skin pixel samples, combined with the EM algorithm, the mean vectors a_k2 of the non-skin mixed Gaussian model, the covariance matrices S_k2 and the weights π_k2 corresponding to the plurality of single Gaussian models can be calculated to obtain the non-skin mixed Gaussian model.
  • when a new picture to be detected is read, each pixel of the picture is read after the color space transformation and substituted into the two models, and p_skin and p_non-skin of the pixel are calculated respectively.
  • by marking the skin regions and non-skin regions of a small number of sample pictures and using the EM algorithm to establish the mixed Gaussian models of skin pixels and non-skin pixels, compared with the histogram-based skin color detection in the prior art, a large number of training samples is not required, which saves the consumption of various resources and improves the efficiency of skin color detection.
  • a skin color detecting device mainly includes the following modules: an image converting module 310, a probability calculating module 320, and a skin color region determining module 330.
  • the image conversion module 310 is configured to read an RGB image, and convert the RGB image from an RGB color space to an r-g color space to obtain an image to be detected;
  • the probability calculation module 320 is connected to the image conversion module 310 and is configured to traverse and read each pixel in the image to be detected and calculate, according to a pre-established mixed Gaussian model, a first probability density of the pixel under the skin mixed Gaussian model and a second probability density of the pixel under the non-skin mixed Gaussian model; and to calculate, according to the first probability density and the second probability density of the pixel, the posterior probability that the pixel belongs to the skin region;
  • the skin color region determining module 330 is connected to the probability calculating module 320, and is configured to attribute the pixel point to the skin region when determining that the posterior probability is greater than a preset posterior probability threshold.
  • the image conversion module 310 is configured to convert the RGB image from the RGB color space to the r-g color space by using the following formula:
  • R is the red value of the pixel
  • G is the green value of the pixel
  • B is the blue value of the pixel
  • r, g, b are the color values corresponding to the pixel after conversion .
  • the probability calculation module 320 is configured to: calculate the posterior probability by using the following formula:
  • P is the value of the posterior probability
  • p_skin is the first probability density
  • p_non-skin is the second probability density
  • the probability calculation module 320 is further configured to calculate the first probability density and the second probability density by using the following formulas:
  • p(x; a_k, S_k, π_k) is the probability density of the mixed Gaussian model
  • x belongs to the d-dimensional Euclidean space
  • m is the number of preset Gaussian models
  • p_k(x) is the probability density of the k-th Gaussian model
  • a_k is the mean vector of the k-th Gaussian model
  • S_k is the covariance matrix of the k-th Gaussian model
  • π_k is the weight of the k-th Gaussian model
  • the apparatus further includes a model parameter calculation module 340, which is configured to:
  • the image conversion module 310 converts the image to be detected from the RGB color space to the r-g color space, which avoids the influence of illumination on skin color detection to some extent; at the same time, the probability calculation module 320 calculates, according to the pre-established mixed Gaussian models, the probabilities that each pixel in the image to be detected belongs to the skin region and to the non-skin region respectively, as well as the posterior probability, which makes skin color detection more efficient; a good skin color detection effect can be achieved without a large number of samples.
  • the device embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, i.e. they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement it without creative effort.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Probability & Statistics with Applications (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)
  • Color Image Communication Systems (AREA)

Abstract

A skin color detection method and apparatus. An RGB image is read and converted from the RGB color space to the r-g color space to obtain an image to be detected (110); each pixel in the image to be detected is read by traversal and, according to pre-established mixed Gaussian models, a first probability density of the pixel under a skin mixed Gaussian model and a second probability density of the pixel under a non-skin mixed Gaussian model are calculated (120); a posterior probability that the pixel belongs to a skin region is calculated according to the first probability density and the second probability density of the pixel (130); when the posterior probability is determined to be greater than a preset posterior probability threshold, the pixel is assigned to the skin region (140). Efficient skin color detection with a small number of samples is thereby achieved.

Description

Skin color detection method and apparatus
Cross-reference
This application claims the benefit of Chinese Patent Application No. 2015108444607, entitled "一种肤色检测方法及装置" (Skin color detection method and apparatus) and filed on November 26, 2015, which is incorporated herein by reference in its entirety.
Technical Field
Embodiments of the present invention relate to the field of computer vision, and in particular to a skin color detection method and apparatus.
Background
Skin color detection is receiving more and more attention in the various human-related machine vision systems. For example, in a gesture-based human-computer interaction system, the position of the hand must first be obtained from the image, and the most common current approach is to obtain the gesture information by detecting skin color. To segment the hand out of the image, the most commonly used segmentation method at present is skin-color-based segmentation.
Depending on whether the imaging process is involved, skin color detection methods are divided into two basic types: statistics-based methods and physics-based methods. The statistics-based skin color detection method mainly builds a statistical skin color model to detect skin color and mainly includes two steps: color space transformation and skin color modeling. The physics-based method introduces the interaction between illumination and skin into skin color detection, and detects skin color by studying the skin color reflectance model and its spectral characteristics.
Among the static modeling approaches of statistics-based skin color detection, histogram-based skin color detection is the simplest, fastest and most effective. However, it requires a large number of samples to be collected for the statistics before a good segmentation result can be obtained, and collecting the samples is time-consuming and labor-intensive.
Therefore, an efficient skin color detection method is urgently needed.
Summary
The object of the present invention is to provide a skin color detection method and apparatus that overcome the defect of the prior art that a large number of samples has to be collected for statistics before a good segmentation result is obtained, thereby achieving efficient skin color detection.
To achieve the above object, the present invention adopts the following technical solutions.
In a first aspect, an embodiment of the present invention provides a skin color detection method, including:
reading an RGB image, and converting the RGB image from the RGB color space to the r-g color space to obtain an image to be detected;
traversing and reading each pixel in the image to be detected, and calculating, according to pre-established mixed Gaussian models, a first probability density of the pixel under a skin mixed Gaussian model and a second probability density of the pixel under a non-skin mixed Gaussian model;
calculating, according to the first probability density and the second probability density of the pixel, a posterior probability that the pixel belongs to a skin region;
when the posterior probability is determined to be greater than a preset posterior probability threshold, assigning the pixel to the skin region.
In a second aspect, an embodiment of the present invention provides a skin color detection apparatus, including:
an image conversion module, configured to read an RGB image and convert the RGB image from the RGB color space to the r-g color space to obtain an image to be detected;
a probability calculation module, configured to traverse and read each pixel in the image to be detected and calculate, according to pre-established mixed Gaussian models, a first probability density of the pixel under a skin mixed Gaussian model and a second probability density of the pixel under a non-skin mixed Gaussian model, and configured to calculate, according to the first probability density and the second probability density of the pixel, a posterior probability that the pixel belongs to a skin region;
a skin color region determination module, configured to assign the pixel to the skin region when the posterior probability is determined to be greater than a preset posterior probability threshold.
Compared with the prior art, the technical effects obtainable by the present application include the following.
The skin color detection method and apparatus provided by the embodiments of the present invention limit, to a certain extent, the influence of illumination on skin color detection by converting the RGB image into an r-g image. At the same time, by establishing a skin mixed Gaussian model and a non-skin mixed Gaussian model and determining, for each pixel in the image to be detected, the posterior probability that it belongs to a skin region, the embodiments of the present invention achieve a good skin color detection effect even when the number of samples is small, thereby improving the efficiency of skin color detection.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the accompanying drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a technical flowchart of Embodiment 1 of the present invention;
FIG. 2 is a technical flowchart of Embodiment 2 of the present invention;
FIG. 3 is a schematic structural diagram of an apparatus according to Embodiment 3 of the present invention.
Detailed Description
The main idea of the present invention is to convert the RGB image into an r-g image and, at the same time, to establish a skin mixed Gaussian model and a non-skin mixed Gaussian model and determine, for each pixel in the image to be detected, the posterior probability that it belongs to a skin region.
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings of the embodiments. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The technical solutions proposed by the embodiments of the present invention can be used in any technical field that requires skin detection or skin color segmentation, such as face detection, gesture recognition, and intelligent pornographic image screening. It should also be noted that, in the embodiments of the present invention, the individual embodiments do not exist in isolation and may complement or be combined with each other; for example, Embodiment 1 is an example of using the mixed Gaussian models for skin color detection, Embodiment 2 is an example of the process of establishing the mixed Gaussian models, and the combination of the two embodiments gives a more detailed description of the embodiments of the present invention.
Embodiment 1
FIG. 1 is a technical flowchart of Embodiment 1 of the present invention. With reference to FIG. 1, a skin color detection method according to an embodiment of the present invention mainly includes the following steps.
Step 110: read an RGB image and convert the RGB image from the RGB color space to the r-g color space to obtain the image to be detected.
In the embodiment of the present invention, the RGB image is converted from the RGB color space to the r-g color space using the following formulas:
r = R / (R + G + B)
g = G / (R + G + B)
b = 1 - r - g
where R is the red value of the pixel, G is the green value of the pixel, and B is the blue value of the pixel; r, g and b are the color values of the pixel after conversion.
The RGB color space here refers to obtaining a wide variety of colors by varying the three color channels red (R), green (G) and blue (B) and superimposing them on each other. Normally each RGB channel has 256 brightness levels, represented by the numbers 0, 1, 2, ... up to 255. An RGB color value specifies the relative brightness of the three primary colors red, green and blue and produces a specific color for display; that is, any color can be recorded and expressed by a set of RGB values. For example, if the RGB value corresponding to a pixel is (149, 123, 98), the color of this pixel is the superposition of the three RGB colors at these different brightnesses.
In the embodiment of the present invention, the RGB value corresponding to each pixel in the picture can be obtained directly using OpenCV; the implementation code can look like this:
CvScalar p;                  // holds the channel values of one pixel
p = cvGet2D(ImageIn, j, i);  // read the pixel at row j, column i (OpenCV C API)
double a = p.val[0];         // channel 0: blue brightness
double b = p.val[1];         // channel 1: green brightness
double c = p.val[2];         // channel 2: red brightness
where i and j are the horizontal and vertical coordinates of the pixel in the image, and channels 0, 1 and 2 correspond to the brightness values of blue, green and red, respectively.
In the embodiment of the present invention, converting the color space from RGB to r-g is in fact a normalization of the RGB colors. In this normalization process, when the R, G and B channel values of a pixel change because the pixel is affected by light or shadow, the numerator and the denominator of the normalization formula change at the same time, so the resulting normalized value actually moves very little. This transformation removes the illumination information from the image and can therefore reduce the influence of illumination.
For example, before normalization the pixel value of pixel A at time T1 is RGB(30, 60, 90); at time T2, the values of the three RGB color channels have changed under the influence of illumination, and the pixel value of pixel A becomes RGB(60, 120, 180).
After conversion to the r-g space by the normalization formulas, the pixel value of pixel A at time T1 is rgb(1/6, 1/3, 1/2), and the pixel value of pixel A at time T2 is rgb(1/6, 1/3, 1/2). It can be seen that the normalized RGB values at times T1 and T2 have not changed.
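For illustration only, a minimal C++ sketch of this normalization might look as follows; the function name rgbToRg and the convention used for an all-black pixel are assumptions made for this example and are not part of the original disclosure.

#include <array>

// Convert one RGB pixel to its normalized r-g representation.
// Returns {r, g, b} with r = R/(R+G+B), g = G/(R+G+B), b = 1 - r - g.
// An all-black pixel (R + G + B == 0) is mapped to {0, 0, 1} purely as a convention for this sketch.
std::array<double, 3> rgbToRg(double R, double G, double B) {
    const double sum = R + G + B;
    if (sum == 0.0) {
        return {0.0, 0.0, 1.0};
    }
    const double r = R / sum;
    const double g = G / sum;
    return {r, g, 1.0 - r - g};
}

// Example from the text: rgbToRg(30, 60, 90) and rgbToRg(60, 120, 180)
// both yield approximately (1/6, 1/3, 1/2), illustrating the reduced influence of illumination.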
Step 120: traverse and read each pixel in the image to be detected and calculate, according to the pre-established mixed Gaussian models, the first probability density of the pixel under the skin mixed Gaussian model and the second probability density of the pixel under the non-skin mixed Gaussian model.
The Gaussian mixture model (GMM), also called MOG, is an extension of the single Gaussian model; it uses m (typically 3 to 10) Gaussian models to characterize the features of the individual pixels in the image.
The single Gaussian model is expressed by the following formula:
p(x) = 1 / ((2π)^(d/2) · |S|^(1/2)) · exp(-(1/2) · (x - a)^T · S^(-1) · (x - a))
where x belongs to the d-dimensional Euclidean space, a is the mean vector of the single Gaussian model, S is the covariance matrix of the single Gaussian model, ()^T denotes the matrix transpose operation, and ()^(-1) denotes the matrix inverse operation.
The formula of the mixed Gaussian model is the weighted sum of m single Gaussian models, expressed as follows:
p(x; a_k, S_k, π_k) = Σ_{k=1}^{m} π_k · p_k(x)
where π_k is the weight of the k-th Gaussian model, m is the preset number of Gaussian models, and p_k(x) is the k-th single Gaussian model, which is expressed as follows:
p_k(x) = 1 / ((2π)^(d/2) · |S_k|^(1/2)) · exp(-(1/2) · (x - a_k)^T · S_k^(-1) · (x - a_k))
As above, x belongs to the d-dimensional Euclidean space, m is the preset number of Gaussian models, p_k(x) is the probability density of the k-th Gaussian model, a_k is the mean vector of the k-th Gaussian model, S_k is the covariance matrix of the k-th Gaussian model, and π_k is the weight of the k-th Gaussian model.
It should be noted that the values actually computed by p(x; a_k, S_k, π_k) and p_k(x) represent the probability density of x under the corresponding model.
In the embodiment of the present invention, a mixed Gaussian model is established for the skin pixels and for the non-skin pixels respectively. The formulas of the two models are the same; the difference lies in the parameters of the models, i.e. the mean vectors a_k and the covariance matrices S_k.
For each pixel in the image to be detected, the embodiment of the present invention calculates its first probability density under the skin mixed Gaussian model and its second probability density under the non-skin mixed Gaussian model, until all pixels have been traversed.
In the embodiment of the present invention, the traversal may proceed row by row and column by column, or a pixel may be selected at random and judged as to whether it is a skin-region pixel; if it is, the pixels within a neighborhood of a certain size around it are traversed first. The present invention is not limited in this respect.
When the mean vectors of the skin mixed Gaussian model are a_k1, its covariance matrices are S_k1 and the weights of its single Gaussian models are π_k1, then
p_skin(x) = Σ_{k=1}^{m} π_k1 / ((2π)^(d/2) · |S_k1|^(1/2)) · exp(-(1/2) · (x - a_k1)^T · S_k1^(-1) · (x - a_k1))
When the mean vectors of the non-skin mixed Gaussian model are a_k2, its covariance matrices are S_k2 and the weights of its single Gaussian models are π_k2, then
p_non-skin(x) = Σ_{k=1}^{m} π_k2 / ((2π)^(d/2) · |S_k2|^(1/2)) · exp(-(1/2) · (x - a_k2)^T · S_k2^(-1) · (x - a_k2))
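As an illustration, a minimal C++ sketch for evaluating such a two-dimensional mixture density could look as follows; the type and function names are chosen for this example only, and the covariance matrices are assumed to be 2x2 and invertible.

#include <cmath>
#include <vector>

// One single Gaussian component over (r, g): mean a_k, covariance S_k, weight pi_k.
struct GaussianComponent {
    double mean[2];    // a_k
    double cov[2][2];  // S_k
    double weight;     // pi_k
};

// Probability density of a single 2-D Gaussian component at x = (x0, x1).
double componentDensity(const GaussianComponent& c, double x0, double x1) {
    const double det = c.cov[0][0] * c.cov[1][1] - c.cov[0][1] * c.cov[1][0];
    // Inverse of the 2x2 covariance matrix S_k.
    const double inv00 =  c.cov[1][1] / det;
    const double inv01 = -c.cov[0][1] / det;
    const double inv10 = -c.cov[1][0] / det;
    const double inv11 =  c.cov[0][0] / det;
    const double d0 = x0 - c.mean[0];
    const double d1 = x1 - c.mean[1];
    // Quadratic form (x - a_k)^T S_k^(-1) (x - a_k).
    const double q = d0 * (inv00 * d0 + inv01 * d1) + d1 * (inv10 * d0 + inv11 * d1);
    const double norm = 2.0 * M_PI * std::sqrt(det);  // (2*pi)^(d/2) * |S_k|^(1/2) for d = 2
    return std::exp(-0.5 * q) / norm;
}

// Mixture density p(x) = sum_k pi_k * p_k(x) over the m components of one model.
double mixtureDensity(const std::vector<GaussianComponent>& model, double x0, double x1) {
    double p = 0.0;
    for (const auto& c : model) {
        p += c.weight * componentDensity(c, x0, x1);
    }
    return p;
}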
Step 130: calculate, according to the first probability density and the second probability density of the pixel, the posterior probability that the pixel belongs to a skin region.
In the embodiment of the present invention, the posterior probability is calculated by the following formula:
P = p_skin / (p_skin + p_non-skin)
where P is the value of the posterior probability, p_skin is the first probability density, and p_non-skin is the second probability density.
Step 140: when the posterior probability is determined to be greater than the preset posterior probability threshold, assign the pixel to the skin region.
Preferably, the embodiment of the present invention sets the posterior probability threshold to 0.5; that is, when the value of the posterior probability exceeds 0.5, the pixel corresponding to this posterior probability is judged to belong to the skin region. The posterior probability threshold of 0.5 is an empirical value: a large number of experiments show that if the posterior probability that a pixel is a skin pixel exceeds 0.5, the pixel belongs to the skin region of the image. Of course, the posterior probability threshold may also be adjusted dynamically according to different picture samples, and the present invention is not limited in this respect.
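As a sketch only, steps 130 and 140 could be combined as follows, reusing the hypothetical mixtureDensity function and GaussianComponent type from the sketch above.

// Returns true if the pixel with chromaticity (r, g) is assigned to the skin region.
// skinModel and nonSkinModel hold the parameters of the skin and non-skin mixed Gaussian models.
bool isSkinPixel(const std::vector<GaussianComponent>& skinModel,
                 const std::vector<GaussianComponent>& nonSkinModel,
                 double r, double g, double threshold = 0.5) {
    const double pSkin    = mixtureDensity(skinModel, r, g);     // first probability density
    const double pNonSkin = mixtureDensity(nonSkinModel, r, g);  // second probability density
    const double denom = pSkin + pNonSkin;
    if (denom == 0.0) {
        return false;  // both densities vanish; this sketch treats such a pixel as non-skin
    }
    const double posterior = pSkin / denom;  // P = p_skin / (p_skin + p_non-skin)
    return posterior > threshold;            // step 140: compare with the preset threshold
}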
In this embodiment, converting the RGB image into an r-g image controls, to a certain extent, the influence of illumination on skin color detection. At the same time, addressing the defect of the prior art that histogram-based skin color detection requires a large number of samples, the embodiment of the present invention combines the mixed Gaussian models and calculates, for each pixel, the posterior probability that it belongs to a skin region; it therefore achieves a good skin color detection effect even when the number of samples is small and improves the efficiency of skin color detection.
Embodiment 2
FIG. 2 is a technical flowchart of Embodiment 2 of the present invention. With reference to FIG. 2, in the skin color detection method of the embodiment of the present invention, the establishment of the mixed Gaussian models mainly includes the following steps.
Step 210: mark the skin pixel regions and the non-skin pixel regions of RGB sample pictures to obtain skin pixel samples and non-skin pixel samples.
In the embodiment of the present invention, the RGB sample pictures are first marked, which may be done manually, to distinguish the skin regions and the non-skin regions in the pictures, i.e. to obtain the skin pixel samples and the non-skin pixel samples. Classifying the samples in advance helps to improve the efficiency of the subsequent EM algorithm in computing the parameters of the mixed Gaussian models and how close the computed parameters are to the actual models.
Step 220: convert the skin pixel samples and the non-skin pixel samples from the RGB color space to the r-g color space.
The conversion in this step is the same as that described in Embodiment 1, using the following formulas:
r = R / (R + G + B)
g = G / (R + G + B)
b = 1 - r - g
where R is the red value of the pixel, G is the green value of the pixel, and B is the blue value of the pixel; r, g and b are the color values of the pixel after conversion.
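Purely as an illustration of steps 210 and 220, labeled (r, g) samples could be gathered with OpenCV roughly as follows; the mask convention (non-zero = skin) and all names are assumptions made for this sketch, not part of the original disclosure.

#include <opencv2/opencv.hpp>

// Collect r-g samples from a BGR sample picture using a manually marked mask
// (mask pixel != 0 means "skin"). Each row of the output matrices is one (r, g) sample.
// skinSamples and nonSkinSamples are expected to start as empty matrices.
void collectSamples(const cv::Mat& bgrImage, const cv::Mat& mask,
                    cv::Mat& skinSamples, cv::Mat& nonSkinSamples) {
    for (int y = 0; y < bgrImage.rows; ++y) {
        for (int x = 0; x < bgrImage.cols; ++x) {
            const cv::Vec3b px = bgrImage.at<cv::Vec3b>(y, x);  // channels: blue, green, red
            const double sum = px[0] + px[1] + px[2];
            if (sum == 0.0) continue;                            // skip pure black pixels
            const double r = px[2] / sum;                        // r = R / (R + G + B)
            const double g = px[1] / sum;                        // g = G / (R + G + B)
            cv::Mat row = (cv::Mat_<double>(1, 2) << r, g);
            if (mask.at<uchar>(y, x) != 0)
                skinSamples.push_back(row);
            else
                nonSkinSamples.push_back(row);
        }
    }
}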
Step 230: use an expectation maximization algorithm to calculate the parameters of the skin pixel mixed Gaussian model and of the non-skin pixel mixed Gaussian model from the color-space-converted skin pixel samples and non-skin pixel samples respectively, where the parameters include a_k, S_k and π_k.
The mixed Gaussian model is a superposition of several single Gaussian models. In the mixed Gaussian model, the weights of the individual single Gaussian models are not the same; that is, the data of the mixed Gaussian model are generated from several single Gaussian models. The number m of single Gaussian models needs to be set in advance, and π_k is the weight of each single Gaussian model.
In statistical computing, the expectation maximization (EM) algorithm is an algorithm for finding maximum likelihood estimates or maximum a posteriori estimates of parameters in a probabilistic model, where the probabilistic model depends on unobservable latent variables. When part of the data is missing or cannot be observed, the EM algorithm provides an efficient iterative procedure for computing the maximum likelihood estimates of these data. Each iteration is divided into two steps, the Expectation step and the Maximization step, hence the name EM algorithm. The EM algorithm is a very mature algorithm whose derivation is complicated and is not described in detail in the embodiments of the present invention.
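The embodiments do not prescribe a particular EM implementation; one possible way to carry out step 230, assuming OpenCV's ml module is available and using the hypothetical sample matrices from the previous sketch, is shown below.

#include <opencv2/ml.hpp>

// Fit an m-component mixed Gaussian model to the (r, g) samples with the EM algorithm.
// The trained model exposes the weights, mean vectors and covariance matrices found by EM.
cv::Ptr<cv::ml::EM> fitMixedGaussian(const cv::Mat& samples, int m) {
    cv::Ptr<cv::ml::EM> em = cv::ml::EM::create();
    em->setClustersNumber(m);                                  // preset number of single Gaussian models
    em->setCovarianceMatrixType(cv::ml::EM::COV_MAT_GENERIC);  // full covariance matrices S_k
    em->trainEM(samples);                                      // alternating E and M steps
    return em;
}

// Usage sketch, one model per class as in step 240:
//   cv::Ptr<cv::ml::EM> skinModel    = fitMixedGaussian(skinSamples, 3);
//   cv::Ptr<cv::ml::EM> nonSkinModel = fitMixedGaussian(nonSkinSamples, 3);
// skinModel->getWeights(), skinModel->getMeans() and skinModel->getCovs(...) then
// return the weights pi_k1, mean vectors a_k1 and covariance matrices S_k1.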
Step 240: establish the mixed Gaussian models according to the mixed Gaussian model formula.
From the marked skin pixel samples, combined with the EM algorithm, the mean vectors a_k1 of the skin mixed Gaussian model, its covariance matrices S_k1 and the weights π_k1 corresponding to the individual single Gaussian models can be calculated; substituting them into the mixed Gaussian model formula gives the skin mixed Gaussian model:
p_skin(x) = Σ_{k=1}^{m} π_k1 / ((2π)^(d/2) · |S_k1|^(1/2)) · exp(-(1/2) · (x - a_k1)^T · S_k1^(-1) · (x - a_k1))
From the marked non-skin pixel samples, combined with the EM algorithm, the mean vectors a_k2 of the non-skin mixed Gaussian model, its covariance matrices S_k2 and the weights π_k2 corresponding to the individual single Gaussian models can be calculated, giving the non-skin mixed Gaussian model:
p_non-skin(x) = Σ_{k=1}^{m} π_k2 / ((2π)^(d/2) · |S_k2|^(1/2)) · exp(-(1/2) · (x - a_k2)^T · S_k2^(-1) · (x - a_k2))
When a new picture to be detected is read, each pixel of the picture to be detected is read after the color space transformation and substituted into the above two models, and p_skin and p_non-skin of the pixel are calculated respectively.
In this embodiment, by marking the skin regions and non-skin regions of a small number of sample pictures and using the EM algorithm to establish the mixed Gaussian models of skin pixels and non-skin pixels, a large number of training samples is not required, in contrast to the histogram-based skin color detection of the prior art; this saves the consumption of various resources and improves the efficiency of skin color detection.
Embodiment 3
FIG. 3 is a schematic structural diagram of a skin color detection apparatus according to an embodiment of the present invention. With reference to FIG. 3, a skin color detection apparatus mainly includes the following modules: an image conversion module 310, a probability calculation module 320 and a skin color region determination module 330.
The image conversion module 310 is configured to read an RGB image and convert the RGB image from the RGB color space to the r-g color space to obtain an image to be detected.
The probability calculation module 320 is connected to the image conversion module 310 and is configured to traverse and read each pixel in the image to be detected and calculate, according to the pre-established mixed Gaussian models, the first probability density of the pixel under the skin mixed Gaussian model and the second probability density of the pixel under the non-skin mixed Gaussian model, and to calculate, according to the first probability density and the second probability density of the pixel, the posterior probability that the pixel belongs to a skin region.
The skin color region determination module 330 is connected to the probability calculation module 320 and is configured to assign the pixel to the skin region when the posterior probability is determined to be greater than the preset posterior probability threshold.
Further, the image conversion module 310 is configured to convert the RGB image from the RGB color space to the r-g color space using the following formulas:
r = R / (R + G + B)
g = G / (R + G + B)
b = 1 - r - g
where R is the red value of the pixel, G is the green value of the pixel, and B is the blue value of the pixel; r, g and b are the color values of the pixel after conversion.
Further, the probability calculation module 320 is configured to calculate the posterior probability using the following formula:
P = p_skin / (p_skin + p_non-skin)
where P is the value of the posterior probability, p_skin is the first probability density, and p_non-skin is the second probability density.
Further, the probability calculation module 320 is also configured to calculate the first probability density and the second probability density using the following formulas:
p(x; a_k, S_k, π_k) = Σ_{k=1}^{m} π_k · p_k(x)
p_k(x) = 1 / ((2π)^(d/2) · |S_k|^(1/2)) · exp(-(1/2) · (x - a_k)^T · S_k^(-1) · (x - a_k))
where p(x; a_k, S_k, π_k) is the probability density of the mixed Gaussian model, x belongs to the d-dimensional Euclidean space, m is the preset number of Gaussian models, p_k(x) is the probability density of the k-th Gaussian model, a_k is the mean vector of the k-th Gaussian model, S_k is the covariance matrix of the k-th Gaussian model, and π_k is the weight of the k-th Gaussian model.
The apparatus further includes a model parameter calculation module 340, and the model parameter calculation module 340 is configured to:
mark the skin pixel regions and the non-skin pixel regions of RGB sample pictures to obtain skin pixel samples and non-skin pixel samples;
convert the skin pixel samples and the non-skin pixel samples from the RGB color space to the r-g color space;
use an expectation maximization algorithm to calculate the parameters of the skin pixel mixed Gaussian model and of the non-skin pixel mixed Gaussian model from the color-space-converted skin pixel samples and non-skin pixel samples respectively, where the parameters include a_k, S_k and π_k.
In this embodiment, the image conversion module 310 converts the picture to be detected from the RGB color space to the r-g color space, which avoids the influence of illumination on skin color detection to a certain extent. At the same time, the probability calculation module 320 calculates, according to the pre-established mixed Gaussian models, the probabilities that each pixel in the image to be detected belongs to the skin region and to the non-skin region respectively, as well as the posterior probability, which makes skin color detection more efficient; a good skin color detection effect can be achieved without a large number of samples.
The apparatus embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over several network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. A person of ordinary skill in the art can understand and implement it without creative effort.
From the description of the above implementations, a person skilled in the art can clearly understand that the implementations can be realized by means of software plus a necessary general-purpose hardware platform, and of course also by hardware. Based on this understanding, the above technical solutions, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product can be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk or an optical disc, and includes a number of instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the various embodiments or in certain parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention rather than to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

  1. A skin color detection method, characterized by comprising the following steps:
    reading an RGB image, and converting the RGB image from the RGB color space to the r-g color space to obtain an image to be detected;
    traversing and reading each pixel in the image to be detected, and calculating, according to pre-established mixed Gaussian models, a first probability density of the pixel under a skin mixed Gaussian model and a second probability density of the pixel under a non-skin mixed Gaussian model;
    calculating, according to the first probability density and the second probability density of the pixel, a posterior probability that the pixel belongs to a skin region;
    when the posterior probability is determined to be greater than a preset posterior probability threshold, assigning the pixel to the skin region.
  2. The method according to claim 1, characterized in that converting the RGB image from the RGB color space to the r-g color space further comprises:
    converting the RGB image from the RGB color space to the r-g color space using the following formulas:
    r = R / (R + G + B)
    g = G / (R + G + B)
    b = 1 - r - g
    wherein R is the red value of the pixel, G is the green value of the pixel, and B is the blue value of the pixel; r, g and b are the color values of the pixel after conversion.
  3. The method according to claim 1, characterized in that calculating, according to the first probability density and the second probability density of the pixel, the posterior probability that the pixel belongs to a skin region further comprises:
    calculating the posterior probability using the following formula:
    P = p_skin / (p_skin + p_non-skin)
    wherein P is the value of the posterior probability, p_skin is the first probability density, and p_non-skin is the second probability density.
  4. The method according to claim 3, characterized in that calculating the first probability density of the pixel under the skin mixed Gaussian model and the second probability density of the pixel under the non-skin mixed Gaussian model further uses the following formulas:
    p(x; a_k, S_k, π_k) = Σ_{k=1}^{m} π_k · p_k(x)
    p_k(x) = 1 / ((2π)^(d/2) · |S_k|^(1/2)) · exp(-(1/2) · (x - a_k)^T · S_k^(-1) · (x - a_k))
    wherein p(x; a_k, S_k, π_k) is the probability density of the mixed Gaussian model, x belongs to the d-dimensional Euclidean space, m is the preset number of Gaussian models, p_k(x) is the probability density of the k-th Gaussian model, a_k is the mean vector of the k-th Gaussian model, S_k is the covariance matrix of the k-th Gaussian model, and π_k is the weight of the k-th Gaussian model.
  5. The method according to claim 4, characterized in that, with p(x; a_k, S_k, π_k) being the probability density of the mixed Gaussian model, the method further comprises:
    marking skin pixel regions and non-skin pixel regions of RGB sample pictures to obtain skin pixel samples and non-skin pixel samples;
    converting the skin pixel samples and the non-skin pixel samples from the RGB color space to the r-g color space;
    using an expectation maximization algorithm to calculate parameters of the skin pixel mixed Gaussian model and of the non-skin pixel mixed Gaussian model from the color-space-converted skin pixel samples and non-skin pixel samples respectively, wherein the parameters comprise a_k, S_k and π_k.
  6. A skin color detection apparatus, characterized by comprising the following modules:
    an image conversion module, configured to read an RGB image and convert the RGB image from the RGB color space to the r-g color space to obtain an image to be detected;
    a probability calculation module, configured to traverse and read each pixel in the image to be detected and calculate, according to pre-established mixed Gaussian models, a first probability density of the pixel under a skin mixed Gaussian model and a second probability density of the pixel under a non-skin mixed Gaussian model, and configured to calculate, according to the first probability density and the second probability density of the pixel, a posterior probability that the pixel belongs to a skin region;
    a skin color region determination module, configured to assign the pixel to the skin region when the posterior probability is determined to be greater than a preset posterior probability threshold.
  7. The apparatus according to claim 6, characterized in that the image conversion module is further configured to:
    convert the RGB image from the RGB color space to the r-g color space using the following formulas:
    r = R / (R + G + B)
    g = G / (R + G + B)
    b = 1 - r - g
    wherein R is the red value of the pixel, G is the green value of the pixel, and B is the blue value of the pixel; r, g and b are the color values of the pixel after conversion.
  8. The apparatus according to claim 6, characterized in that the probability calculation module is further configured to:
    calculate the posterior probability using the following formula:
    P = p_skin / (p_skin + p_non-skin)
    wherein P is the value of the posterior probability, p_skin is the first probability density, and p_non-skin is the second probability density.
  9. The apparatus according to claim 8, wherein the probability calculation module is further configured to calculate the first probability density and the second probability density using the following formulas:
    p(x; a_k, S_k, π_k) = Σ_{k=1}^{m} π_k · p_k(x)
    p_k(x) = 1 / ((2π)^(d/2) · |S_k|^(1/2)) · exp(-(1/2) · (x - a_k)^T · S_k^(-1) · (x - a_k))
    wherein p(x; a_k, S_k, π_k) is the probability density of the mixed Gaussian model, x belongs to the d-dimensional Euclidean space, m is the preset number of Gaussian models, p_k(x) is the probability density of the k-th Gaussian model, a_k is the mean vector of the k-th Gaussian model, S_k is the covariance matrix of the k-th Gaussian model, and π_k is the weight of the k-th Gaussian model.
  10. The apparatus according to claim 9, characterized in that the apparatus further comprises a model parameter calculation module, and the model parameter calculation module is configured to:
    mark skin pixel regions and non-skin pixel regions of RGB sample pictures to obtain skin pixel samples and non-skin pixel samples;
    convert the skin pixel samples and the non-skin pixel samples from the RGB color space to the r-g color space;
    use an expectation maximization algorithm to calculate parameters of the skin pixel mixed Gaussian model and of the non-skin pixel mixed Gaussian model from the color-space-converted skin pixel samples and non-skin pixel samples respectively, wherein the parameters comprise a_k, S_k and π_k.
PCT/CN2016/082540 2015-11-26 2016-05-18 一种肤色检测方法及装置 WO2017088365A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/247,488 US20170154238A1 (en) 2015-11-26 2016-08-25 Method and electronic device for skin color detection

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510844460.7A CN105678813A (zh) 2015-11-26 2015-11-26 一种肤色检测方法及装置
CN201510844460.7 2015-11-26

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/247,488 Continuation US20170154238A1 (en) 2015-11-26 2016-08-25 Method and electronic device for skin color detection

Publications (1)

Publication Number Publication Date
WO2017088365A1 true WO2017088365A1 (zh) 2017-06-01

Family

ID=56946979

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/082540 WO2017088365A1 (zh) 2015-11-26 2016-05-18 一种肤色检测方法及装置

Country Status (3)

Country Link
US (1) US20170154238A1 (zh)
CN (1) CN105678813A (zh)
WO (1) WO2017088365A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109903266A (zh) * 2019-01-21 2019-06-18 深圳市华成工业控制有限公司 一种基于样本窗的双核密度估计实时背景建模方法及装置
CN110310268A (zh) * 2019-06-26 2019-10-08 深圳市同为数码科技股份有限公司 基于白平衡统计分区信息的肤色检测方法及系统
CN110619648A (zh) * 2019-09-19 2019-12-27 四川长虹电器股份有限公司 一种基于rgb变化趋势划分图像区域的方法
CN112837259A (zh) * 2019-11-22 2021-05-25 福建师范大学 基于特征分割的皮肤色素病变治疗效果图像处理方法
CN113034467A (zh) * 2021-03-23 2021-06-25 福建师范大学 一种基于灰度分段及Lab颜色聚类的鲜红斑痣色卡生成方法
CN116030356A (zh) * 2023-03-31 2023-04-28 山东省土地发展集团有限公司 一种矿山生态修复的环境评估方法

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016165060A1 (en) * 2015-04-14 2016-10-20 Intel Corporation Skin detection based on online discriminative modeling
CN107194363B (zh) * 2017-05-31 2020-02-04 Oppo广东移动通信有限公司 图像饱和度处理方法、装置、存储介质及计算机设备
CN107633252B (zh) * 2017-09-19 2020-04-21 广州市百果园信息技术有限公司 肤色检测方法、装置及存储介质
CN108830184B (zh) * 2018-05-28 2021-04-16 厦门美图之家科技有限公司 黑眼圈识别方法及装置
CN110163805B (zh) * 2018-06-05 2022-12-20 腾讯科技(深圳)有限公司 一种图像处理方法、装置和存储介质
CN109325946B (zh) * 2018-09-14 2021-08-24 北京石油化工学院 一种危险化学品堆垛监测方法和系统
CN110009588B (zh) * 2019-04-09 2022-12-27 成都品果科技有限公司 一种人像图像色彩增强方法及装置
US11532400B2 (en) * 2019-12-06 2022-12-20 X Development Llc Hyperspectral scanning to determine skin health
CN111325728B (zh) * 2020-02-19 2023-05-30 南方科技大学 产品缺陷检测方法、装置、设备及存储介质
CN111784814A (zh) * 2020-07-16 2020-10-16 网易(杭州)网络有限公司 一种虚拟角色皮肤调整方法和装置
CN112907457A (zh) * 2021-01-19 2021-06-04 Tcl华星光电技术有限公司 图像处理方法、图像处理装置及计算机设备
CN113888543B (zh) * 2021-08-20 2024-03-19 北京达佳互联信息技术有限公司 肤色分割方法、装置、电子设备及存储介质
CN113656627B (zh) * 2021-08-20 2024-04-19 北京达佳互联信息技术有限公司 肤色分割方法、装置、电子设备及存储介质
CN115393657B (zh) * 2022-10-26 2023-01-31 金成技术股份有限公司 基于图像处理的金属管材生产异常识别方法
CN117392732B (zh) * 2023-12-11 2024-03-22 深圳市宗匠科技有限公司 肤色检测方法、装置、计算机设备和存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101251898A (zh) * 2008-03-25 2008-08-27 腾讯科技(深圳)有限公司 一种肤色检测方法及装置
US20100021056A1 (en) * 2008-07-28 2010-01-28 Fujifilm Corporation Skin color model generation device and method, and skin color detection device and method
CN101923652A (zh) * 2010-07-23 2010-12-22 华中师范大学 一种基于肤色和特征部位联合检测的色情图片识别方法
CN102236786A (zh) * 2011-07-04 2011-11-09 北京交通大学 一种光照自适应的人体肤色检测方法
CN102521607A (zh) * 2011-11-30 2012-06-27 西安交通大学 高斯框架下近似最优肤色检测方法

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011152841A1 (en) * 2010-06-01 2011-12-08 Hewlett-Packard Development Company, L.P. Replacement of a person or object in an image
CN102968623B (zh) * 2012-12-07 2015-12-23 上海电机学院 肤色检测系统及方法
US9864901B2 (en) * 2015-09-15 2018-01-09 Google Llc Feature detection and masking in images based on color distributions

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101251898A (zh) * 2008-03-25 2008-08-27 腾讯科技(深圳)有限公司 一种肤色检测方法及装置
US20100021056A1 (en) * 2008-07-28 2010-01-28 Fujifilm Corporation Skin color model generation device and method, and skin color detection device and method
CN101923652A (zh) * 2010-07-23 2010-12-22 华中师范大学 一种基于肤色和特征部位联合检测的色情图片识别方法
CN102236786A (zh) * 2011-07-04 2011-11-09 北京交通大学 一种光照自适应的人体肤色检测方法
CN102521607A (zh) * 2011-11-30 2012-06-27 西安交通大学 高斯框架下近似最优肤色检测方法

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109903266A (zh) * 2019-01-21 2019-06-18 深圳市华成工业控制有限公司 一种基于样本窗的双核密度估计实时背景建模方法及装置
CN109903266B (zh) * 2019-01-21 2022-10-28 深圳市华成工业控制股份有限公司 一种基于样本窗的双核密度估计实时背景建模方法及装置
CN110310268A (zh) * 2019-06-26 2019-10-08 深圳市同为数码科技股份有限公司 基于白平衡统计分区信息的肤色检测方法及系统
CN110619648A (zh) * 2019-09-19 2019-12-27 四川长虹电器股份有限公司 一种基于rgb变化趋势划分图像区域的方法
CN110619648B (zh) * 2019-09-19 2022-03-15 四川长虹电器股份有限公司 一种基于rgb变化趋势划分图像区域的方法
CN112837259A (zh) * 2019-11-22 2021-05-25 福建师范大学 基于特征分割的皮肤色素病变治疗效果图像处理方法
CN112837259B (zh) * 2019-11-22 2023-07-07 福建师范大学 基于特征分割的皮肤色素病变治疗效果图像处理方法
CN113034467A (zh) * 2021-03-23 2021-06-25 福建师范大学 一种基于灰度分段及Lab颜色聚类的鲜红斑痣色卡生成方法
CN113034467B (zh) * 2021-03-23 2023-07-14 福建师范大学 一种基于灰度分段及Lab颜色聚类的鲜红斑痣色卡生成方法
CN116030356A (zh) * 2023-03-31 2023-04-28 山东省土地发展集团有限公司 一种矿山生态修复的环境评估方法

Also Published As

Publication number Publication date
CN105678813A (zh) 2016-06-15
US20170154238A1 (en) 2017-06-01

Similar Documents

Publication Publication Date Title
WO2017088365A1 (zh) 一种肤色检测方法及装置
WO2017092431A1 (zh) 基于肤色的人手检测方法及装置
WO2019223069A1 (zh) 基于直方图的虹膜图像增强方法、装置、设备及存储介质
CN109446982B (zh) 一种基于ar眼镜的电力屏柜压板状态识别方法及系统
CN109859171A (zh) 一种基于计算机视觉和深度学习的楼面缺陷自动检测方法
CN110276264B (zh) 一种基于前景分割图的人群密度估计方法
CN110210360B (zh) 一种基于视频图像目标识别的跳绳计数方法
CN109741318A (zh) 基于有效感受野的单阶段多尺度特定目标的实时检测方法
CN105631455A (zh) 一种图像主体提取方法及系统
CN105493141B (zh) 非结构化道路边界检测
CN106326823B (zh) 一种获取图片中头像的方法和系统
JP2003030667A (ja) イメージ内で目を自動的に位置決めする方法
CN112215795B (zh) 一种基于深度学习的服务器部件智能检测方法
CN104182721A (zh) 提升人脸识别率的图像处理系统及图像处理方法
CN109635634A (zh) 一种基于随机线性插值的行人再识别数据增强方法
WO2019223068A1 (zh) 虹膜图像局部增强方法、装置、设备及存储介质
WO2021057395A1 (zh) 一种鞋跟型号识别方法、装置及存储介质
CN106228157A (zh) 基于图像识别技术的彩色图像文字段落分割与识别方法
CN109948450A (zh) 一种基于图像的用户行为检测方法、装置和存储介质
WO2023045183A1 (zh) 图像处理
CN109448307A (zh) 一种火源目标的识别方法和装置
TW202011267A (zh) 用於對車輛損傷影像進行損傷分割的方法及裝置
WO2022135574A1 (zh) 肤色检测方法、装置、移动终端和存储介质
CN106650638A (zh) 一种遗留物检测方法
CN115908774B (zh) 一种基于机器视觉的变形物资的品质检测方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16867594

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16867594

Country of ref document: EP

Kind code of ref document: A1