CN106529432A - Hand area segmentation method deeply integrating significance detection and prior knowledge - Google Patents
Hand area segmentation method deeply integrating significance detection and prior knowledge
- Publication number
- CN106529432A CN106529432A CN201610937434.3A CN201610937434A CN106529432A CN 106529432 A CN106529432 A CN 106529432A CN 201610937434 A CN201610937434 A CN 201610937434A CN 106529432 A CN106529432 A CN 106529432A
- Authority
- CN
- China
- Prior art keywords
- region
- color
- pixel
- hand
- skin
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/107—Static hand or arm
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Human Computer Interaction (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a hand region segmentation method that deeply fuses saliency detection with prior knowledge. The method combines the saliency of the hand region at the pixel level with its saliency at the region level, so that the overall hand region detection algorithm achieves higher robustness and accuracy. The method comprises the following steps: a Bayesian framework is introduced to obtain, for each pixel, the confidence that it belongs to the hand region; combined with related techniques such as threshold segmentation, a highly accurate hand region segmentation result is finally obtained. The invention overcomes the shortcoming of traditional hand region methods, which can only be applied to relatively simple backgrounds free of near-skin-color interference; even under various disturbances such as non-uniform illumination, near-skin-color backgrounds and face noise, the method can still accurately obtain a segmented image of the hand region, and therefore has broad application prospects.
Description
Technical field
The present invention relates to a hand region segmentation method that deeply fuses saliency detection with prior knowledge, and belongs to the technical fields of computer vision, image processing and pattern recognition.
Background technology
Vision-based gesture recognition uses cameras of various kinds to continuously capture the shape, displacement, etc. of the hand, forms a sequence of frames carrying this information, and then converts them into corresponding instructions used to control certain operations. The technique has found wide application in scenarios such as human-computer interaction, robot control and virtual reality. A typical gesture recognition pipeline involves hand region segmentation, hand shape feature extraction, hand tracking and gesture classification. Among these, hand region segmentation excludes the interference of other elements in the picture, accurately extracts the hand region of the person, and preserves a clear hand contour. As the initial step of gesture recognition, the quality of hand region segmentation directly determines the recognition accuracy of the whole system.
Traditional hand region segmentation techniques mainly include the following. 1. Skin color detection: a skin color model is built in advance, such as the YCrCb skin color model proposed by Hsu R. L. in "Face detection in color images", and each pixel in the image is classified with this model; the largest near-skin-color region is then selected as the hand region. Under complex backgrounds, however, numerous near-skin-color objects interfere, and segmentation based on this method alone has poor robustness. 2. Template matching: a large number of hand shape samples are collected to build a hand shape database, and the image under test is then searched exhaustively or randomly for matching templates to locate the hand region. This method is costly, slow and inaccurate, and has gradually been abandoned. 3. Background subtraction: the background is modeled and subtracted from the original image to obtain the foreground target. This class of methods requires an accurate estimate of the background and complex post-processing to recover the hand region, and is difficult to apply when the scene changes. 4. Edge detection: edges are detected in the image and those not belonging to the hand are excluded by certain technical means, yielding an estimate of the hand region. This works reasonably well on uniform or simple backgrounds but is difficult to apply on complex ones. 5. Added constraints: the background color is restricted, special data gloves are worn, and similar measures are taken to exclude complex-background and especially near-skin-color interference. Although effective, such constraints greatly limit the application scope of gesture recognition and lack practicality. In short, all of the above conventional methods have defects in accuracy and robustness and are difficult to use directly in real scenes.
In recent years many new hand region segmentation methods have emerged, mostly combining traditional hand region segmentation with other image processing techniques (motion detection, threshold segmentation, etc.). For example, in "Research on a dynamic gesture trajectory recognition system based on monocular vision", Zou Jiehua combines moving object detection, skin color detection and texture detection to detect the hand region. Since Laurent Itti first proposed an image saliency detection algorithm in "A Model of Saliency-Based Visual Attention for Rapid Scene Analysis", a large number of saliency detection methods have appeared, and in view of the unique advantages of saliency detection for foreground object extraction, some works have begun to introduce image saliency detection as an auxiliary means of hand region detection. In "Research on gesture interaction technology for home service robots", Yang Wenji obtains a saliency map of the image using multi-scale color contrast and texture contrast, fuses the saliency map with a skin color probability map, an objectness measure and other high-level priors, and then applies threshold segmentation to obtain the final hand region detection result. This method achieves high segmentation accuracy against relatively simple backgrounds, but because it relies too heavily on region-level contrast, its results on complex backgrounds remain poor. In "Saliency-guided improvement for hand posture detection and recognition", Chuang Yuelong proposes a saliency detection method that uses no prior information at all, mainly for coarse hand localization, and then fuses the resulting saliency map with a skin color probability map to estimate the hand region. This method can exclude the interference of large near-skin-color backgrounds, but it cannot eliminate the face, which is both highly salient and near skin color. In short, compared with conventional methods, the existing schemes that introduce saliency detection into hand region segmentation improve accuracy and overcome part of the background interference, but because most of them carry out saliency detection and prior-based detection (e.g. skin color) separately and only fuse the results at the end, the robustness of these algorithms is poor; they cannot fully overcome the interference of faces and other complex backgrounds and still cannot truly be used in real scenes.
In summary, traditional hand region segmentation methods rely on a single technique, depend excessively on skin color detection, and can mostly be used only for segmentation against simple backgrounds; once the image to be segmented contains disturbing factors such as complex texture, near-skin-color regions, changing backgrounds or uneven illumination, almost none of them can achieve accurate hand region segmentation. Existing hand region detection methods fail to make full use of prior knowledge and show major defects in overcoming complex-background interference, particularly near-skin-color background interference.
The content of the invention
In view of the deficiencies of the prior art, the invention provides a hand region segmentation method that deeply fuses saliency detection with prior knowledge.
The present invention successfully eliminates the interference of various complex backgrounds, in particular near-skin-color backgrounds, and finally obtains an accurate hand region.
The technical scheme of the present invention is as follows.
A hand region segmentation method deeply fusing saliency detection and prior knowledge, the concrete steps of which include:
(1) dividing the original image into N regions, where 100 ≤ N ≤ 400;
(2) realizing a preliminary detection of the hand region by a detection method that fuses color-spread-based saliency detection with the skin color prior, specifically including:
a. in RGB color space, quantizing each channel to t different values, reducing the total number of colors to at most t³;
b. calculating the saliency of each color;
c. traversing all colors to form a color saliency look-up table, which contains every quantized color and its corresponding saliency;
d. normalizing the saliency of each color;
e. traversing each pixel of the image processed in step a and assigning it a saliency value according to the color saliency look-up table, obtaining the hand region saliency map image1; the higher a pixel's saliency, the more likely the pixel belongs to the hand region.
Based on the prior knowledge that "background colors generally spread widely and evenly over the image, whereas the foreground colors of the hand region are relatively concentrated" and that "pixels closer to skin color are more likely to belong to the hand region", a new hand region detection method is defined that deeply fuses color-spread-based saliency detection with the skin color prior, realizing the preliminary detection of the hand region. The method incorporates prior skin color information while exploiting the spatial spread of image colors, performs hand saliency detection at the pixel level, and effectively eliminates the interference of non-skin-color backgrounds and of small near-skin-color background patches.
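Steps a-e can be sketched as follows. This is only an illustrative implementation: the patent's Oconaire histogram model and its exact spread measure E(i) are not reproduced here, so a distance-to-a-reference-skin-tone probability and the standard deviation of each color's pixel coordinates stand in for them (both are assumptions of this sketch).

```python
import numpy as np

def color_saliency_map(img, t=8, sigma_e=2.0):
    """Pixel-level hand saliency (steps a-e), a minimal sketch.

    The skin-probability stand-in and the spread measure below are
    assumptions; the patent uses the Oconaire histogram model and a
    region-based spread function E(i).
    """
    h, w, _ = img.shape
    # a. quantize each RGB channel to t levels -> at most t^3 colors
    q = img.astype(np.int32) * t // 256
    idx = q[..., 0] * t * t + q[..., 1] * t + q[..., 2]

    # stand-in skin probability per quantized color: distance to a
    # hypothetical reference skin tone in RGB (assumed, not from the patent)
    levels = (np.arange(t) * 256 // t + 128 // t).astype(np.float64)
    colors = np.stack(np.meshgrid(levels, levels, levels, indexing="ij"),
                      -1).reshape(-1, 3)
    ref = np.array([224.0, 180.0, 150.0])
    p_skin = np.exp(-np.linalg.norm(colors - ref, axis=1) / 80.0)

    ys, xs = np.mgrid[0:h, 0:w]
    se = np.zeros(t ** 3)
    for i in np.unique(idx):                  # b. saliency per color
        m = idx == i
        # spread proxy: spatial std-dev of this color's pixels
        e = ys[m].std() + xs[m].std()
        se[i] = p_skin[i] * np.exp(-e / (sigma_e * max(h, w)))
    se /= se.max() + 1e-12                    # d. normalize
    return se[idx]                            # e. per-pixel lookup -> image1
```

A concentrated near-skin-color patch gets a high value, while a widely spread background color is suppressed, which is the behavior formula (I) encodes.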
(3) realizing a preliminary detection of the hand region by a detection method based on skin color probability distance and region contrast, specifically including:
f. for the N regions obtained by the segmentation in step (1), calculating the saliency of each region on the image processed in step d;
g. traversing all regions to form a region saliency look-up table, which contains every region and its corresponding saliency;
h. normalizing the saliency of each region;
i. assigning a saliency value to each pixel of the original image according to the region saliency look-up table, obtaining the hand region saliency map image2; the higher a pixel's saliency, the more likely the pixel belongs to the hand region.
Based on the prior knowledge that "pixels closer to skin color are more likely to belong to the hand region" and that "regions which are spatially concentrated in the image and closer to skin color than their surroundings are more likely to belong to the hand region", a hand region detection method based on skin color probability distance and region contrast is defined: the contrast between regions is defined by the difference of their skin color probabilities, which enhances the saliency of near-skin-color, highly concentrated regions and realizes the preliminary detection of the hand region. The method incorporates prior skin color information, performs hand saliency detection at the region level, effectively eliminates the interference of large near-skin-color background areas, and at the same time strengthens the edges of the hand region, providing good conditions for the subsequent region segmentation.
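A simplified sketch of the region-level step (f-i) follows. The exact forms of formulas (III) and (IV) are reconstructed here under assumptions: region contrast D_c is taken as the difference of mean skin probability between regions, weighted by region size and attenuated with spatial distance, and the extra weighting by the region's own skin probability is an added assumption reflecting the skin prior.

```python
import numpy as np

def region_saliency(labels, p_skin, sigma_c=2.0):
    """Region-level saliency (steps f-i), a sketch under assumptions.

    `labels` is an integer region map (e.g. from superpixel segmentation);
    `p_skin` gives each pixel's skin probability in [0, 1].
    """
    n = labels.max() + 1
    ys, xs = np.mgrid[0:labels.shape[0], 0:labels.shape[1]]
    size = np.bincount(labels.ravel(), minlength=n).astype(float)
    cy = np.bincount(labels.ravel(), ys.ravel(), n) / size   # centroids
    cx = np.bincount(labels.ravel(), xs.ravel(), n) / size
    ps = np.bincount(labels.ravel(), p_skin.ravel(), n) / size

    diag = np.hypot(*labels.shape)
    sal = np.zeros(n)
    for k in range(n):
        d_s = np.hypot(cy - cy[k], cx - cx[k]) / diag  # spatial distance
        d_c = np.abs(ps - ps[k])                       # skin-prob contrast
        w = size * np.exp(-d_s / sigma_c)              # nearby regions count more
        w[k] = 0.0
        # near-skin regions that differ strongly from their surroundings
        sal[k] = ps[k] * np.sum(w * d_c)
    sal /= sal.max() + 1e-12                           # h. normalize
    return sal[labels]                                 # i. per-pixel map -> image2
```

Because the contrast is computed on skin probabilities rather than raw colors, a compact skin-colored region surrounded by non-skin background receives the highest saliency.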
(4) fusing the hand region saliency map image1 obtained in step (2) with the hand region saliency map image2 obtained in step (3), obtaining the hand region confidence map image3, from which complex background interference has been excluded;
(5) according to the obtained hand region confidence map image3, calculating the prior probability and the observation likelihood probability of each pixel and, by Bayes' formula, further calculating the posterior probability that the pixel belongs to the hand region, obtaining the final accurate hand region confidence map image4.
Fusing the two hand region saliency maps above yields a hand region confidence map from which complex background interference has been excluded. To further improve segmentation accuracy, the method follows Bayes' principle and uses the information provided by this confidence map to perform a more accurate pixel-level detection within a Bayesian framework. Hand region detection is treated as a Bayesian inference problem: the confidence map obtained by the two detection methods above is used to calculate each pixel's prior probability and observation likelihood probability, after which Bayes' formula gives the posterior probability that the pixel belongs to the hand region, producing the final accurate hand region confidence map.
Preferably, in step b, for any color i, i ∈ {1, 2, ..., t³}, its saliency Se(i) is obtained from formulas (I) and (II):

Se(i) = Pskin(i) · exp(−E(i)/σe)   (I)

E(i) = Σ_{k=1..N} Σ_{j≠k} p_ik · p_ij · D_s(R_k, R_j)   (II)

In formulas (I) and (II), the parameter σe controls the influence of the spatial spread of color i on its saliency; the larger σe is, the smaller the influence, and its value range is (0, 5.0].
Pskin(i) is the skin color probability of color i: when Lskin(i) < 12, Pskin(i) = 0; otherwise Pskin(i) = Lskin(i)/Lmax. Lskin(i) is the log-likelihood that color i belongs to skin, obtained from the Oconaire skin similarity model, Lskin(i) = log(H(i)/h(i)); Lmax is the maximum of Lskin(i) over the original image; H(i) is the proportion of color i in the color histogram H of skin regions, and h(i) is the proportion of color i in the color histogram h of non-skin regions.
The present invention adopts the Oconaire skin similarity model, which counts a large number of skin and non-skin samples to obtain, in RGB color space, the color histogram H of skin regions and the color histogram h of non-skin regions, finally yielding the log-likelihood Lskin(i) that color i belongs to skin.
E(i) is the spatial spread function of color i, measuring how uniformly color i spreads over the regions of the whole image and how dispersed its spatial distribution is; the more widely color i is distributed in the image, the larger the value of E(i).
D_s(R_k, R_j) is the spatial distance between regions R_k and R_j, with k ∈ {1, 2, ..., N} and k ≠ j;
p_ik is the ratio of the number of pixels of color i in region R_k to the number of pixels of color i in the whole image;
p_ij is the ratio of the number of pixels of color i in region R_j to the number of pixels of color i in the whole image.
Preferably, in step f, for any region R_k, its saliency Sr(R_k) is obtained from formulas (III) and (IV):

Sr(R_k) = Σ_{j≠k} N_{R_j} · exp(−D_s(R_k, R_j)/σ_c) · D_c(R_k, R_j)   (III)

D_c(R_k, R_j) = Σ_m Σ_i f(c_km) · f(c_ij) · |Pskin(c_km) − Pskin(c_ij)|   (IV)

In formulas (III) and (IV), N_{R_j} is the number of pixels of region R_j;
the parameter σ_c controls the influence of the spatial distance between regions on region saliency, with suggested value range (0, 5.0]; the larger σ_c is, the larger the influence of distant regions on the saliency of the current region;
D_s(R_k, R_j) is the spatial distance between regions R_k and R_j;
D_c(R_k, R_j) is the skin color probability distance between regions R_k and R_j;
f(c_km) is the probability that the m-th color c_km appears within region R_k, with m ∈ {1, 2, ..., t³};
f(c_ij) is the probability that the i-th color c_ij appears within region R_j;
Pskin(c_km) is the skin color probability of the m-th color c_km of region R_k;
Pskin(c_ij) is the skin color probability of the i-th color c_ij of region R_j.
Preferably, the concrete steps of step (5) include:
j. calculating the prior probability p(hand) that any pixel v belongs to the hand region and the prior probability p(bk) that it belongs to the background: p(hand) equals the saliency of the hand region confidence map image3 at that pixel, and p(bk) is given by formula (V):

p(bk) = 1 − p(hand)   (V)

k. transforming the original image into the CIELab color space and, for any pixel v, according to the values of its three color channel components l, a, b in CIELab, calculating the observation likelihood probability p(v|hand) that v belongs to the hand region and the observation likelihood probability p(v|bk) that v belongs to the background, as shown in formulas (VI) and (VII):

p(v|hand) = n_hand^l(v) · n_hand^a(v) · n_hand^b(v) / (n_Rhand)³   (VI)

p(v|bk) = n_bk^l(v) · n_bk^a(v) · n_bk^b(v) / (n_Rbk)³   (VII)

In formulas (VI) and (VII), n_hand^l(v) is the number of pixels in the hand region R_hand whose color channel component l equals that of pixel v; n_hand^a(v) and n_hand^b(v) are the corresponding counts for channel components a and b; n_Rhand is the total number of pixels in R_hand. Likewise, n_bk^l(v), n_bk^a(v) and n_bk^b(v) are the numbers of pixels in the background region R_bk sharing channel components l, a and b with pixel v, and n_Rbk is the total number of pixels in R_bk.
l. substituting p(hand), p(bk), p(v|hand) and p(v|bk) into Bayes' formula, obtaining the posterior probability p(hand|v) that any pixel v belongs to the hand region, as shown in formula (VIII):

p(hand|v) = p(v|hand) · p(hand) / (p(v|hand) · p(hand) + p(v|bk) · p(bk))   (VIII)

Formula (VIII) gives the posterior probability that each pixel belongs to the hand region, yielding the accurate hand region confidence map image4, in which the gray value of each pixel measures how likely the pixel is to belong to the hand region.
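Steps j-l amount to per-channel histogram likelihoods combined naive-Bayes style with the confidence map as the prior. A minimal sketch (the 0-255 integer binning of the L, a, b channels is an assumption of this sketch):

```python
import numpy as np

def bayes_refine(lab, conf, hand_mask):
    """Pixel-level Bayesian refinement (steps j-l), a minimal sketch.

    `lab` is an (H, W, 3) integer image of L, a, b channels (0-255 assumed),
    `conf` is the fused confidence map used as the prior p(hand), and
    `hand_mask` is a preliminary hand/background split (e.g. from
    thresholding `conf`).
    """
    prior = conf.astype(np.float64)
    chans = [lab[..., c] for c in range(3)]

    def channel_likelihood(mask):
        # per-channel histograms of the region, combined as a product
        # (formulas (VI)/(VII): counts of matching channel values / n^3)
        n = max(mask.sum(), 1)
        p = np.ones(lab.shape[:2])
        for ch in chans:
            hist = np.bincount(ch[mask], minlength=256) / n
            p *= hist[ch]                 # look up each pixel's frequency
        return p

    p_v_hand = channel_likelihood(hand_mask)
    p_v_bk = channel_likelihood(~hand_mask)
    num = p_v_hand * prior                # Bayes' formula (VIII)
    den = num + p_v_bk * (1.0 - prior) + 1e-12
    return num / den                      # posterior p(hand | v)
```

Pixels whose Lab colors are frequent in the preliminary hand region and rare in the background are pulled toward 1, and vice versa, sharpening the confidence map.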
Preferably, in step (4), the hand region saliency map image1 obtained in step (2) and the hand region saliency map image2 obtained in step (3) are fused by formula (IX), obtaining the hand region confidence map image3 from which complex background interference has been excluded. For any point of image3 with coordinates (x, y), its confidence image3(x, y) is calculated as:

image3(x, y) = sqrt(image1(x, y) × image2(x, y))   (IX)

In formula (IX), image1(x, y) is the saliency of the pixel at coordinates (x, y) in the hand region saliency map image1, and image2(x, y) is the saliency of the pixel at coordinates (x, y) in the hand region saliency map image2.
The image fusion of the present invention multiplies the saliency values of image1 and image2 at the same coordinates and takes the square root. While fusing the two saliency maps, this effectively strengthens pixels of intermediate confidence and improves the fusion effect, finally yielding the confidence of image3 at each coordinate point.
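Formula (IX) is simply a pixel-wise geometric mean, which can be written directly:

```python
import numpy as np

def fuse_saliency(image1, image2):
    """Formula (IX): pixel-wise geometric mean of the two saliency maps.

    A pixel keeps high confidence only when both the pixel-level map
    (image1) and the region-level map (image2) agree.
    """
    return np.sqrt(image1.astype(np.float64) * image2.astype(np.float64))
```

For values in [0, 1], sqrt(a·b) ≥ a·b, so mid-range agreements are strengthened relative to a plain product, which is the effect described above.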
The beneficial effects of the present invention are:
The invention overcomes the shortcoming of traditional hand region methods, which can only be applied to relatively simple backgrounds free of near-skin-color interference; even under various disturbances such as non-uniform illumination, near-skin-color backgrounds and face noise, it can still accurately obtain a segmented image of the hand region, and therefore has broad application prospects.
Description of the drawings
Fig. 1 is a schematic diagram of the original image in the embodiment;
Fig. 2 is a schematic diagram of the hand region saliency map image1 in the embodiment;
Fig. 3 is a schematic diagram of the hand region saliency map image2 in the embodiment;
Fig. 4 is a schematic diagram of the hand region confidence map image3 in the embodiment;
Fig. 5 is a schematic diagram of the hand region binary map image4 in the embodiment;
Fig. 6 is a schematic diagram of the final hand region confidence map image5 in the embodiment;
Fig. 7 is a schematic diagram of the final hand region binary map image6 in the embodiment;
Fig. 8 is a schematic diagram of the hand region segmentation result in the embodiment;
Fig. 9 is a flow diagram of the hand region segmentation method of the present invention deeply fusing saliency detection and prior knowledge.
Specific embodiment
The present invention is further described below with reference to the accompanying drawings and an embodiment, without being limited thereto.
Embodiment
A hand region segmentation method deeply fusing saliency detection and prior knowledge, as shown in Fig. 9, the concrete steps of which include:
(1) performing SLIC superpixel segmentation on the original image, obtaining N regions R1, R2, ..., RN; the original image is shown in Fig. 1.
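Step (1) uses SLIC superpixels, which cluster pixels in a combined color-position space (in practice one would call e.g. skimage.segmentation.slic). To keep this sketch dependency-free, the stand-in below only performs the regular-grid partition that SLIC starts from; it is a simplification, not the patent's segmentation:

```python
import numpy as np

def grid_superpixels(h, w, n_regions=200):
    """Stand-in for step (1): partition an h x w image into roughly
    n_regions rectangular cells.

    A regular grid is only SLIC's initialization; real SLIC then refines
    the cell boundaries by k-means in (L, a, b, x, y) space.
    """
    s = max(1, int(round(np.sqrt(h * w / n_regions))))  # cell side length
    rows = (np.arange(h) // s)[:, None]
    cols = (np.arange(w) // s)[None, :]
    n_cols = -(-w // s)                                 # ceil(w / s)
    return rows * n_cols + cols                         # integer label map
```

The resulting label map plugs directly into the region-level saliency computation of step (3), which only needs region membership, sizes and centroids.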
(2) By a hand region detection method that fuses color-spread-based saliency detection with the skin color prior, a preliminary detection of the hand region is realized, specifically including:
a. In RGB color space, each channel is quantized to t different values, reducing the total number of colors to at most t³.
b. The saliency of each color is calculated: for any color i, i ∈ {1, 2, ..., t³}, its saliency Se(i) is obtained from formulas (I) and (II):

Se(i) = Pskin(i) · exp(−E(i)/σe)   (I)

E(i) = Σ_{k=1..N} Σ_{j≠k} p_ik · p_ij · D_s(R_k, R_j)   (II)

In formulas (I) and (II), the parameter σe controls the influence of the spatial spread of color i on its saliency; the larger σe is, the smaller the influence, and its value range is (0, 5.0].
Pskin(i) is the skin color probability of color i: when Lskin(i) < 12, Pskin(i) = 0; otherwise Pskin(i) = Lskin(i)/Lmax. Lskin(i) is the log-likelihood that color i belongs to skin, obtained from the Oconaire skin similarity model, Lskin(i) = log(H(i)/h(i)); Lmax is the maximum of Lskin(i) over the original image; H(i) is the proportion of color i in the color histogram H of skin regions, and h(i) is the proportion of color i in the color histogram h of non-skin regions. The present invention adopts the Oconaire skin similarity model, which counts a large number of skin and non-skin samples to obtain, in RGB color space, the color histogram H of skin regions and the color histogram h of non-skin regions, finally yielding the log-likelihood Lskin(i) that color i belongs to skin.
E(i) is the spatial spread function of color i, measuring how uniformly color i spreads over the regions of the whole image and how dispersed its spatial distribution is; the more widely color i is distributed in the image, the larger the value of E(i).
D_s(R_k, R_j) is the spatial distance between regions R_k and R_j, with k ∈ {1, 2, ..., N} and k ≠ j; p_ik is the ratio of the number of pixels of color i in region R_k to the number of pixels of color i in the whole image; p_ij is the ratio of the number of pixels of color i in region R_j to the number of pixels of color i in the whole image.
c. All colors are traversed to form a color saliency look-up table containing every quantized color and its corresponding saliency.
d. The saliency of each color is normalized.
e. Each pixel of the image processed in step a is traversed and assigned a saliency value according to the color saliency look-up table, yielding the hand region saliency map image1, as shown in Fig. 2; the higher a pixel's saliency, the more likely the pixel belongs to the hand region.
Based on the prior knowledge that "background colors generally spread widely and evenly over the image, whereas the foreground colors of the hand region are relatively concentrated" and that "pixels closer to skin color are more likely to belong to the hand region", this new detection method deeply fuses color-spread-based saliency detection with the skin color prior, realizing the preliminary detection of the hand region. It incorporates prior skin color information while exploiting the spatial spread of image colors, performs hand saliency detection at the pixel level, and effectively eliminates the interference of non-skin-color backgrounds and of small near-skin-color background patches.
(3) By a hand region detection method based on skin color probability distance and region contrast, a preliminary detection of the hand region is realized, specifically including:
f. For the N regions obtained by the segmentation in step (1), the saliency of each region is calculated on the image processed in step d: for any region R_k, its saliency Sr(R_k) is obtained from formulas (III) and (IV):

Sr(R_k) = Σ_{j≠k} N_{R_j} · exp(−D_s(R_k, R_j)/σ_c) · D_c(R_k, R_j)   (III)

D_c(R_k, R_j) = Σ_m Σ_i f(c_km) · f(c_ij) · |Pskin(c_km) − Pskin(c_ij)|   (IV)

In formulas (III) and (IV), N_{R_j} is the number of pixels of region R_j; the parameter σ_c controls the influence of the spatial distance between regions on region saliency, with suggested value range (0, 5.0]; the larger σ_c is, the larger the influence of distant regions on the saliency of the current region. D_s(R_k, R_j) is the spatial distance between regions R_k and R_j; D_c(R_k, R_j) is the skin color probability distance between them; f(c_km) is the probability that the m-th color c_km appears within region R_k, with m ∈ {1, 2, ..., t³}; f(c_ij) is the probability that the i-th color c_ij appears within region R_j; Pskin(c_km) is the skin color probability of color c_km, and Pskin(c_ij) is the skin color probability of color c_ij.
g. All regions are traversed to form a region saliency look-up table containing every region and its corresponding saliency.
h. The saliency of each region is normalized.
i. Each pixel of the original image is assigned a saliency value according to the region saliency look-up table, yielding the hand region saliency map image2, as shown in Fig. 3; the higher a pixel's saliency, the more likely the pixel belongs to the hand region.
Based on the prior knowledge that "pixels closer to skin color are more likely to belong to the hand region" and that "regions which are spatially concentrated in the image and closer to skin color than their surroundings are more likely to belong to the hand region", this detection method defines the contrast between regions by the difference of their skin color probabilities, enhancing the saliency of near-skin-color, highly concentrated regions and realizing the preliminary detection of the hand region. It incorporates prior skin color information, performs hand saliency detection at the region level, effectively eliminates the interference of large near-skin-color background areas, and at the same time strengthens the edges of the hand region, providing good conditions for the subsequent region segmentation.
(4) The hand region saliency map image1 obtained in step (2) and the hand region saliency map image2 obtained in step (3) are fused, obtaining the hand region confidence map image3 from which complex background interference has been excluded, as shown in Fig. 4. The fusion is performed by formula (IX): for any point of image3 with coordinates (x, y), its confidence image3(x, y) is calculated as

image3(x, y) = sqrt(image1(x, y) × image2(x, y))   (IX)

In formula (IX), image1(x, y) is the saliency of the pixel at coordinates (x, y) in the hand region saliency map image1, and image2(x, y) is the saliency of the pixel at coordinates (x, y) in the hand region saliency map image2.
The image fusion of the present invention multiplies the saliency values of image1 and image2 at the same coordinates and takes the square root. While fusing the two saliency maps, this effectively strengthens pixels of intermediate confidence and improves the fusion effect, finally yielding the confidence of image3 at each coordinate point.
(5) Using Otsu's method, threshold the hand-region confidence map image3 to obtain the hand-region binary map image4. In binary map image4, the highlighted region is the preliminarily estimated gesture region R_hand and the black region is the preliminarily estimated background region R_bk; binary map image4 is shown in Figure 5.
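Otsu's method picks the gray-level threshold that maximizes the between-class variance of the histogram. Below is a dependency-free NumPy sketch (in practice one would more likely call an existing implementation, e.g. OpenCV's `cv2.threshold` with the `THRESH_OTSU` flag); the function name and toy data are illustrative only:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the threshold maximizing between-class variance.

    `gray` is a uint8 array (e.g. a confidence map scaled to 0..255).
    Returns (threshold, binary_map); binary_map is True where gray >= threshold,
    i.e. the bright, preliminarily-hand region.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = hist[:t].sum()            # weight of the dark class
        w1 = total - w0                # weight of the bright class
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * hist[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * hist[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t, gray >= best_t

# A bimodal toy confidence map: dark background (~20) vs. bright hand (~225).
conf = np.array([[20, 25, 220], [18, 230, 225]], dtype=np.uint8)
t, image4 = otsu_threshold(conf)
```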
(6) From the confidence map image3 and binary map image4 obtained above, compute the prior probability and observation likelihood of each pixel; then, by Bayes' formula, further compute the posterior probability that the pixel belongs to the hand region, obtaining the final accurate hand-region confidence map image5, as shown in Figure 6.
Fusing the two hand-region saliency maps above yields a hand-region confidence map with complex-background interference excluded. To further improve the accuracy of hand-region segmentation, this patent applies Bayes' principle: using the information provided by the hand-region confidence map, a more accurate pixel-level detection of the hand region is performed within a Bayesian framework. Hand-region detection is treated as a Bayesian inference problem: the hand-region confidence maps obtained by technical means 1 and technical means 2 are used to compute each pixel's prior probability and observation likelihood, and Bayes' formula then gives the posterior probability that the pixel belongs to the hand region, yielding the final accurate hand-region confidence map.
Step (6) comprises the following concrete sub-steps:
j. For any pixel v, compute the prior probability p(hand) that v belongs to the hand region and the prior probability p(bk) that v belongs to the background region: p(hand) equals the saliency of the hand-region confidence map image3 at that pixel; p(bk) is computed by formula (V):
p(bk) = 1 − p(hand)    (V);
k. Transform the original image into the CIELab color space. For any pixel v, from the values of its three color-channel components l, a, b in CIELab space, compute the observation likelihood p(v | hand) that v belongs to the hand region and the observation likelihood p(v | bk) that v belongs to the background region, as in formulas (VI) and (VII):

p(v | hand) = (N_hand^l · N_hand^a · N_hand^b) / (N_hand)³    (VI)

p(v | bk) = (N_bk^l · N_bk^a · N_bk^b) / (N_bk)³    (VII)

In formulas (VI) and (VII), N_hand^l denotes the number of pixels in the hand region R_hand whose color-channel component l equals that of pixel v; N_hand^a denotes the number of pixels in R_hand whose component a equals that of v; N_hand^b denotes the number of pixels in R_hand whose component b equals that of v; N_hand denotes the total number of pixels in R_hand.
Likewise, N_bk^l denotes the number of pixels in the background region R_bk whose component l equals that of pixel v; N_bk^a the number whose component a equals that of v; N_bk^b the number whose component b equals that of v; and N_bk denotes the total number of pixels in R_bk.
l. Substitute p(hand), p(bk), p(v | hand) and p(v | bk) into Bayes' formula to obtain the posterior probability p(hand | v) that any pixel v belongs to the gesture region, computed by formula (VIII):

p(hand | v) = p(v | hand)·p(hand) / (p(v | hand)·p(hand) + p(v | bk)·p(bk))    (VIII)

Formula (VIII) gives, for every pixel, the posterior probability of belonging to the gesture region, producing the accurate hand-region confidence map image5. In image5, the gray value of each pixel measures how likely that pixel is to belong to the hand region.
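Sub-steps j to l can be condensed once the Lab channels are integer-quantized. The sketch below is our own reading, not the patent's code: it treats the three channels as independent (the per-channel counts of formulas (VI) and (VII)), uses the fused confidence map as the prior of formula (V), and applies the posterior of formula (VIII):

```python
import numpy as np

def posterior_hand_map(lab, prior_hand, hand_mask):
    """Per-pixel posterior p(hand | v) following formulas (V)-(VIII).

    lab        : (H, W, 3) integer-quantized l, a, b channel values
    prior_hand : (H, W) prior p(hand), e.g. the fused confidence map
    hand_mask  : (H, W) bool, preliminary hand region from the binary map
    """
    h, w, _ = lab.shape
    p_v_hand = np.ones((h, w))
    p_v_bk = np.ones((h, w))
    for c in range(3):
        ch = lab[..., c]
        for region_mask, out in ((hand_mask, p_v_hand), (~hand_mask, p_v_bk)):
            vals = ch[region_mask]
            n = max(vals.size, 1)
            counts = np.bincount(vals, minlength=ch.max() + 1)
            out *= counts[ch] / n          # p(channel value | region)
    prior_bk = 1.0 - prior_hand            # formula (V)
    num = p_v_hand * prior_hand            # numerator of formula (VIII)
    den = num + p_v_bk * prior_bk
    return np.where(den > 0, num / np.maximum(den, 1e-12), 0.0)

# Tiny 1x2 example: one pure "hand-colored" pixel, one pure background pixel.
lab = np.array([[[0, 0, 0], [5, 5, 5]]])
prior = np.array([[0.9, 0.1]])
hand_mask = np.array([[True, False]])
image5 = posterior_hand_map(lab, prior, hand_mask)
```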
(7) Using Otsu's method, threshold the final confidence map image5, then eliminate non-maximum regions, fill region holes and perform similar post-processing, obtaining the final hand-region binary map image6, as shown in Figure 7.
(8) Using binary map image6 as a mask, segment the original image to obtain the hand-region segmentation result, as shown in Figure 8.
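Steps (7) and (8) amount to morphological cleanup plus masking. The dependency-free sketch below is illustrative only and covers the "eliminate non-maximum regions" and masking parts (hole filling is omitted); real code would more likely use library routines such as OpenCV's connected-components and morphology functions:

```python
import numpy as np

def keep_largest_region(binary):
    """Keep only the largest 4-connected True region (non-maximum removal).

    Implemented with a plain stack-based flood fill to stay dependency-free.
    """
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    sizes, next_label = {}, 0
    for i in range(h):
        for j in range(w):
            if binary[i, j] and labels[i, j] == 0:
                next_label += 1
                labels[i, j] = next_label
                stack, count = [(i, j)], 0
                while stack:
                    y, x = stack.pop()
                    count += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = next_label
                            stack.append((ny, nx))
                sizes[next_label] = count
    if not sizes:
        return np.zeros_like(binary, dtype=bool)
    return labels == max(sizes, key=sizes.get)

def apply_mask(original, mask):
    """Segment: zero out everything outside the (H, W) boolean mask."""
    return original * mask[..., None]

# Two blobs; only the larger one survives as the hand mask (image6).
image6 = keep_largest_region(np.array([[1, 1, 0, 0],
                                       [1, 0, 0, 1]], dtype=bool))
```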
Claims (5)
1. A hand region segmentation method deeply integrating saliency detection and prior knowledge, characterized in that the concrete steps include:
(1) dividing the original image into N regions, 100 ≤ N ≤ 400;
(2) realizing a preliminary detection of the hand region by a hand-region detection method that fuses saliency detection based on the degree of color distribution with skin-color prior knowledge, specifically including:
a. in the RGB color space, quantizing each channel to t different values, reducing the total number of colors to t³;
b. computing the saliency of each color separately;
c. traversing every color to form a color-saliency lookup table containing every quantized color and its corresponding saliency;
d. normalizing the saliency of each color;
e. traversing every pixel of the image processed in step a and assigning each pixel its saliency according to the color-saliency lookup table, obtaining hand-region saliency map image1;
(3) realizing a preliminary detection of the hand region by a hand-region detection method based on skin-color probability distance and region contrast, specifically including:
f. for the N regions obtained by dividing the original image in step (1), computing the saliency of each region in the image processed in step d;
g. traversing every region to form a region-saliency lookup table containing every region and its corresponding saliency;
h. normalizing the saliency of each region;
i. assigning each pixel of the original image its saliency according to the region-saliency lookup table, obtaining hand-region saliency map image2;
(4) fusing hand-region saliency map image1 obtained in step (2) with hand-region saliency map image2 obtained in step (3) to obtain the hand-region confidence map image3 with complex-background interference excluded;
(5) from the obtained hand-region confidence map image3, computing the prior probability and observation likelihood of each pixel and, by Bayes' formula, further computing the posterior probability that the pixel belongs to the hand region, obtaining the final accurate hand-region confidence map image4.
2. The hand region segmentation method deeply integrating saliency detection and prior knowledge according to claim 1, characterized in that in step b, for any color i, i ∈ {1, 2, ..., t³}, its saliency S_e(i) is obtained from formulas (I) and (II) as follows:

S_e(i) = P_skin(i)·exp(−E(i)/σ_e)    (Ⅰ)

In formulas (I) and (II), the parameter σ_e controls the influence of the spatial dispersion of color i on its saliency; its value range is (0, 5.0];
P_skin(i) is the skin-color probability of color i: when L_skin(i) < 12, P_skin(i) = 0; otherwise it is computed from L_skin(i), L_max, H(i) and h(i), where L_skin(i) is the log-likelihood, obtained from the O'Conaire skin-similarity model, that color i belongs to a skin point; L_max is the maximum of L_skin(i) over the original image; H(i) is the proportion of color i in the color histogram H of the skin region; and h(i) is the proportion of color i in the color histogram h of the non-skin region;
E(i) is the spatial dispersion measure of color i, quantifying how uniformly color i is spread over the regions of the whole image and how dispersed its spatial distribution is; the more widely color i is distributed in the image, the larger the value of E(i);
D_s(R_k, R_j) is the spatial distance between region R_k and region R_j, k ∈ {1, 2, ..., N} and k ≠ j;
p_ik is the ratio of the number of pixels of color i in region R_k to the number of pixels of color i in the whole image;
p_ij is the ratio of the number of pixels of color i in region R_j to the number of pixels of color i in the whole image.
3. The hand region segmentation method deeply integrating saliency detection and prior knowledge according to claim 1, characterized in that in step f, for any region R_k, its saliency S_r(R_k) is obtained from formulas (III) and (IV) as follows:

In formulas (III) and (IV), the region weight is the number of pixels of region R_i;
the parameter σ_c controls the influence of the spatial distance between regions on region saliency; its suggested value range is (0, 5.0];
D_s(R_k, R_j) is the spatial distance between region R_k and region R_j;
D_c(R_k, R_j) is the skin-color probability distance between region R_k and region R_j;
f(c_km) is the probability of the m-th color c_km of region R_k appearing within that region, m ∈ {1, 2, ..., t³} and m ≠ i;
f(c_ij) is the probability of the i-th color c_ij of region R_j appearing within that region;
P_skin(c_km) is the skin-color probability of the m-th color c_km in region R_k;
P_skin(c_ij) is the skin-color probability of the i-th color c_ij in region R_j.
4. The hand region segmentation method deeply integrating saliency detection and prior knowledge according to claim 1, characterized in that step (5) comprises the following concrete sub-steps:
j. computing, for any pixel v, the prior probability p(hand) that v belongs to the hand region and the prior probability p(bk) that v belongs to the background region: p(hand) equals the saliency of hand-region confidence map image3 at that pixel; p(bk) is computed by formula (V):
p(bk) = 1 − p(hand)    (V);
k. transforming the original image into the CIELab color space and, for any pixel v, from the values of its three color-channel components l, a, b in CIELab space, computing the observation likelihood p(v | hand) that v belongs to the hand region and the observation likelihood p(v | bk) that v belongs to the background region, as in formulas (VI) and (VII):

p(v | hand) = (N_hand^l · N_hand^a · N_hand^b) / (N_hand)³    (VI)

p(v | bk) = (N_bk^l · N_bk^a · N_bk^b) / (N_bk)³    (VII)

In formulas (VI) and (VII), N_hand^l denotes the number of pixels in the hand region R_hand whose color-channel component l equals that of pixel v; N_hand^a and N_hand^b denote the corresponding counts for components a and b; N_hand denotes the total number of pixels in R_hand; N_bk^l, N_bk^a and N_bk^b denote the numbers of pixels in the background region R_bk whose components l, a and b, respectively, equal those of pixel v; N_bk denotes the total number of pixels in R_bk;
l. substituting p(hand), p(bk), p(v | hand) and p(v | bk) into Bayes' formula to obtain the posterior probability p(hand | v) that any pixel v belongs to the gesture region, computed by formula (VIII):

p(hand | v) = p(v | hand)·p(hand) / (p(v | hand)·p(hand) + p(v | bk)·p(bk))    (VIII)

Formula (VIII) gives the posterior probability that each pixel belongs to the gesture region, producing the accurate hand-region confidence map image4.
5. The hand region segmentation method deeply integrating saliency detection and prior knowledge according to claim 1, characterized in that in step (4), hand-region saliency map image1 obtained in step (2) and hand-region saliency map image2 obtained in step (3) are fused by formula (Ⅸ) to obtain the hand-region confidence map image3 with complex-background interference excluded; for any point of hand-region confidence map image3 with coordinates (x, y), the pixel confidence image3(x, y) is computed as:

image3(x, y) = sqrt(image1(x, y) × image2(x, y))    (Ⅸ)

In formula (Ⅸ), image1(x, y) denotes the saliency of the pixel at coordinates (x, y) in hand-region saliency map image1, and image2(x, y) denotes the saliency of the pixel at coordinates (x, y) in hand-region saliency map image2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610937434.3A CN106529432B (en) | 2016-11-01 | 2016-11-01 | A kind of hand region dividing method of depth integration conspicuousness detection and priori knowledge |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106529432A true CN106529432A (en) | 2017-03-22 |
CN106529432B CN106529432B (en) | 2019-05-07 |
Family
ID=58292491
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610937434.3A Expired - Fee Related CN106529432B (en) | 2016-11-01 | 2016-11-01 | A kind of hand region dividing method of depth integration conspicuousness detection and priori knowledge |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106529432B (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6128003A (en) * | 1996-12-20 | 2000-10-03 | Hitachi, Ltd. | Hand gesture recognition system and method |
EP2365420A2 (en) * | 2010-03-11 | 2011-09-14 | Deutsche Telekom AG | System and method for hand gesture recognition for remote control of an internet protocol TV |
US20110221974A1 (en) * | 2010-03-11 | 2011-09-15 | Deutsche Telekom Ag | System and method for hand gesture recognition for remote control of an internet protocol tv |
US20120105613A1 (en) * | 2010-11-01 | 2012-05-03 | Robert Bosch Gmbh | Robust video-based handwriting and gesture recognition for in-car applications |
CN104463191A (en) * | 2014-10-30 | 2015-03-25 | 华南理工大学 | Robot visual processing method based on attention mechanism |
CN104732521A (en) * | 2015-02-02 | 2015-06-24 | 北京理工大学 | Similar target segmentation method based on weight set similarity movable contour model |
CN105335711A (en) * | 2015-10-22 | 2016-02-17 | 华南理工大学 | Fingertip detection method in complex environment |
Non-Patent Citations (3)
Title |
---|
JINGWEN DAI AND RONALD CHUNG: "Combining Contrast Saliency and Region Discontinuity", 《21ST INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR 2012)》 * |
ZHOU Hang: "China Master's Theses Full-text Database", 《China Doctoral Dissertations Full-text Database》 * |
MO Shu: "Research on Vision-Based Gesture Segmentation Algorithms", 《China Master's Theses Full-text Database》 * |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108694719A (en) * | 2017-04-05 | 2018-10-23 | 北京京东尚科信息技术有限公司 | image output method and device |
CN107247466A (en) * | 2017-06-12 | 2017-10-13 | 中山长峰智能自动化装备研究院有限公司 | Robot head gesture control method and system |
CN107633252B (en) * | 2017-09-19 | 2020-04-21 | 广州市百果园信息技术有限公司 | Skin color detection method, device and storage medium |
CN107633252A (en) * | 2017-09-19 | 2018-01-26 | 广州市百果园信息技术有限公司 | Skin color detection method, device and storage medium |
US11080894B2 (en) | 2017-09-19 | 2021-08-03 | Bigo Technology Pte. Ltd. | Skin color detection method, skin color detection apparatus, and storage medium |
CN108133204A (en) * | 2018-01-23 | 2018-06-08 | 歌尔科技有限公司 | A kind of hand body recognition methods, device, equipment and computer readable storage medium |
CN108133204B (en) * | 2018-01-23 | 2021-02-02 | 歌尔科技有限公司 | Hand body identification method, device, equipment and computer readable storage medium |
CN108537745A (en) * | 2018-03-15 | 2018-09-14 | 深圳蛋壳物联信息技术有限公司 | Face-image problem skin Enhancement Method |
CN108537745B (en) * | 2018-03-15 | 2021-04-30 | 深圳蛋壳物联信息技术有限公司 | Face image problem skin enhancement method |
CN109766822A (en) * | 2019-01-07 | 2019-05-17 | 山东大学 | Gesture identification method neural network based and system |
CN110728225A (en) * | 2019-10-08 | 2020-01-24 | 北京联华博创科技有限公司 | High-speed face searching method for attendance checking |
CN110728225B (en) * | 2019-10-08 | 2022-04-19 | 北京联华博创科技有限公司 | High-speed face searching method for attendance checking |
CN112099526A (en) * | 2020-09-09 | 2020-12-18 | 北京航空航天大学 | Unmanned aerial vehicle control system and control method based on voice and gesture recognition |
CN114862894A (en) * | 2022-03-25 | 2022-08-05 | 哈尔滨工程大学 | Hand segmentation method based on multi-cue fusion |
Also Published As
Publication number | Publication date |
---|---|
CN106529432B (en) | 2019-05-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106529432B (en) | A kind of hand region dividing method of depth integration conspicuousness detection and priori knowledge | |
CN107977671B (en) | Tongue picture classification method based on multitask convolutional neural network | |
CN103942794B (en) | A kind of image based on confidence level is collaborative scratches drawing method | |
CN103186904B (en) | Picture contour extraction method and device | |
CN103456010B (en) | A kind of human face cartoon generating method of feature based point location | |
CN103310194B (en) | Pedestrian based on crown pixel gradient direction in a video shoulder detection method | |
CN105740945B (en) | A kind of people counting method based on video analysis | |
CN110276264B (en) | Crowd density estimation method based on foreground segmentation graph | |
CN101706445B (en) | Image processing mthod for beef marbling grain grade scoring | |
CN111539273A (en) | Traffic video background modeling method and system | |
CN103914699A (en) | Automatic lip gloss image enhancement method based on color space | |
CN106462771A (en) | 3D image significance detection method | |
CN107220949A (en) | The self adaptive elimination method of moving vehicle shade in highway monitoring video | |
CN103810491B (en) | Head posture estimation interest point detection method fusing depth and gray scale image characteristic points | |
CN105913456A (en) | Video significance detecting method based on area segmentation | |
CN110738676A (en) | GrabCT automatic segmentation algorithm combined with RGBD data | |
CN105844621A (en) | Method for detecting quality of printed matter | |
CN110232379A (en) | A kind of vehicle attitude detection method and system | |
CN110827312B (en) | Learning method based on cooperative visual attention neural network | |
CN103035013A (en) | Accurate moving shadow detection method based on multi-feature fusion | |
CN107564022A (en) | Saliency detection method based on Bayesian Fusion | |
Li et al. | Saliency based image segmentation | |
Chen et al. | Facial expression recognition based on edge detection | |
CN102024156A (en) | Method for positioning lip region in color face image | |
CN106204594A (en) | A kind of direction detection method of dispersivity moving object based on video image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20190507 Termination date: 20201101 |