CN101493887A - Eyebrow image segmentation method based on semi-supervision learning and Hash index - Google Patents


Info

Publication number
CN101493887A
CN101493887A (application CN200910079518A)
Authority
CN
China
Prior art keywords
eyebrow
pixels
block
hash
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2009100795188A
Other languages
Chinese (zh)
Other versions
CN101493887B (en)
Inventor
李玉鑑
张晨光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN2009100795188A priority Critical patent/CN101493887B/en
Publication of CN101493887A publication Critical patent/CN101493887A/en
Application granted granted Critical
Publication of CN101493887B publication Critical patent/CN101493887B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/18 — Eye characteristics, e.g. of the iris
    • G06V40/193 — Preprocessing; feature extraction

Abstract

The invention discloses an eyebrow image segmentation method based on semi-supervised learning and hash indexing, comprising the following steps performed in sequence: accept a user's original eyebrow image and divide it into small pixel blocks of equal size; on the computer, select some pixel blocks inside and outside the eyebrow region and give them different labels; represent each pixel block as a vector and compute the similarity between pixel blocks with a locality-sensitive hashing (LSH) method to obtain a normalized similarity matrix; and use a graph-based semi-supervised learning technique to label the unlabeled pixel blocks, from which the blocks labeled as eyebrow are extracted to complete the segmentation. Because LSH is used to compute the similarity matrix required by the graph-based semi-supervised learning technique, the method greatly improves segmentation speed.

Description

Eyebrow image segmentation method based on semi-supervised learning and hash index
Technical field
The present invention relates to a method for extracting a pure eyebrow image from an original eyebrow image by combining a graph-based semi-supervised learning technique with locality-sensitive hashing (LSH), and belongs to the field of electronic information technology.
Background technology
In modern society, with the rapid development of computer network technology and the worldwide rise of e-commerce, information security has become unprecedentedly important, and biometric recognition, as one aspect of information security, is receiving more and more attention. The biometric technologies currently studied and used mainly include face recognition, iris recognition, fingerprint recognition, hand-shape recognition, palmprint recognition, ear recognition, signature recognition, speech recognition, gait recognition, and so on. The eyebrow, an important feature of the human face, has the universality, uniqueness, stability, and collectability required of a biometric feature. In fact, compared with face images, eyebrow images are cleanly delimited, structurally simple, and easy to acquire, and they are less affected by illumination and expression, so they offer better stability and robustness; compared with iris images, eyebrow images are easier to collect and more convenient to use. In addition, human eyebrows come in diverse shapes with no fixed structure, giving them good identity specificity, so they can be effectively applied to identity verification.
An important step in eyebrow-based recognition is eyebrow image segmentation. Image segmentation has always been a primary problem in image analysis and pattern recognition and one of the classic problems of image processing; it is an important component of image analysis and pattern recognition systems and determines the final quality of the analysis and the recognition result. Although many automatic segmentation methods exist, including thresholding, clustering, and edge detection, their results are often unsatisfactory. Compared with these automatic methods, semi-automatic (interactive) segmentation, in which a person assists the segmentation, is more practical and has therefore attracted more attention; representative work includes graph cuts, random walks, and Lazy Snapping. In essence, this work can be placed within the framework of semi-supervised learning.
Semi-supervised learning lies between supervised and unsupervised learning: only part of the given data carries class labels. For example, let χ = {x_1, …, x_l, x_{l+1}, …, x_n} be a given point set in which the first l points carry labels {y_1, …, y_l}, with y_i ∈ T = {1, …, c} (i = 1, …, l), and the remaining points are unlabeled; the goal of semi-supervised learning is to label the unlabeled points of χ according to some optimality criterion. To date, the main implementation approaches are generative models, self-training, co-training, and graph-based methods. The object of the present invention is an eyebrow image segmentation technique based on graph-based semi-supervised learning. Eyebrow images are usually harder to segment than ordinary images because of the influence of hair, illumination, pose, and expression. Automatic segmentation methods can hardly extract the eyebrow correctly, and other eyebrow segmentation methods such as principal component analysis (PCA) and template matching, although able to roughly locate the eyebrow, cannot accurately delineate its boundary.
Summary of the invention
The purpose of this invention is to provide an eyebrow image segmentation method that combines a semi-supervised learning technique with locality-sensitive hashing.
The present invention adopts the following technical means and comprises the following steps, performed in sequence:
Step 1; Accept the user's original eyebrow image and divide it into small pixel blocks of equal size s × s; the value of s may be chosen as s = 2, 3, 4, …, 10 according to the required speed and accuracy.
Step 2; On the computer, select some eyebrow points and non-eyebrow points in the original eyebrow image. Each pixel block is labeled according to how many selected eyebrow and non-eyebrow points it contains: if the number of eyebrow points exceeds the number of non-eyebrow points, the block's label is 1, otherwise 0; a block that contains no selected point of either kind is unlabeled;
Step 3; Represent every pixel block as a vector, for example a five-dimensional vector (r, g, b, x, y), where r, g, b are the means of the block's RGB components and x, y are the coordinates of the block's center relative to the upper-left corner of the image. Let X denote the set of vectors corresponding to all pixel blocks and L the subset of labeled vectors. Compute the similarity between pixel blocks with locality-sensitive hashing, generate the similarity matrix W, and normalize it to S; the concrete steps are as follows:
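As a rough illustration only (not the patent's code), the block division of step 1 and the five-dimensional block vectors of step 3 might be sketched as follows; the function name and the NumPy image representation are this sketch's assumptions:

```python
import numpy as np

def block_features(img, s):
    """Divide an RGB image (H x W x 3 uint8 array) into s x s pixel
    blocks and return one (r, g, b, x, y) feature vector per block:
    the mean RGB of the block and the coordinates of its center
    relative to the image's upper-left corner."""
    h, w, _ = img.shape
    feats = []
    for top in range(0, h - s + 1, s):
        for left in range(0, w - s + 1, s):
            block = img[top:top + s, left:left + s].astype(float)
            r, g, b = block.reshape(-1, 3).mean(axis=0)
            feats.append((r, g, b, left + s / 2.0, top + s / 2.0))
    return np.array(feats)

# Tiny demo: a synthetic 4x4 image split into 2x2 blocks.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:2, :2] = (255, 0, 0)          # one red block in the corner
X = block_features(img, 2)
print(X.shape)                      # 4 blocks, 5 features each
```

Each row of the returned array is one pixel block's vector, matching the (r, g, b, x, y) representation used throughout step 3.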
Step 3.1; Let d be the dimension of the pixel-block vectors, R the segmentation threshold (pixel blocks whose Euclidean distance is less than R hash with high probability to the same position in the hash table), 1 − δ the desired probability that blocks at distance less than R hash to the same position, and w the quantization width of the hash table (for example, with w = 4, any h_1 and h_2 with 0 ≤ h_1 ≤ 4 and 0 ≤ h_2 ≤ 4 are regarded as having the same hash value);
Step 3.2; Estimate parameters k and l so that the query time is minimized;
Step 3.2.1; On the computer, arbitrarily select fixed numbers of vectors from X to form new sets X_t and X_q;
Step 3.2.2; Choose k as a fixed constant, e.g. k = 16, and take l = ⌈log δ / log(1 − p₁^k)⌉, where p₁ = ∫₀^w (1/w)(1/√(2π)) e^(−t²/(2R²)) (1 − t/R) dt;
Step 3.2.3; On the computer, generate l composite vectors c_i = (c_i1, c_i2, …, c_ik) (1 ≤ i ≤ l), where each c_ij (1 ≤ i ≤ l, 1 ≤ j ≤ k) is a d-dimensional vector whose components c_ijz (1 ≤ z ≤ d) are real numbers drawn from the standard normal distribution; denote by C the set of these l composite vectors. Also generate on the computer l real numbers b_i (1 ≤ i ≤ l), each drawn from the uniform distribution U(0, w);
Step 3.2.4; For each vector x_t in X_t, let
p_ij = (c_ij · x_t + b_i)/w (1 ≤ i ≤ l, 1 ≤ j ≤ k),
where · denotes the dot product of vectors. Let p_i = (p_i1, p_i2, …, p_ik) (1 ≤ i ≤ l); the l hash keys of x_t can then be expressed as
key_i(x_t) = (Σ_{u=1..k} a_u ⌊p_iu⌋) mod hashsize (1 ≤ i ≤ l),
where ⌊·⌋ denotes the floor function, the a_u (1 ≤ u ≤ k) are drawn from the uniform distribution U(0, hashsize), and hashsize, the length of the hash table H, is generally taken as the number of vectors in X_t; the hash table H consists of an index from each hash value to the corresponding hash bucket, and each hash bucket consists of the vectors of X_t that share that hash value;
Step 3.2.5; According to the l hash keys of x_t, insert x_t in turn into the bucket of H corresponding to each key;
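Steps 3.2.3 to 3.2.5 amount to building l hash tables in the E2LSH style. A minimal sketch under that reading (all names, the fixed random seed, and the integer choice for the coefficients a_u are assumptions of this sketch, not the patent's):

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)

def build_lsh(Xt, k, l, w, hashsize):
    """Build l hash tables as in steps 3.2.3-3.2.5: the c[i][j] are
    standard-normal d-vectors, b[i] ~ U(0, w), and the i-th key of x is
    (sum_u a[u] * floor((c[i][u].x + b[i]) / w)) mod hashsize."""
    d = Xt.shape[1]
    c = rng.standard_normal((l, k, d))      # step 3.2.3: composite vectors
    b = rng.uniform(0, w, size=l)
    a = rng.integers(1, hashsize, size=k)   # integer a_u assumed here
    tables = [defaultdict(list) for _ in range(l)]

    def keys(x):
        # p[i][j] = (c_ij . x + b_i) / w, floored, then combined into a key
        p = np.floor((c @ x + b[:, None]) / w).astype(np.int64)
        return (p @ a) % hashsize

    for idx, x in enumerate(Xt):
        for i, key in enumerate(keys(x)):   # step 3.2.5: insert into bucket
            tables[i][int(key)].append(idx)
    return tables, keys

Xt = rng.normal(size=(50, 5))
tables, keys = build_lsh(Xt, k=4, l=3, w=4.0, hashsize=50)
# Every vector is inserted exactly once into each of the l tables.
print(sum(len(v) for v in tables[0].values()))   # 50
```

The `keys` closure is the reusable lookup from step 3.2.4, so query vectors (step 3.2.6) can be hashed with the same c, b, and a.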
Step 3.2.6; For each vector x_q in X_q: execute step 3.2.4 to compute the l hash keys of x_q, and let U_q denote the time this takes; using the l hash keys of x_q, look up the corresponding hash buckets B_q in the hash table H, let T_q denote the total number of vectors in all the B_q, and let V_q denote the total lookup time; set u_q = U_q/(kl) and v_q = V_q/l; compute the Euclidean distance between x_q and every vector in the B_q, denote the time spent by G_q, and set g_q = G_q/T_q;
Step 3.2.7; Let u = (Σ_{x_q∈X_q} u_q)/n, v = (Σ_{x_q∈X_q} v_q)/n, g = (Σ_{x_q∈X_q} g_q)/n, where n is the number of vectors in the set X_q;
Step 3.2.8; Using the values of u, v, and g, estimate a new value of k satisfying
k = argmin_{1≤k≤100} Σ_{x_q∈X_q} (u × k × l + v × l + g × collision),
where l = ⌈log δ / log(1 − p₁^k)⌉ and collision = Σ_{x_t∈X_t} ∫₀^w (1/w)(1/√(2π)) e^(−x²/(2·dist²)) (1 − x/dist) dx, with dist the Euclidean distance between x_t and x_q;
Step 3.2.9; According to the new value of k, compute the new l = ⌈log δ / log(1 − p₁^k)⌉;
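The quantities p₁ and l of steps 3.2.2 and 3.2.9 can be evaluated numerically. The sketch below uses a simple midpoint rule and assumes the density constant 1/√(2π) in p₁; the demonstration uses the embodiment's w = 4, R = 20, δ = 0.3, but with a small k = 2 so that the resulting table count l is modest:

```python
import math

def p1(w, R, steps=10000):
    """Midpoint-rule evaluation of
    p1 = integral_0^w (1/w)*(1/sqrt(2*pi))*exp(-t^2/(2R^2))*(1 - t/R) dt."""
    h = w / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        total += (1.0 / w) * (1.0 / math.sqrt(2 * math.pi)) \
                 * math.exp(-t * t / (2 * R * R)) * (1 - t / R)
    return total * h

def num_tables(delta, w, R, k):
    """l = ceil(log(delta) / log(1 - p1^k)) as in steps 3.2.2 and 3.2.9."""
    p = p1(w, R)
    return math.ceil(math.log(delta) / math.log(1 - p ** k))

print(round(p1(4.0, 20.0), 3))        # ~0.357 for w = 4, R = 20
print(num_tables(0.3, 4.0, 20.0, 2))  # 9 tables for delta = 0.3, k = 2
```

Note that large k drives p₁^k toward 0 and thus l very high, which is why step 3.2.8 trades k off against query cost.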
Step 3.3; Let X_t be X and execute steps 3.2.3 to 3.2.5 to regenerate the hash table H;
Step 3.4; Traverse the whole hash table H; for any two vectors x_i and x_j in X, if they share a hash key, define the similarity between x_i and x_j as
w_ij = exp(−‖x_i − x_j‖²/(2σ²)),
where σ is a constant; otherwise define the similarity between x_i and x_j as 0. This yields the similarity matrix W;
Step 3.5; Normalize the similarity matrix W by letting
S = D^(−1/2) W D^(−1/2),
where D is the diagonal matrix with D_ii = Σ_j w_ij;
Step 4; Using the normalized similarity matrix S obtained in step 3, perform iterative computation with the graph-based semi-supervised learning technique to label the pixel blocks that have no label; the concrete steps are as follows:
Step 4.1; Construct the initial state matrix Y_{n×2}, where n is the total number of vectors in X, labeled and unlabeled: if x_i ∈ L is labeled as an eyebrow block, then Y_i1 = 1 and Y_i2 = 0; if x_i ∈ L is labeled as a non-eyebrow block, then Y_i2 = 1 and Y_i1 = 0; otherwise Y_i1 = Y_i2 = 0;
Step 4.2; Iterate F(t+1) = αSF(t) + (1 − α)Y until convergence, where F(0) = Y and α is a constant between 0 and 1;
Step 4.3; Let F* be the result of the iteration; the label of an unlabeled vector x_i is then determined by the larger component of F*_i. That is, if F*_i1 > F*_i2, the pixel block corresponding to the vector belongs to the eyebrow region and its label is set to 1; otherwise it belongs to the non-eyebrow region and its label is set to 0.
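A compact sketch of the step 4 iteration F(t+1) = αSF(t) + (1 − α)Y; the three-block chain graph and its labels are invented here purely for illustration:

```python
import numpy as np

def propagate(S, Y, alpha=0.9, tol=1e-9, max_iter=10000):
    """Iterate F <- alpha*S*F + (1-alpha)*Y from F(0)=Y until convergence
    (steps 4.1-4.3); returns 1 for eyebrow, 0 for non-eyebrow."""
    F = Y.copy()
    for _ in range(max_iter):
        F_next = alpha * (S @ F) + (1 - alpha) * Y
        if np.abs(F_next - F).max() < tol:
            F = F_next
            break
        F = F_next
    return (F[:, 0] > F[:, 1]).astype(int)

# Three blocks on a chain: block 0 labeled eyebrow, block 2 labeled
# non-eyebrow, block 1 unlabeled; S is the normalized chain adjacency.
W = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.2],
              [0.0, 0.2, 0.0]])
d = W.sum(axis=1)
Dinv = np.diag(1.0 / np.sqrt(d))
S = Dinv @ W @ Dinv
Y = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])
print(propagate(S, Y))   # → [1 1 0]
```

The unlabeled block 1 sides with its strongly connected eyebrow neighbor; with α < 1 and S normalized as in step 3.5, the iteration is a contraction, which is why it converges to a unique F*.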
Step 5; According to the result of the iteration, extract from the original eyebrow image the pixel blocks that the iteration labels as eyebrow and that are connected to a block originally labeled 1, completing the eyebrow extraction;
Step 5.1; Let A be an empty set; take an arbitrary vector o ∈ L corresponding to an eyebrow pixel block, add o to A, and change the label of o to 2;
Step 5.2; Take an arbitrary vector a out of A and let A = A − {a}; starting from the pixel block corresponding to a, find all pixel blocks with label 1 in its 8-neighborhood, add them to A, and change their labels to 2;
Step 5.3; Repeat step 5.2 until A is empty;
Step 5.4; Connect the pixel blocks with label 2 into the eyebrow region, place this region inside the smallest rectangle that can contain it, and let the computer generate a 256-color pure eyebrow image in which the part between the eyebrow region and the rectangle is uniformly set to the mean color outside the eyebrow region.
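Steps 5.1 to 5.3 are an 8-neighborhood region growing from the seeded eyebrow blocks, sketched here on a small label grid (the grid layout and function name are invented for this illustration):

```python
from collections import deque

def grow_eyebrow(labels, seeds):
    """Steps 5.1-5.3: starting from seed blocks (originally marked as
    eyebrow), flood-fill through 8-connected blocks labeled 1,
    relabeling them 2."""
    rows, cols = len(labels), len(labels[0])
    queue = deque(seeds)                     # step 5.1: the set A
    for r, c in seeds:
        labels[r][c] = 2
    while queue:                             # step 5.3: until A is empty
        r, c = queue.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and labels[nr][nc] == 1:
                    labels[nr][nc] = 2       # step 5.2: relabel and enqueue
                    queue.append((nr, nc))
    return labels

# 1 = iteration says eyebrow, 0 = non-eyebrow; the isolated 1 at the far
# right is not connected to the seed and is therefore left out.
grid = [[0, 1, 1, 0, 1],
        [0, 1, 0, 0, 0]]
print(grow_eyebrow(grid, seeds=[(0, 1)]))   # → [[0, 2, 2, 0, 1], [0, 2, 0, 0, 0]]
```

Blocks that the iteration marks as eyebrow but that are disconnected from any seed (stray false positives) keep label 1 and are excluded from the final region.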
The basic principle of the present invention is that whether a pixel block belongs to the eyebrow region can be judged from its neighbors: by iteration, the label information of each block (whether it belongs to the eyebrow region) is propagated to its neighbors until a globally stable state is reached.
Compared with the prior art, the present invention has the following notable advantages and beneficial effects:
Because prior information is used, the invention segments better than automatic methods; moreover, because locality-sensitive hashing is used to compute the similarity matrix in the graph-based semi-supervised method, it segments faster.
The experimental results of the embodiment are clear, showing that the present invention can segment eyebrow images in practical applications. In one concrete experiment on 40 original eyebrow images of 5 people, the invention correctly cut the eyebrow out of the background in every image. These good segmentation results were obtained under ordinary indoor natural illumination, with no high requirements on image quality, so the invention can be considered to have high practical value. In fact, because hair, eyelashes, and eyebrows are all hair, automatic segmentation methods can hardly separate the eyebrow from the background exactly. Among interactive segmentation methods, those most similar to the present invention include Microsoft's Lazy Snapping and the LNP (Linear Neighborhood Propagation) method proposed by Fei Wang et al. at CVPR'06. Like the present invention, both methods learn the background and foreground from a few lines or marks drawn on the original image and thereby complete the segmentation. Compared with Lazy Snapping, which, when used for eyebrow segmentation, requires places whose color is close to the eyebrow, such as eyelashes and hair, to be specially marked, the present invention needs no such marks; moreover, because the edge of the eyebrow region is fine and complex, Lazy Snapping is less accurate than the present invention in delineating some boundaries. Compared with LNP, which runs slowly, the present invention is much faster because it uses locality-sensitive hashing to compute the similarity matrix.
The present invention has important application value in many fields. For example, in eyebrow recognition it can be used as an early preprocessing step to extract pure eyebrow images and build an eyebrow database; it can also serve as a plug-in for image processing software, extracting a desired object from the background after the user simply draws a few strokes on the original image.
Description of drawings
Fig. 1 is a flowchart of the present invention;
Fig. 2 is a schematic of an original eyebrow image;
Fig. 3 is a schematic of the marks made on an original eyebrow image;
Fig. 4 is a schematic of a pure eyebrow image;
Fig. 5 shows an actual original eyebrow image;
Fig. 6 shows the marks made on an actual original eyebrow image;
Fig. 7 shows an actual pure eyebrow image.
Embodiment
The embodiment of the present invention is deployed according to Fig. 1. To implement the invention, a digital image acquisition device such as a digital camera or digital video camera and an ordinary desktop computer with general image processing capability are needed. The concrete implementation is:
Step 1; Assemble a digital image acquisition device from a CG300 image capture card, a Panasonic CP240 camera, and a high-precision 75 mm imported Japanese lens, with a DELL GX620 as the computer; acquire the original eyebrow image under ordinary illumination and load it into the computer; on the computer, convert the image to an RGB color image and divide the eyebrow image into small pixel blocks of equal size, 7 × 7;
Step 2; Display the original eyebrow image on the computer screen, as shown in Fig. 2, and mark some points in the eyebrow region and some points in non-eyebrow regions with the mouse. Fig. 3 is an example of such marks: the outer black line (red in the color image) indicates non-eyebrow regions, and the white line inside the eyebrow (green in the color image) indicates the eyebrow region. Each pixel block is labeled according to how many selected eyebrow and non-eyebrow points it contains: if the number of eyebrow points exceeds the number of non-eyebrow points, the block's label is 1, otherwise 0; a block that contains no selected point of either kind is unlabeled.
Step 3; Represent every pixel block by a five-dimensional vector (r, g, b, x, y), where r, g, b are the means of the block's RGB components and x, y are the coordinates of the block's center relative to the upper-left corner; let X be the set of the five-dimensional vectors of all pixel blocks and L the set of labeled vectors. Then compute the similarity between pixel blocks with locality-sensitive hashing, generate the similarity matrix W, and normalize it to S, as follows:
Step 3.1; Let d = 5, w = 4, δ = 0.3, R = 20;
Step 3.2; Estimate parameters k and l;
Step 3.2.1; On the computer, arbitrarily select 1000 vectors and 100 vectors from X to form the new sets X_t and X_q, respectively;
Step 3.2.2; Fix k = 16 and take l = ⌈log δ / log(1 − p₁^k)⌉, where p₁ = ∫₀^w (1/w)(1/√(2π)) e^(−t²/(2R²)) (1 − t/R) dt;
Step 3.2.3; On the computer, generate l composite vectors c_i = (c_i1, c_i2, …, c_ik) (1 ≤ i ≤ l), where each c_ij (1 ≤ i ≤ l, 1 ≤ j ≤ k) is a d-dimensional vector whose components c_ijz (1 ≤ z ≤ d) are real numbers drawn from the standard normal distribution; denote by C the set of these l composite vectors. Also generate on the computer l real numbers b_i (1 ≤ i ≤ l), each drawn from the uniform distribution U(0, w);
Step 3.2.4; For each vector x_t in X_t, let
p_ij = (c_ij · x_t + b_i)/w (1 ≤ i ≤ l, 1 ≤ j ≤ k),
where · denotes the dot product of vectors. Let p_i = (p_i1, p_i2, …, p_ik) (1 ≤ i ≤ l); the l hash keys of x_t can then be expressed as
key_i(x_t) = (Σ_{u=1..k} a_u ⌊p_iu⌋) mod hashsize (1 ≤ i ≤ l),
where ⌊·⌋ denotes the floor function, the a_u (1 ≤ u ≤ k) are drawn from the uniform distribution U(0, hashsize), and hashsize, the length of the hash table H, is generally taken as the number of vectors in X_t; the hash table H consists of an index from each hash value to the corresponding hash bucket, and each hash bucket consists of the vectors of X_t that share that hash value;
Step 3.2.5; According to the l hash keys of x_t, insert x_t in turn into the bucket of H corresponding to each key;
Step 3.2.6; For each vector x_q in X_q: execute step 3.2.4 to compute the l hash keys of x_q, and let U_q denote the time this takes; using the l hash keys of x_q, look up the corresponding hash buckets B_q in the hash table H, let T_q denote the total number of vectors in all the B_q, and let V_q denote the total lookup time; set u_q = U_q/(kl) and v_q = V_q/l; compute the Euclidean distance between x_q and every vector in the B_q, denote the time spent by G_q, and set g_q = G_q/T_q;
Step 3.2.7; Let u = (Σ_{x_q∈X_q} u_q)/n, v = (Σ_{x_q∈X_q} v_q)/n, g = (Σ_{x_q∈X_q} g_q)/n, where n is the number of vectors in the set X_q;
Step 3.2.8; Using the values of u, v, and g, estimate a new value of k satisfying
k = argmin_{1≤k≤100} Σ_{x_q∈X_q} (u × k × l + v × l + g × collision),
where l = ⌈log δ / log(1 − p₁^k)⌉ and collision = Σ_{x_t∈X_t} ∫₀^w (1/w)(1/√(2π)) e^(−x²/(2·dist²)) (1 − x/dist) dx, with dist the Euclidean distance between x_t and x_q;
Step 3.2.9; According to the new value of k, compute the new l = ⌈log δ / log(1 − p₁^k)⌉;
Step 3.3; Let X_t be X and execute steps 3.2.3 to 3.2.5 to regenerate the hash table H;
Step 3.4; For any two vectors x_i and x_j in X, if they share a hash key, define the similarity between x_i and x_j as
w_ij = exp(−‖x_i − x_j‖²/(2σ²)),
where σ = 100 is a constant; otherwise define the similarity between x_i and x_j as 0. This yields the similarity matrix W.
Step 3.5; Normalize the similarity matrix W by letting
S = D^(−1/2) W D^(−1/2),
where D is the diagonal matrix with D_ii = Σ_j w_ij;
Step 4; Using the normalized similarity matrix S obtained in step 3, perform iterative computation with the graph-based semi-supervised learning technique to label the pixel blocks that have no label; the concrete steps are as follows:
Step 4.1; Construct the initial state matrix Y_{n×2} of the graph-based semi-supervised learning technique, where n is the total number of vectors in X, labeled and unlabeled: if x_i ∈ L is labeled as an eyebrow block, then Y_i1 = 1 and Y_i2 = 0; if x_i ∈ L is labeled as a non-eyebrow block, then Y_i2 = 1 and Y_i1 = 0; otherwise Y_i1 = Y_i2 = 0;
Step 4.2; Iterate F(t+1) = αSF(t) + (1 − α)Y until convergence, where F(0) = Y and α = 0.9;
Step 4.3; Let F* be the result of the iteration; the label of an unlabeled vector x_i is then determined by the larger component of F*_i. That is, if F*_i1 > F*_i2, the pixel block corresponding to the vector belongs to the eyebrow region and its label is set to 1; otherwise it belongs to the non-eyebrow region and its label is set to 0;
Step 5; According to the result of the iteration, extract from the original eyebrow image the pixel blocks that the iteration labels as eyebrow and that are connected to a block originally labeled 1, completing the eyebrow extraction:
Step 5.1; Let A be an empty set; take an arbitrary vector o ∈ L corresponding to an eyebrow pixel block, add o to A, and change the label of o to 2;
Step 5.2; Take an arbitrary vector a out of A and let A = A − {a}; starting from the pixel block corresponding to a, find all pixel blocks with label 1 in its 8-neighborhood, add them to A, and change their labels to 2;
Step 5.3; Repeat step 5.2 until A is empty;
Step 5.4; Connect the pixel blocks with label 2 into the eyebrow region, place this region inside the smallest rectangle that can contain it, and let the computer generate a 256-color pure eyebrow image in which the part between the eyebrow region and the rectangle is uniformly set to the mean color outside the eyebrow region.
Finally, it should be noted that the above embodiment is intended only to illustrate, not to limit, the technical solution of the present invention; therefore, although this specification describes the invention in detail with reference to the above embodiment, those of ordinary skill in the art should understand that the invention may still be modified or equivalently replaced, and any technical solution and improvement that does not depart from the spirit and scope of the invention should be encompassed within the scope of the claims of the present invention.

Claims (5)

1. An eyebrow extraction method based on semi-supervised learning and hash indexing, characterized in that it comprises the following steps in sequence:
Step 1; Accept the user's original eyebrow image and divide it into small pixel blocks of equal size s × s; the value of s may be chosen as s = 2, 3, 4, …, 10 according to the required speed and accuracy.
Step 2; On the computer, select eyebrow points and non-eyebrow points in the original eyebrow image. Each pixel block is labeled according to how many selected eyebrow and non-eyebrow points it contains: if the number of eyebrow points exceeds the number of non-eyebrow points, the block's label is 1, otherwise 0; a block that contains no selected point of either kind is unlabeled;
Step 3; Represent every pixel block as a vector, for example a five-dimensional vector (r, g, b, x, y), where r, g, b are the means of the block's RGB components and x, y are the coordinates of the block's center relative to the upper-left corner; let X denote the set of vectors corresponding to all pixel blocks and L its labeled subset. Compute the similarity between pixel blocks with locality-sensitive hashing, generate the similarity matrix W, and normalize it to S.
2. The eyebrow extraction method based on semi-supervised learning and hash indexing according to claim 1, characterized in that step 3 comprises:
Step 3.1; Let d be the dimension of the pixel-block vectors, R the segmentation threshold (pixel blocks whose Euclidean distance is less than R hash with high probability to the same position in the hash table), 1 − δ the desired probability that blocks at distance less than R hash to the same position, and w the quantization width of the hash table (for example, with w = 4, any h_1 and h_2 with 0 ≤ h_1 ≤ 4 and 0 ≤ h_2 ≤ 4 are regarded as having the same hash value);
Step 3.2; Estimate parameters k and l so that the query time of the hash table is minimized.
3. The eyebrow extraction method based on semi-supervised learning and hash indexing according to claim 2, characterized in that step 3.2 comprises:
Step 3.2.1; On the computer, arbitrarily select fixed numbers of vectors from X to form new sets X_t and X_q;
Step 3.2.2; Choose k as a fixed constant, e.g. k = 16, and take l = ⌈log δ / log(1 − p₁^k)⌉, where p₁ = ∫₀^w (1/w)(1/√(2π)) e^(−t²/(2R²)) (1 − t/R) dt;
Step 3.2.3; On the computer, generate l composite vectors c_i = (c_i1, c_i2, …, c_ik) (1 ≤ i ≤ l), where each c_ij (1 ≤ i ≤ l, 1 ≤ j ≤ k) is a d-dimensional vector whose components c_ijz (1 ≤ z ≤ d) are real numbers drawn from the standard normal distribution; denote by C the set of these l composite vectors. Also generate on the computer l real numbers b_i (1 ≤ i ≤ l), each drawn from the uniform distribution U(0, w);
Step 3.2.4; For each vector x_t in X_t, let
p_ij = (c_ij · x_t + b_i)/w (1 ≤ i ≤ l, 1 ≤ j ≤ k),
where · denotes the dot product of vectors. Let p_i = (p_i1, p_i2, …, p_ik) (1 ≤ i ≤ l); the l hash keys of x_t can then be expressed as
key_i(x_t) = (Σ_{u=1..k} a_u ⌊p_iu⌋) mod hashsize (1 ≤ i ≤ l),
where ⌊·⌋ denotes the floor function, the a_u (1 ≤ u ≤ k) are drawn from the uniform distribution U(0, hashsize), and hashsize, the length of the hash table H, is generally taken as the number of vectors in X_t; the hash table H consists of an index from each hash value to the corresponding hash bucket, and each hash bucket consists of the vectors of X_t that share that hash value;
Step 3.2.5; According to the l hash keys of x_t, insert x_t in turn into the bucket of H corresponding to each key;
Step 3.2.6; For each vector x_q in X_q: execute step 3.2.4 to compute the l hash keys of x_q, and let U_q denote the time this takes; using the l hash keys of x_q, look up the corresponding hash buckets B_q in the hash table H, let T_q denote the total number of vectors in all the B_q, and let V_q denote the total lookup time; set u_q = U_q/(kl) and v_q = V_q/l; compute the Euclidean distance between x_q and every vector in the B_q, denote the time spent by G_q, and set g_q = G_q/T_q;
Step 3.2.7: Let $u = (\sum_{x_q \in X_q} u_q)/n$, $v = (\sum_{x_q \in X_q} v_q)/n$, $g = (\sum_{x_q \in X_q} g_q)/n$, where n is the number of vectors in the set X_q;
Step 3.2.8: Using the values of u, v and g, estimate a new value of k satisfying

$$k = \arg\min_{1 \le k \le 100} \Big\{ \sum_{x_q \in X_q} (u \times k \times l + v \times l + g \times collision) \Big\},$$

where l is the rounded value of log δ / log(1 − p_1^k), $collision = \sum_{x_t \in X_t} \int_0^w \frac{1}{w}\,\frac{1}{\sqrt{2\pi}}\, e^{-x^2/2\,dist^2}\Big(1 - \frac{x}{dist}\Big)\, dx$, and dist is the Euclidean distance between x_t and x_q;
Step 3.2.9: Using the new value of k, compute the new l as the rounded value of log δ / log(1 − p_1^k);
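Under the reading of step 3.2.2 above, the table count l follows from δ, k and the collision probability p_1; a small numerical sketch (hypothetical helper names, simple trapezoidal integration):

```python
import math

def p1(w, R, steps=10000):
    """Approximate p_1 = integral_0^w (1/w)(1/sqrt(2*pi)) e^{-t^2/(2R^2)} (1 - t/R) dt
    by the trapezoidal rule (integrand as reconstructed from step 3.2.2)."""
    def f(t):
        return (1.0 / w) * math.exp(-t * t / (2 * R * R)) * (1.0 - t / R) / math.sqrt(2 * math.pi)
    h = w / steps
    return h * (0.5 * (f(0.0) + f(w)) + sum(f(i * h) for i in range(1, steps)))

def num_tables(delta, p, k):
    """l = ceil(log(delta) / log(1 - p^k)): enough hash tables that a near
    neighbour is missed with probability at most delta."""
    return math.ceil(math.log(delta) / math.log(1.0 - p ** k))
```

For example, with miss probability δ = 0.1 and per-table collision probability p_1^k = 0.04, about 57 tables are needed; a larger k lowers p_1^k and so raises l, which is precisely the query-cost trade-off that step 3.2.8 optimizes.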
Step 3.3: Let X_t be X, and execute steps 3.2.3 to 3.2.5 to regenerate the hash table H;
Step 3.4: Traverse the whole hash table H; for any two vectors x_i and x_j in X, if they have an identical hash key, define the similarity between x_i and x_j as

w_ij = exp(−‖x_i − x_j‖² / 2σ²)

where σ is a constant; otherwise define the similarity between x_i and x_j as 0; the similarity matrix W is thus obtained;
Step 3.5: Normalize the similarity matrix W, letting

$$S = D^{-1/2} W D^{-1/2}$$

where D is the diagonal matrix with $D_{ii} = \sum_j w_{ij}$;
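Steps 3.4 and 3.5 amount to building a Gaussian similarity matrix restricted to co-bucketed pairs and normalizing it symmetrically; a minimal pure-Python sketch (hypothetical names, dense matrices for clarity):

```python
import math

def normalized_similarity(X, buckets, sigma=1.0):
    """Sketch of steps 3.4-3.5: w_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)) when
    i and j share a hash bucket (0 otherwise), then S = D^{-1/2} W D^{-1/2}."""
    n = len(X)
    W = [[0.0] * n for _ in range(n)]
    for idxs in buckets:  # each bucket lists the indices sharing one hash key
        for i in idxs:
            for j in idxs:
                if i != j and W[i][j] == 0.0:
                    d2 = sum((p - q) ** 2 for p, q in zip(X[i], X[j]))
                    W[i][j] = math.exp(-d2 / (2.0 * sigma * sigma))
    deg = [sum(row) for row in W]                       # D_ii = sum_j w_ij
    dinv = [1.0 / math.sqrt(g) if g > 0 else 0.0 for g in deg]
    return [[dinv[i] * W[i][j] * dinv[j] for j in range(n)] for i in range(n)]
```

Restricting w_ij to pairs that collide in the hash table keeps W sparse in practice, which is what makes the index useful for large images.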
4. The eyebrow segmentation method based on semi-supervised learning and hash index according to claim 1, characterized in that: given the normalized similarity matrix S obtained above, a graph-based semi-supervised learning technique is applied iteratively to assign labels to the pixel blocks that have none, with the following concrete steps:
Step 4.1: Construct the initial state matrix Y_{n×2}, where n is the total number of vectors contained in X, both labeled and unlabeled; if x_i ∈ L is a labeled eyebrow block, then Y_i1 = 1 and Y_i2 = 0; if x_i ∈ L is a labeled non-eyebrow block, then Y_i2 = 1 and Y_i1 = 0; otherwise Y_i1 = Y_i2 = 0;
Step 4.2: Iterate F(t+1) = αSF(t) + (1 − α)Y until convergence, where F(0) = Y and α is a constant between 0 and 1;
Step 4.3: Let F* be the result of the iteration; the label of an unlabeled vector x_i is then determined by the largest component of F*_i, that is: if F*_i1 > F*_i2, the pixel block corresponding to this vector belongs to the eyebrow region and its label is set to 1; otherwise the pixel block corresponding to this vector belongs to the non-eyebrow region and its label is set to 0.
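The iteration of steps 4.1 to 4.3 is the familiar graph-based label-spreading scheme F(t+1) = αSF(t) + (1 − α)Y; a minimal sketch in pure Python (hypothetical names; a fixed iteration count stands in for the convergence test):

```python
def propagate_labels(S, labels, alpha=0.5, iters=200):
    """Sketch of steps 4.1-4.3. labels[i] is 1 (eyebrow), 0 (non-eyebrow) or
    None (unlabeled); returns a 0/1 label for every vector."""
    n = len(S)
    # Step 4.1: initial state matrix Y (n x 2)
    Y = [[0.0, 0.0] for _ in range(n)]
    for i, lab in enumerate(labels):
        if lab == 1:
            Y[i][0] = 1.0
        elif lab == 0:
            Y[i][1] = 1.0
    # Step 4.2: iterate F <- alpha*S*F + (1 - alpha)*Y
    F = [row[:] for row in Y]
    for _ in range(iters):
        F = [[alpha * sum(S[i][j] * F[j][c] for j in range(n)) + (1.0 - alpha) * Y[i][c]
              for c in range(2)] for i in range(n)]
    # Step 4.3: each block takes the class of the larger component of F_i
    return [1 if F[i][0] > F[i][1] else 0 for i in range(n)]
```

Because the (1 − α)Y term keeps clamping the labeled blocks toward their given classes while αSF spreads evidence along graph edges, unlabeled blocks inherit the labels of the blocks they collide with in the hash table.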
Step 5: According to the result of the iteration, extract from the original eyebrow image the pixel blocks that the iteration labels as eyebrow and that are connected to pixel blocks whose original label is 1, completing the extraction of the eyebrow;
5. The eyebrow segmentation method based on semi-supervised learning and hash index according to claim 1, characterized in that said step 5 comprises:
Step 5.1: Let the set A be an empty set; arbitrarily take a vector o ∈ L corresponding to an eyebrow pixel block, add o to the set A, and change the label of o to 2;
Step 5.2: Arbitrarily take a vector a out of the set A, letting A = A − {a}; starting from the pixel block corresponding to a, find all pixel blocks in its eight-neighborhood whose label is 1, add them to the set A, and change the labels of these pixel blocks to 2;
Step 5.3: Repeat step 5.2 until the set A is empty;
Step 5.4: By computer, connect the pixel blocks labeled 2 into the eyebrow region, place the eyebrow region into the smallest rectangle that can contain it, and generate a 256-color pure eyebrow image, in which the part between the eyebrow region and the rectangle is uniformly set to the average of the colors outside the eyebrow region.
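Steps 5.1 to 5.4 describe a seeded flood fill over the eight-neighborhood followed by a bounding-box crop; a minimal sketch on a 2-D grid of block labels (hypothetical names; the 256-color rendering and mean-color fill of step 5.4 are omitted beyond the bounding box):

```python
from collections import deque

def extract_eyebrow_region(label_grid, seed):
    """Sketch of steps 5.1-5.3: flood-fill from a seed eyebrow block over the
    eight-neighborhood, relabeling every reachable label-1 block to 2, then
    return the relabeled grid and the bounding box of the eyebrow region."""
    rows, cols = len(label_grid), len(label_grid[0])
    grid = [row[:] for row in label_grid]
    sr, sc = seed
    grid[sr][sc] = 2
    A = deque([(sr, sc)])       # step 5.1: A holds blocks still to be expanded
    while A:                    # steps 5.2-5.3: expand until A is empty
        r, c = A.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 1:
                    grid[nr][nc] = 2
                    A.append((nr, nc))
    # step 5.4 (partial): smallest rectangle containing all blocks labeled 2
    cells = [(r, c) for r in range(rows) for c in range(cols) if grid[r][c] == 2]
    r0, r1 = min(r for r, _ in cells), max(r for r, _ in cells)
    c0, c1 = min(c for _, c in cells), max(c for _, c in cells)
    return grid, (r0, c0, r1, c1)
```

Only blocks reachable from a labeled seed survive, so isolated blocks the iteration mislabeled as eyebrow are discarded, which is the point of step 5.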
CN2009100795188A 2009-03-06 2009-03-06 Eyebrow image segmentation method based on semi-supervision learning and Hash index Expired - Fee Related CN101493887B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009100795188A CN101493887B (en) 2009-03-06 2009-03-06 Eyebrow image segmentation method based on semi-supervision learning and Hash index

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2009100795188A CN101493887B (en) 2009-03-06 2009-03-06 Eyebrow image segmentation method based on semi-supervision learning and Hash index

Publications (2)

Publication Number Publication Date
CN101493887A true CN101493887A (en) 2009-07-29
CN101493887B CN101493887B (en) 2012-03-28

Family

ID=40924479

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009100795188A Expired - Fee Related CN101493887B (en) 2009-03-06 2009-03-06 Eyebrow image segmentation method based on semi-supervision learning and Hash index

Country Status (1)

Country Link
CN (1) CN101493887B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101901353A (en) * 2010-07-23 2010-12-01 北京工业大学 Subregion-based matched eyebrow image identifying method
CN102970462A (en) * 2011-08-31 2013-03-13 株式会社东芝 Image processing device and image processing method
CN102982320A (en) * 2012-12-05 2013-03-20 山东神思电子技术股份有限公司 Method for extracting eyebrow outline
CN103400155A (en) * 2013-06-28 2013-11-20 西安交通大学 Pornographic video detection method based on semi-supervised learning of images
CN103942779A (en) * 2014-03-27 2014-07-23 南京邮电大学 Image segmentation method based on combination of graph theory and semi-supervised learning
CN109697746A (en) * 2018-11-26 2019-04-30 深圳艺达文化传媒有限公司 Self-timer video cartoon head portrait stacking method and Related product
CN110309143A (en) * 2018-03-21 2019-10-08 华为技术有限公司 Data similarity determines method, apparatus and processing equipment
CN110322445A (en) * 2019-06-12 2019-10-11 浙江大学 A kind of semantic segmentation method based on maximization prediction and impairment correlations function between label
CN111914604A (en) * 2019-05-10 2020-11-10 丽宝大数据股份有限公司 Augmented reality display method for applying hair color to eyebrow
CN113095148A (en) * 2021-03-16 2021-07-09 深圳市雄帝科技股份有限公司 Method and system for detecting occlusion of eyebrow area, photographing device and storage medium
CN115082709A (en) * 2022-07-21 2022-09-20 济南星睿信息技术有限公司 Remote sensing big data processing method and system and cloud platform

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100365645C (en) * 2005-02-24 2008-01-30 北京工业大学 Identity recognition method based on eyebrow recognition
CN1645406A (en) * 2005-02-24 2005-07-27 北京工业大学 Identity discriminating method based on eyebrow identification
JP4533849B2 (en) * 2006-01-16 2010-09-01 株式会社東芝 Image processing apparatus and image processing program

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101901353A (en) * 2010-07-23 2010-12-01 北京工业大学 Subregion-based matched eyebrow image identifying method
CN101901353B (en) * 2010-07-23 2012-10-31 北京工业大学 Subregion-based matched eyebrow image identifying method
CN102970462A (en) * 2011-08-31 2013-03-13 株式会社东芝 Image processing device and image processing method
CN102982320A (en) * 2012-12-05 2013-03-20 山东神思电子技术股份有限公司 Method for extracting eyebrow outline
CN102982320B (en) * 2012-12-05 2015-07-08 山东神思电子技术股份有限公司 Method for extracting eyebrow outline
CN103400155A (en) * 2013-06-28 2013-11-20 西安交通大学 Pornographic video detection method based on semi-supervised learning of images
CN103942779A (en) * 2014-03-27 2014-07-23 南京邮电大学 Image segmentation method based on combination of graph theory and semi-supervised learning
CN110309143A (en) * 2018-03-21 2019-10-08 华为技术有限公司 Data similarity determines method, apparatus and processing equipment
CN110309143B (en) * 2018-03-21 2021-10-22 华为技术有限公司 Data similarity determination method and device and processing equipment
CN109697746A (en) * 2018-11-26 2019-04-30 深圳艺达文化传媒有限公司 Self-timer video cartoon head portrait stacking method and Related product
CN111914604A (en) * 2019-05-10 2020-11-10 丽宝大数据股份有限公司 Augmented reality display method for applying hair color to eyebrow
CN110322445A (en) * 2019-06-12 2019-10-11 浙江大学 A kind of semantic segmentation method based on maximization prediction and impairment correlations function between label
CN113095148A (en) * 2021-03-16 2021-07-09 深圳市雄帝科技股份有限公司 Method and system for detecting occlusion of eyebrow area, photographing device and storage medium
CN113095148B (en) * 2021-03-16 2022-09-06 深圳市雄帝科技股份有限公司 Method and system for detecting occlusion of eyebrow area, photographing device and storage medium
CN115082709A (en) * 2022-07-21 2022-09-20 济南星睿信息技术有限公司 Remote sensing big data processing method and system and cloud platform
CN115082709B (en) * 2022-07-21 2023-07-07 陕西合友网络科技有限公司 Remote sensing big data processing method, system and cloud platform

Also Published As

Publication number Publication date
CN101493887B (en) 2012-03-28

Similar Documents

Publication Publication Date Title
CN101493887B (en) Eyebrow image segmentation method based on semi-supervision learning and Hash index
Lin et al. Discriminatively trained and-or graph models for object shape detection
CN102938065B (en) Face feature extraction method and face identification method based on large-scale image data
CN104036255B (en) A kind of facial expression recognizing method
CN103246891B (en) A kind of Chinese Sign Language recognition methods based on Kinect
CN105956560A (en) Vehicle model identification method based on pooling multi-scale depth convolution characteristics
CN110738207A (en) character detection method for fusing character area edge information in character image
CN102332034B (en) Portrait picture retrieval method and device
CN109508663A (en) A kind of pedestrian's recognition methods again based on multi-level supervision network
CN107330397A (en) A kind of pedestrian's recognition methods again based on large-spacing relative distance metric learning
CN103258037A (en) Trademark identification searching method for multiple combined contents
CN103824052A (en) Multilevel semantic feature-based face feature extraction method and recognition method
CN109711384A (en) A kind of face identification method based on depth convolutional neural networks
CN105825233B (en) A kind of pedestrian detection method based on on-line study random fern classifier
Shah et al. A novel biomechanics-based approach for person re-identification by generating dense color sift salience features
CN113963032A (en) Twin network structure target tracking method fusing target re-identification
CN105975932A (en) Gait recognition and classification method based on time sequence shapelet
CN107330027A (en) A kind of Weakly supervised depth station caption detection method
CN107103311A (en) A kind of recognition methods of continuous sign language and its device
CN106127112A (en) Data Dimensionality Reduction based on DLLE model and feature understanding method
Kumar et al. A novel method for visually impaired using object recognition
CN105718935A (en) Word frequency histogram calculation method suitable for visual big data
CN104376312A (en) Face recognition method based on word bag compressed sensing feature extraction
CN107146215A (en) A kind of conspicuousness detection method based on color histogram and convex closure
CN103455805B (en) A kind of new face characteristic describes method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120328

Termination date: 20140306