CN101493887B - Eyebrow image segmentation method based on semi-supervision learning and Hash index - Google Patents
- Publication number
- CN101493887B, CN2009100795188A, CN200910079518A
- Authority
- CN
- China
- Prior art keywords
- eyebrow
- pixels
- block
- hash
- vector
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/193—Preprocessing; Feature extraction
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Ophthalmology & Optometry (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an eyebrow image segmentation method based on semi-supervised learning and a Hash index, which comprises the following steps performed in sequence: receiving a user's original eyebrow image; dividing the image into small pixel blocks of equal size; selecting, by computer, a number of pixel blocks inside and outside the eyebrow region and assigning them different labels; representing each pixel block as a vector and computing the similarity between pixel blocks with a locality-sensitive hashing method to obtain a normalized similarity matrix; and labeling the unlabeled pixel blocks with a graph-based semi-supervised learning technique, then extracting the blocks labeled as eyebrow to complete the eyebrow extraction. Because locality-sensitive hashing is used to compute the similarity matrix required by the graph-based semi-supervised learning technique, the method greatly improves the speed of image segmentation.
Description
Technical field
The present invention relates to a method for extracting a pure eyebrow image from an original eyebrow image by combining a graph-based semi-supervised learning technique with a locality-sensitive hashing index, and belongs to the field of electronic information technology.
Background art
In modern society, with the rapid development of computer network technology and the worldwide rise of electronic commerce, information security has become unprecedentedly important, and biometric recognition, as an important aspect of information security, is receiving more and more attention. The biometric technologies currently studied and used mainly include face recognition, iris recognition, fingerprint recognition, hand-shape recognition, palmprint recognition, ear recognition, signature recognition, voice recognition, gait recognition, and so on. As an important feature of the human face, the eyebrow has ubiquity, uniqueness, stability and collectability as a recognition feature. In fact, compared with face images, eyebrow images are not only compact, structurally simple and easy to select, but are also less affected by illumination and expression, giving better stability and robustness to interference; compared with iris images, eyebrow images are easier to capture and more convenient to use. In addition, human eyebrows have diverse shapes and no fixed structure, so they carry good identity specificity and can be effectively applied to identity verification.
To perform recognition with eyebrows, an important step is eyebrow image segmentation. Image segmentation has always been a primary problem in image analysis and pattern recognition and one of the classic problems of image processing; it is an important component of image analysis and pattern recognition systems and determines the final quality of image analysis and the discrimination result of pattern recognition. Although many automatic image segmentation methods exist, including thresholding, clustering and edge detection, their segmentation results are often unsatisfactory. Compared with these automatic methods, semi-automatic image segmentation techniques are more practical and have therefore received more attention. Semi-automatic or interactive image segmentation refers to segmentation aided by human input; representative work includes graph cuts, random-walk search and Lazy Snapping. In essence, these methods can be placed within the framework of semi-supervised learning.
The semi-supervised learning technique is a learning technique between supervised and unsupervised learning, in which only part of the given data carries class labels. For example, let χ = {x_1, ..., x_l, x_{l+1}, ..., x_n} be a given point set whose first l points carry class labels {y_1, ..., y_l} with y_i ∈ {1, ..., c} (i = 1, ..., l), while the remaining points are unlabeled; the goal of semi-supervised learning is to label the unlabeled points of χ according to some optimality criterion. So far, the main implementations of semi-supervised learning are generative models, self-training, co-training and graph-based methods. The object of the present invention is to provide an eyebrow image segmentation technique based on graph-based semi-supervised learning. In general, eyebrow image segmentation is more difficult than ordinary image segmentation because of the influence of hair, illumination, posture and expression: automatic segmentation methods can hardly extract the eyebrow image correctly, and other eyebrow segmentation methods such as principal component analysis and template matching can roughly locate the eyebrow but cannot accurately delimit its boundary.
Summary of the invention
The purpose of the present invention is to provide an eyebrow image segmentation method that combines a graph-based semi-supervised learning technique with a locality-sensitive hashing method.
The present invention is realized by the following technical means.
The present invention comprises the following steps, performed in sequence:
Step 1: accept the user's original eyebrow image and divide it into small pixel blocks of equal size s × s; the value of s can be chosen according to speed and accuracy requirements as s = 2, 3, 4, ..., 10.
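As an illustration of this block partition, the following minimal Python sketch divides an RGB image into s × s blocks keyed by block coordinates; the array layout and the helper name split_into_blocks are assumptions for illustration, not part of the patent.

```python
import numpy as np

def split_into_blocks(image, s=7):
    """Partition an H x W x 3 RGB image into s x s pixel blocks keyed by block coordinates."""
    h, w, _ = image.shape
    return {(bi, bj): image[bi * s:(bi + 1) * s, bj * s:(bj + 1) * s]
            for bi in range(h // s) for bj in range(w // s)}
```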
Step 2: select, via the computer, a number of eyebrow points and non-eyebrow points in the original eyebrow image; each pixel block is labeled according to how many of the selected eyebrow and non-eyebrow points it contains: if it contains more eyebrow points than non-eyebrow points its label is 1, otherwise 0; if it contains no selected eyebrow or non-eyebrow point, the block is unlabeled.
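A sketch of this labeling rule, assuming the user-marked points are given as (x, y) pixel coordinates and the blocks are keyed as in the previous sketch; the helper name label_blocks is again an illustrative assumption.

```python
def label_blocks(eyebrow_pts, non_eyebrow_pts, image_shape, s=7):
    """Label 1 if a block holds more eyebrow points than non-eyebrow points, 0 if fewer
    or equal (but at least one marked point), and None (unlabeled) if it holds none."""
    h, w = image_shape[0], image_shape[1]
    counts = {}
    for (x, y) in eyebrow_pts:
        counts.setdefault((y // s, x // s), [0, 0])[0] += 1
    for (x, y) in non_eyebrow_pts:
        counts.setdefault((y // s, x // s), [0, 0])[1] += 1
    labels = {}
    for bi in range(h // s):
        for bj in range(w // s):
            e, n = counts.get((bi, bj), (0, 0))
            labels[(bi, bj)] = None if (e == 0 and n == 0) else int(e > n)
    return labels
```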
Step 3: represent every pixel block as a vector, for example a five-dimensional vector (r, g, b, x, y), where r, g and b are the mean RGB values of the block and x, y are the coordinates of the block's center relative to the upper-left corner; denote by X the set of vectors corresponding to all pixel blocks and by L the labeled subset. Compute the similarity between pixel blocks with the locality-sensitive hashing method to generate the similarity matrix W, and normalize it to obtain S. The concrete steps are as follows (see the code sketch after step 3.5):
Step 3.1: let d be the dimension of the pixel-block vectors; let R be the separation threshold, meaning that pixel blocks whose Euclidean distance is less than R will hash to the same position in the hash table with high probability; let 1 - δ be the desired probability that blocks within distance R hash to the same position; and let w be the quantization width of the hash table, so that, for example with w = 4, projection values h_1 and h_2 satisfying 0 ≤ h_1 ≤ 4 and 0 ≤ h_2 ≤ 4 are considered to have the same hash value;
Step 3.2: estimate the parameters k and l so that the query time is minimized;
Step 3.2.1: via the computer, arbitrarily select a fixed number of vectors from X to form two new sets X_t and X_q;
Step 3.2.2: fix k to a constant, for example k = 16; l is the value of log δ / log(1 - p1^k) rounded up, where p1 is the probability that two vectors at distance R receive the same hash value under a single hash function;
Step 3.2.3: via the computer, generate l composite vectors of dimension k, c_i = (c_i1, c_i2, ..., c_ik) (1 ≤ i ≤ l), where each c_ij (1 ≤ i ≤ l, 1 ≤ j ≤ k) is a d-dimensional vector whose components c_ijz (1 ≤ z ≤ d) are real numbers drawn from the standard normal distribution; denote by C the set of these l composite vectors. Also generate l real numbers b_i (1 ≤ i ≤ l), each drawn from the uniform distribution U(0, w);
Step 3.2.4: for each vector x_t in X_t, let
p_ij = (c_ij · x_t + b_i) / w (1 ≤ i ≤ l, 1 ≤ j ≤ k)
where · denotes the dot product of vectors. Writing p_i = (p_i1, p_i2, ..., p_ik) (1 ≤ i ≤ l), the l hash keys of x_t are obtained by combining the floored components ⌊p_ij⌋ of each p_i with coefficients a_u (1 ≤ u ≤ k) drawn from the uniform distribution U(0, hashsize), where hashsize is the length of the hash table H and is generally taken as the number of vectors in X_t. The hash table H consists of an index from each hash value to its corresponding bucket, and each bucket consists of the vectors of X_t that share that hash value;
Step 3.2.5: according to the l hash keys of x_t, insert x_t in turn into the corresponding bucket of H for each key;
Step 3.2.6: for each vector x_q in X_q: execute step 3.2.4 to compute the l hash keys of x_q, and let U_q be the time this takes; according to the l hash keys of x_q, look up the corresponding buckets B_q in the hash table H, let T_q be the total number of vectors in all the buckets B_q, and let V_q be the total lookup time; set u_q = U_q/(kl) and v_q = V_q/l; compute the Euclidean distance between x_q and every vector in the buckets B_q, let G_q be the time this takes, and set g_q = G_q/T_q;
Step 3.2.7: from the per-query quantities u_q, v_q and g_q, compute the aggregate values u, v and g over the n vectors of the set X_q;
Step 3.2.8: using the values of u, v and g, estimate a new value of k that satisfies the query-time-minimizing condition, in which l is the value of log δ / log(1 - p1^k) rounded up and dist denotes the Euclidean distance between x_t and x_q;
Step 3.2.9: according to the new value of k, compute the new l as the value of log δ / log(1 - p1^k) rounded up;
Step 3.3: set X_t = X and execute steps 3.2.3 to 3.2.5 to regenerate the hash table H;
Step 3.4: traverse the whole hash table H; for any two vectors x_i and x_j in X that share a hash key, define the similarity between x_i and x_j as
w_ij = exp(-||x_i - x_j||² / 2σ²)
where σ is a constant; otherwise define the similarity between x_i and x_j as 0. This yields the similarity matrix W;
Step 3.5: normalize the similarity matrix W by letting S = D^(-1/2) W D^(-1/2), where D is the diagonal matrix whose entry D_ii is the sum of the i-th row of W.
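As a concrete illustration of steps 3.2.3-3.5, the following Python sketch builds the l hash tables, derives the similarity matrix W from bucket collisions, and normalizes it to S. It is a minimal sketch under stated assumptions rather than the patented implementation: the exact combination of the floored projections into a hash key and the normalization S = D^(-1/2) W D^(-1/2) are assumptions (those formulas appear only as figures in the original), p1 is assumed to be supplied, the helper names are invented for illustration, and the parameter re-estimation of steps 3.2.6-3.2.9 is omitted.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def num_tables(delta, p1, k):
    # l = ceil(log(delta) / log(1 - p1^k)), as in steps 3.2.2 and 3.2.9;
    # p1 (the single-hash collision probability at distance R) is assumed given.
    return math.ceil(math.log(delta) / math.log(1.0 - p1 ** k))

def build_lsh_tables(X, k, l, w):
    """X: (n, d) array of block vectors. Builds the l hash tables of steps 3.2.3-3.2.5."""
    n, d = X.shape
    C = rng.standard_normal((l, k, d))        # c_ij ~ N(0, 1), one d-vector per (i, j)
    b = rng.uniform(0.0, w, size=(l, 1))      # b_i ~ U(0, w)
    hashsize = n                              # table length = number of vectors
    a = rng.integers(0, hashsize, size=k)     # coefficients a_u (assumed integer-valued)
    tables = [dict() for _ in range(l)]
    for t in range(n):
        P = (C @ X[t] + b) / w                # p_ij = (c_ij . x + b_i) / w, shape (l, k)
        keys = (np.floor(P).astype(np.int64) @ a) % hashsize   # assumed key combination
        for i in range(l):
            tables[i].setdefault(int(keys[i]), []).append(t)
    return tables

def similarity_matrix(X, tables, sigma):
    """Steps 3.4-3.5: Gaussian similarity for vectors sharing a bucket, then normalize."""
    n = len(X)
    W = np.zeros((n, n))
    for table in tables:
        for bucket in table.values():
            for i in bucket:
                for j in bucket:
                    if i != j:
                        W[i, j] = np.exp(-np.sum((X[i] - X[j]) ** 2) / (2.0 * sigma ** 2))
    deg = W.sum(axis=1)
    inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(deg), 0.0)
    S = inv_sqrt[:, None] * W * inv_sqrt[None, :]   # S = D^(-1/2) W D^(-1/2)
    return W, S
```

With the embodiment's value δ = 0.3, k = 16 and an assumed p1 = 0.9, num_tables(0.3, 0.9, 16) evaluates to 6.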
Step 4: using the normalized similarity matrix S obtained in step 3, perform iterative computation with the graph-based semi-supervised learning technique to assign labels to the unlabeled pixel blocks, as follows (see the sketch after step 4.3):
Step 4.1: construct the initial state matrix Y of size n × 2, where n is the total number of vectors in X, labeled and unlabeled; if x_i ∈ L is a labeled eyebrow block, set Y_i1 = 1 and Y_i2 = 0; if x_i ∈ L is a labeled non-eyebrow block, set Y_i2 = 1 and Y_i1 = 0; otherwise set Y_i1 = Y_i2 = 0;
Step 4.2: iterate F(t+1) = αSF(t) + (1 - α)Y until convergence, where F(0) = Y and α is a constant between 0 and 1;
Step 4.3: let F* be the result of the iteration; the label of each unlabeled vector x_i is then determined by the largest component of F*_i. That is, if F*_i1 > F*_i2, the pixel block corresponding to this vector belongs to the eyebrow region and its label is set to 1; otherwise the block belongs to the non-eyebrow region and its label is set to 0.
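A minimal sketch of the iteration of steps 4.1-4.3, i.e. the update F(t+1) = αSF(t) + (1 - α)Y followed by the read-out of labels from the larger component of each row; the function name and the convergence tolerance are illustrative assumptions.

```python
import numpy as np

def propagate_labels(S, Y, alpha=0.9, tol=1e-6, max_iter=1000):
    """S: normalized similarity matrix (n x n); Y: initial state matrix (n x 2)."""
    F = Y.astype(float).copy()
    for _ in range(max_iter):
        F_next = alpha * (S @ F) + (1.0 - alpha) * Y
        if np.max(np.abs(F_next - F)) < tol:
            F = F_next
            break
        F = F_next
    # step 4.3: a block is eyebrow (label 1) when the first component dominates
    return (F[:, 0] > F[:, 1]).astype(int)
```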
Step 5: according to the result of the iteration, extract from the original eyebrow image the pixel blocks that are connected to the originally labeled blocks with label 1 and whose iteration result is eyebrow, completing the eyebrow extraction (a sketch of the region growing of steps 5.1-5.3 follows step 5.4):
Step 5.1: let A be an empty set; arbitrarily take a vector o ∈ L corresponding to a labeled eyebrow pixel block, add o to A, and change the label of o to 2;
Step 5.2: arbitrarily take a vector a from A and set A = A - {a}; starting from the pixel block corresponding to a, find all pixel blocks with label 1 in its eight-neighborhood, add them to A, and change their labels to 2;
Step 5.3: repeat step 5.2 until A is empty;
Step 5.4: join the pixel blocks with label 2 into the eyebrow region, and via the computer place the eyebrow region into the smallest rectangle that can contain it to generate a pure 256-color eyebrow image; the part of the rectangle outside the eyebrow region is uniformly set to the average color outside the eyebrow region.
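A sketch of the connected-region extraction of steps 5.1-5.3 over eight-connected neighbours, operating on the block-coordinate labels of the earlier sketches; the rendering of step 5.4 (placing the region into its bounding rectangle and filling the remainder with the average outside color) is omitted.

```python
def grow_eyebrow_region(block_labels, seed):
    """block_labels: dict (bi, bj) -> 0/1 label after propagation; seed: a user-marked
    eyebrow block. Returns the block coordinates relabelled 2, i.e. the eyebrow region."""
    labels = dict(block_labels)
    labels[seed] = 2                     # step 5.1
    frontier = [seed]
    while frontier:                      # steps 5.2-5.3
        bi, bj = frontier.pop()
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                nb = (bi + di, bj + dj)
                if (di, dj) != (0, 0) and labels.get(nb) == 1:
                    labels[nb] = 2
                    frontier.append(nb)
    return [blk for blk, lab in labels.items() if lab == 2]
```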
The basic principle of the present invention is that whether a pixel block belongs to the eyebrow region can be judged from its neighbors: by iteration, the label information of each pixel block (whether or not it belongs to the eyebrow region) is propagated to its neighbors until a globally stable state is reached.
Compared with the prior art, the present invention has the following notable advantages and beneficial effects:
Because prior information is used, the segmentation quality is better than that of automatic segmentation methods; in addition, because the locality-sensitive hashing method is used to compute the similarity matrix required by the graph-based semi-supervised method, the segmentation speed is much higher.
The experimental results of the embodiment show that the present invention can segment eyebrow images in practical applications. In one concrete experiment on 40 original eyebrow images from 5 people, the present invention correctly cut the eyebrow out of the background of every original image. This segmentation quality was obtained under ordinary indoor natural illumination, without high requirements on image quality, so the present invention can be considered to have high practical value. In fact, because hair, eyelashes and eyebrows are all hair, automatic segmentation methods can hardly separate the eyebrow from the background correctly. Interactive image segmentation methods more similar to the present invention include Microsoft's Lazy Snapping and the LNP (Linear Neighborhood Propagation) method proposed by Fei Wang et al. at CVPR'06; like the present invention, both complete the segmentation by drawing a few lines or points on the original image to mark background and foreground. Compared with Lazy Snapping, segmenting an eyebrow image with Lazy Snapping requires specially marking regions such as eyelashes and hair whose color is close to that of the eyebrow, which the present invention does not need; moreover, because the boundary of the eyebrow region is fine and complex, Lazy Snapping is less accurate than the present invention on some boundaries. Compared with LNP, LNP runs more slowly, while the present invention is much faster because locality-sensitive hashing is used to compute the similarity matrix.
The present invention has important application value in many fields. For example, in eyebrow recognition it can be used for early preprocessing to extract pure eyebrow images and build an eyebrow database; it can also serve as a plug-in for image processing software, where drawing a few simple strokes on the original image is enough to extract the desired object from the background.
Description of drawings
Fig. 1 is a schematic flow chart of the present invention;
Fig. 2 is a schematic diagram of an original eyebrow image;
Fig. 3 is a schematic diagram of labeling on the original eyebrow image;
Fig. 4 is a schematic diagram of a pure eyebrow image;
Fig. 5 shows an actual original eyebrow image;
Fig. 6 shows the labeling on an actual original eyebrow image;
Fig. 7 shows the resulting pure eyebrow image.
Embodiment
An embodiment of the invention is deployed according to Fig. 1. When implemented, the present invention requires a digital image acquisition device such as a digital camera or digital video camera and an ordinary desktop computer with general image processing capability. The concrete implementation is as follows:
Step 1: a digital image acquisition device is assembled from a CG300 image capture card, a Panasonic CP240 camera and a 75 mm high-precision imported Japanese lens; the computer is a DELL GX620. The original eyebrow image is captured under ordinary illumination and loaded into the computer, which converts it into an RGB color image and divides the eyebrow image into small pixel blocks of equal size, each 7 × 7;
Step 2: the original eyebrow image is displayed on the computer screen, as shown in Fig. 2, and points in some eyebrow regions and some non-eyebrow regions are marked on the image with the mouse; Fig. 3 is an example of such marking, where the outer black line (red in the color image) marks the non-eyebrow region and the white line inside the eyebrow (green in the color image) marks the eyebrow region. Each pixel block is labeled according to how many of the marked eyebrow and non-eyebrow points it contains: if it contains more eyebrow points than non-eyebrow points its label is 1, otherwise 0; if it contains no marked eyebrow or non-eyebrow point, the block is unlabeled.
Step 3: every pixel block is represented by a five-dimensional vector (r, g, b, x, y), where r, g and b are the mean RGB values of the block and x, y are the coordinates of the block's center relative to the upper-left corner; the set of the five-dimensional vectors of all pixel blocks is denoted X, and the labeled vectors form the set L. The similarity between pixel blocks is then computed with the locality-sensitive hashing method to generate the similarity matrix W, which is normalized to S, as follows:
Step 3.1: set d = 5, w = 4, δ = 0.3, R = 20;
Step 3.2: estimate the parameters k and l;
Step 3.2.1: via the computer, arbitrarily select 1000 and 100 vectors from X to form the new sets X_t and X_q respectively;
Step 3.2.2: fix k = 16; l is the value of log δ / log(1 - p1^k) rounded up, where p1 is the probability that two vectors at distance R receive the same hash value under a single hash function;
Step 3.2.3: via the computer, generate l composite vectors of dimension k, c_i = (c_i1, c_i2, ..., c_ik) (1 ≤ i ≤ l), where each c_ij (1 ≤ i ≤ l, 1 ≤ j ≤ k) is a d-dimensional vector whose components c_ijz (1 ≤ z ≤ d) are real numbers drawn from the standard normal distribution; denote by C the set of these l composite vectors; also generate l real numbers b_i (1 ≤ i ≤ l), each drawn from the uniform distribution U(0, w);
Step 3.2.4: for each vector x_t in X_t, let
p_ij = (c_ij · x_t + b_i) / w (1 ≤ i ≤ l, 1 ≤ j ≤ k)
where · denotes the dot product of vectors. Writing p_i = (p_i1, p_i2, ..., p_ik) (1 ≤ i ≤ l), the l hash keys of x_t are obtained by combining the floored components ⌊p_ij⌋ of each p_i with coefficients a_u (1 ≤ u ≤ k) drawn from the uniform distribution U(0, hashsize), where hashsize is the length of the hash table H and is generally taken as the number of vectors in X_t. The hash table H consists of an index from each hash value to its corresponding bucket, and each bucket consists of the vectors of X_t that share that hash value;
Step 3.2.5: according to the l hash keys of x_t, insert x_t in turn into the corresponding bucket of H for each key;
Step 3.2.6: for each vector x_q in X_q: execute step 3.2.4 to compute the l hash keys of x_q, and let U_q be the time this takes; according to the l hash keys of x_q, look up the corresponding buckets B_q in the hash table H, let T_q be the total number of vectors in all the buckets B_q, and let V_q be the total lookup time; set u_q = U_q/(kl) and v_q = V_q/l; compute the Euclidean distance between x_q and every vector in the buckets B_q, let G_q be the time this takes, and set g_q = G_q/T_q;
Step 3.2.7: from the per-query quantities u_q, v_q and g_q, compute the aggregate values u, v and g over the n vectors of the set X_q;
Step 3.2.8: using the values of u, v and g, estimate a new value of k that satisfies the query-time-minimizing condition, in which l is the value of log δ / log(1 - p1^k) rounded up and dist denotes the Euclidean distance between x_t and x_q;
Step 3.2.9: according to the new value of k, compute the new l as the value of log δ / log(1 - p1^k) rounded up;
Step 3.3: set X_t = X and execute steps 3.2.3 to 3.2.5 to regenerate the hash table H;
Step 3.4: for any two vectors x_i and x_j in X that share a hash key, define the similarity between x_i and x_j as
w_ij = exp(-||x_i - x_j||² / 2σ²)
where σ = 100 is a constant; otherwise define the similarity between x_i and x_j as 0. This yields the similarity matrix W;
Step 3.5: normalize the similarity matrix W by letting S = D^(-1/2) W D^(-1/2), where D is the diagonal matrix whose entry D_ii is the sum of the i-th row of W;
Step 4: using the normalized similarity matrix S obtained in step 3, perform iterative computation with the graph-based semi-supervised learning technique to assign labels to the unlabeled pixel blocks, as follows:
Step 4.1: construct the initial state matrix Y of size n × 2 used in the graph-based semi-supervised learning technique, where n is the total number of vectors in X, labeled and unlabeled; if x_i ∈ L is a labeled eyebrow block, set Y_i1 = 1 and Y_i2 = 0; if x_i ∈ L is a labeled non-eyebrow block, set Y_i2 = 1 and Y_i1 = 0; otherwise set Y_i1 = Y_i2 = 0;
Step 4.2: iterate F(t+1) = αSF(t) + (1 - α)Y until convergence, where F(0) = Y and α = 0.9;
Step 4.3: let F* be the result of the iteration; the label of each unlabeled vector x_i is determined by the largest component of F*_i. That is, if F*_i1 > F*_i2, the pixel block corresponding to this vector belongs to the eyebrow region and its label is set to 1; otherwise the block belongs to the non-eyebrow region and its label is set to 0;
Step 5: according to the result of the iteration, extract from the original eyebrow image the pixel blocks that are connected to the originally labeled blocks with label 1 and whose iteration result is eyebrow, completing the eyebrow extraction (an end-to-end code sketch with the embodiment's parameters follows step 5.4):
Step 5.1: let A be an empty set; arbitrarily take a vector o ∈ L corresponding to a labeled eyebrow pixel block, add o to A, and change the label of o to 2;
Step 5.2: arbitrarily take a vector a from A and set A = A - {a}; starting from the pixel block corresponding to a, find all pixel blocks with label 1 in its eight-neighborhood, add them to A, and change their labels to 2;
Step 5.3: repeat step 5.2 until A is empty;
Step 5.4: join the pixel blocks with label 2 into the eyebrow region, and via the computer place the eyebrow region into the smallest rectangle that can contain it to generate a pure 256-color eyebrow image; the part of the rectangle outside the eyebrow region is uniformly set to the average color outside the eyebrow region.
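For reference, the sketches given earlier can be chained with the embodiment's parameter values (s = 7, d = 5, w = 4, δ = 0.3, R = 20, initial k = 16, σ = 100, α = 0.9); the image, the marked points and the value of p1 below are placeholders, and the parameter re-estimation of steps 3.2.6-3.2.9 is again skipped.

```python
import numpy as np

image = np.zeros((210, 280, 3), dtype=np.uint8)           # placeholder original eyebrow image
eyebrow_pts = [(140, 100), (150, 102)]                     # placeholder user-marked points
non_eyebrow_pts = [(10, 10), (260, 20)]

s = 7
blocks = split_into_blocks(image, s=s)
marks = label_blocks(eyebrow_pts, non_eyebrow_pts, image.shape, s=s)
keys = sorted(blocks)
X = np.array([list(blocks[k].reshape(-1, 3).mean(axis=0)) +
              [k[1] * s + s // 2, k[0] * s + s // 2]       # (r, g, b, x, y) per block
              for k in keys], dtype=float)

l = num_tables(delta=0.3, p1=0.9, k=16)                    # p1 = 0.9 is an assumed value
tables = build_lsh_tables(X, k=16, l=l, w=4)
W, S = similarity_matrix(X, tables, sigma=100.0)

Y = np.zeros((len(keys), 2))
for idx, k in enumerate(keys):
    if marks[k] == 1:
        Y[idx, 0] = 1.0
    elif marks[k] == 0:
        Y[idx, 1] = 1.0

labels = propagate_labels(S, Y, alpha=0.9)
block_labels = {k: int(labels[idx]) for idx, k in enumerate(keys)}
seed = next(k for k in keys if marks[k] == 1)              # a user-marked eyebrow block
eyebrow_blocks = grow_eyebrow_region(block_labels, seed)
```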
Finally, it should be noted that the above embodiment is intended only to illustrate and not to limit the technical solution of the present invention. Although the present invention has been described in detail with reference to the above embodiment, those of ordinary skill in the art should understand that the invention may still be modified or equivalently substituted, and any technical solution or improvement that does not depart from the spirit and scope of the invention should be covered by the scope of the claims of the present invention.
Claims (4)
1. An eyebrow extraction method based on semi-supervised learning and hash index, characterized by comprising the following steps in sequence:
Step 1: accept the user's original eyebrow image and divide it into small pixel blocks of equal size s × s, where the value of s can be chosen according to speed and accuracy requirements as s = 2, 3, 4, ..., 10;
Step 2: select, via the computer, eyebrow points and non-eyebrow points in the original eyebrow image; each pixel block is labeled according to how many of the selected eyebrow and non-eyebrow points it contains: if it contains more eyebrow points than non-eyebrow points its label is 1, otherwise 0; if it contains no selected eyebrow or non-eyebrow point, the block is unlabeled;
Step 3: represent every pixel block as a vector; denote by X the set of vectors corresponding to all pixel blocks and by L the labeled subset; compute the similarity between pixel blocks with the locality-sensitive hashing method to generate the similarity matrix W, and normalize it to obtain S;
Step 4: on the basis of the normalized similarity matrix S, perform iterative computation with the graph-based semi-supervised learning technique to assign labels to the unlabeled pixel blocks;
Step 5: according to the result of the iteration, extract from the original eyebrow image the pixel blocks that are connected to the originally labeled blocks with label 1 and whose iteration result is eyebrow, completing the eyebrow extraction.
2. The eyebrow extraction method based on semi-supervised learning and hash index according to claim 1, characterized in that said step 3 comprises:
Step 3.1: let d be the dimension of the pixel-block vectors; let R be the separation threshold, meaning that pixel blocks whose Euclidean distance is less than R will hash to the same position in the hash table with high probability; let 1 - δ be the desired probability that blocks within distance R hash to the same position; and let w be the quantization width of the hash table;
Step 3.2: estimate the parameters k and l by the following method so that the query time of the hash table is minimized;
Step 3.2.1: via the computer, arbitrarily select a fixed number of vectors from X to form two new sets X_t and X_q;
Step 3.2.2: fix k to a constant; l is the value of log δ / log(1 - p1^k) rounded up, where p1 is the probability that two vectors at distance R receive the same hash value under a single hash function;
Step 3.2.3: via the computer, generate l composite vectors of dimension k, c_i = (c_i1, c_i2, ..., c_ik) (1 ≤ i ≤ l), where each c_ij (1 ≤ i ≤ l, 1 ≤ j ≤ k) is a d-dimensional vector whose components c_ijz (1 ≤ z ≤ d) are real numbers drawn from the standard normal distribution; denote by C the set of these l composite vectors; also generate l real numbers b_i (1 ≤ i ≤ l), each drawn from the uniform distribution U(0, w);
Step 3.2.4: for each vector x_t in X_t, let
p_ij = (c_ij · x_t + b_i) / w (1 ≤ i ≤ l, 1 ≤ j ≤ k)
where · denotes the dot product of vectors; writing p_i = (p_i1, p_i2, ..., p_ik) (1 ≤ i ≤ l), the l hash keys of x_t are obtained by combining the floored components ⌊p_ij⌋ of each p_i with coefficients a_u (1 ≤ u ≤ k) drawn from the uniform distribution U(0, hashsize), where hashsize is the length of the hash table H and is generally taken as the number of vectors in X_t; the hash table H consists of an index from each hash value to its corresponding bucket, and each bucket consists of the vectors of X_t that share that hash value;
Step 3.2.5: according to the l hash keys of x_t, insert x_t in turn into the corresponding bucket of H for each key;
Step 3.2.6: for each vector x_q in X_q: execute step 3.2.4 to compute the l hash keys of x_q, and let U_q be the time this takes; according to the l hash keys of x_q, look up the corresponding buckets B_q in the hash table H, let T_q be the total number of vectors in all the buckets B_q, and let V_q be the total lookup time; set u_q = U_q/(kl) and v_q = V_q/l; compute the Euclidean distance between x_q and every vector in the buckets B_q, let G_q be the time this takes, and set g_q = G_q/T_q;
Step 3.2.7: from the per-query quantities u_q, v_q and g_q, compute the aggregate values u, v and g over the n vectors of the set X_q;
Step 3.2.8: using the values of u, v and g, estimate a new value of k that satisfies the query-time-minimizing condition, in which l is the value of log δ / log(1 - p1^k) rounded up and dist denotes the Euclidean distance between x_t and x_q;
Step 3.2.9: according to the new value of k, compute the new l as the value of log δ / log(1 - p1^k) rounded up;
Step 3.3: set X_t = X and execute steps 3.2.3 to 3.2.5 to regenerate the hash table H;
Step 3.4: traverse the whole hash table H; for any two vectors x_i and x_j in X that share a hash key, define the similarity between x_i and x_j as
w_ij = exp(-||x_i - x_j||² / 2σ²)
where σ is a constant; otherwise define the similarity between x_i and x_j as 0; this yields the similarity matrix W;
Step 3.5: normalize the similarity matrix W, letting S = D^(-1/2) W D^(-1/2), where D is the diagonal matrix whose entry D_ii is the sum of the i-th row of W.
3. The eyebrow extraction method based on semi-supervised learning and hash index according to claim 1, characterized in that said step 4 comprises:
Step 4.1: construct the initial state matrix Y of size n × 2, where n is the total number of vectors in X, labeled and unlabeled; if x_i ∈ L is a labeled eyebrow block, set Y_i1 = 1 and Y_i2 = 0; if x_i ∈ L is a labeled non-eyebrow block, set Y_i2 = 1 and Y_i1 = 0; otherwise set Y_i1 = Y_i2 = 0;
Step 4.2: iterate F(t+1) = αSF(t) + (1 - α)Y until convergence, where F(0) = Y and α is a constant between 0 and 1;
Step 4.3: let F* be the result of the iteration; the label of each unlabeled vector x_i is determined by the largest component of F*_i, that is, if F*_i1 > F*_i2 the pixel block corresponding to this vector belongs to the eyebrow region and its label is set to 1; otherwise the block belongs to the non-eyebrow region and its label is set to 0.
4. The eyebrow extraction method based on semi-supervised learning and hash index according to claim 1, characterized in that said step 5 comprises:
Step 5.1: let A be an empty set; arbitrarily take a vector o ∈ L corresponding to a labeled eyebrow pixel block, add o to A, and change the label of o to 2;
Step 5.2: arbitrarily take a vector a from A and set A = A - {a}; starting from the pixel block corresponding to a, find all pixel blocks with label 1 in its eight-neighborhood, add them to A, and change their labels to 2;
Step 5.3: repeat step 5.2 until A is empty;
Step 5.4: join the pixel blocks with label 2 into the eyebrow region, and via the computer place the eyebrow region into the smallest rectangle that can contain it to generate a pure 256-color eyebrow image, with the part of the rectangle outside the eyebrow region uniformly set to the average color outside the eyebrow region.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2009100795188A CN101493887B (en) | 2009-03-06 | 2009-03-06 | Eyebrow image segmentation method based on semi-supervision learning and Hash index |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2009100795188A CN101493887B (en) | 2009-03-06 | 2009-03-06 | Eyebrow image segmentation method based on semi-supervision learning and Hash index |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101493887A CN101493887A (en) | 2009-07-29 |
CN101493887B true CN101493887B (en) | 2012-03-28 |
Family
ID=40924479
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2009100795188A Expired - Fee Related CN101493887B (en) | 2009-03-06 | 2009-03-06 | Eyebrow image segmentation method based on semi-supervision learning and Hash index |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101493887B (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101901353B (en) * | 2010-07-23 | 2012-10-31 | 北京工业大学 | Subregion-based matched eyebrow image identifying method |
JP5060643B1 (en) * | 2011-08-31 | 2012-10-31 | 株式会社東芝 | Image processing apparatus and image processing method |
CN102982320B (en) * | 2012-12-05 | 2015-07-08 | 山东神思电子技术股份有限公司 | Method for extracting eyebrow outline |
CN103400155A (en) * | 2013-06-28 | 2013-11-20 | 西安交通大学 | Pornographic video detection method based on semi-supervised learning of images |
CN103942779A (en) * | 2014-03-27 | 2014-07-23 | 南京邮电大学 | Image segmentation method based on combination of graph theory and semi-supervised learning |
CN110309143B (en) * | 2018-03-21 | 2021-10-22 | 华为技术有限公司 | Data similarity determination method and device and processing equipment |
CN109697746A (en) * | 2018-11-26 | 2019-04-30 | 深圳艺达文化传媒有限公司 | Self-timer video cartoon head portrait stacking method and Related product |
CN111914604A (en) * | 2019-05-10 | 2020-11-10 | 丽宝大数据股份有限公司 | Augmented reality display method for applying hair color to eyebrow |
CN110322445B (en) * | 2019-06-12 | 2021-06-22 | 浙江大学 | Semantic segmentation method based on maximum prediction and inter-label correlation loss function |
CN113095148B (en) * | 2021-03-16 | 2022-09-06 | 深圳市雄帝科技股份有限公司 | Method and system for detecting occlusion of eyebrow area, photographing device and storage medium |
CN115082709B (en) * | 2022-07-21 | 2023-07-07 | 陕西合友网络科技有限公司 | Remote sensing big data processing method, system and cloud platform |
- 2009
  - 2009-03-06: CN CN2009100795188A patent/CN101493887B/en, not active (Expired - Fee Related)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1645406A (en) * | 2005-02-24 | 2005-07-27 | 北京工业大学 | Identity discriminating method based on eyebrow identification |
CN1801180A (en) * | 2005-02-24 | 2006-07-12 | 北京工业大学 | Identity recognition method based on eyebrow recognition |
JP2007188407A (en) * | 2006-01-16 | 2007-07-26 | Toshiba Corp | Image processing device and image processing program |
Also Published As
Publication number | Publication date |
---|---|
CN101493887A (en) | 2009-07-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101493887B (en) | Eyebrow image segmentation method based on semi-supervision learning and Hash index | |
Zhang et al. | Sketch-based image retrieval by salient contour reinforcement | |
CN103246891B (en) | A kind of Chinese Sign Language recognition methods based on Kinect | |
CN102332034B (en) | Portrait picture retrieval method and device | |
CN102945289B (en) | Based on the image search method of CGCI-SIFT local feature | |
CN102938065B (en) | Face feature extraction method and face identification method based on large-scale image data | |
Joo et al. | Human attribute recognition by rich appearance dictionary | |
CN109508663A (en) | A kind of pedestrian's recognition methods again based on multi-level supervision network | |
CN105956560A (en) | Vehicle model identification method based on pooling multi-scale depth convolution characteristics | |
CN103824052A (en) | Multilevel semantic feature-based face feature extraction method and recognition method | |
CN113963032A (en) | Twin network structure target tracking method fusing target re-identification | |
Rao et al. | Sign Language Recognition System Simulated for Video Captured with Smart Phone Front Camera. | |
CN103366160A (en) | Objectionable image distinguishing method integrating skin color, face and sensitive position detection | |
CN105975932A (en) | Gait recognition and classification method based on time sequence shapelet | |
CN108268814A (en) | A kind of face identification method and device based on the fusion of global and local feature Fuzzy | |
CN106127112A (en) | Data Dimensionality Reduction based on DLLE model and feature understanding method | |
CN107315984B (en) | Pedestrian retrieval method and device | |
Kumar et al. | A novel method for visually impaired using object recognition | |
CN104008372A (en) | Distributed face recognition method in wireless multi-media sensor network | |
CN105718935A (en) | Word frequency histogram calculation method suitable for visual big data | |
CN114445691A (en) | Model training method and device, electronic equipment and storage medium | |
Youlian et al. | Face detection method using template feature and skin color feature in rgb color space | |
Srininvas et al. | A framework to recognize the sign language system for deaf and dumb using mining techniques | |
CN107146215A (en) | A kind of conspicuousness detection method based on color histogram and convex closure | |
CN117058736A (en) | Facial false detection recognition method, device, medium and equipment based on key point detection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
C17 | Cessation of patent right | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20120328 Termination date: 20140306 |