CN101930543B - Method for adjusting eye image in self-photographed video - Google Patents
Method for adjusting eye image in self-photographed video
- Publication number: CN101930543B
- Legal status: Expired - Fee Related
Abstract
The invention discloses a method for adjusting an eye image in a self-photographed video, which comprises the following steps: 1, detection and positioning of the target eye image: detect and locate the position of the eyes in the video image; 2, identification and positioning of the sclera image, iris image and pupil image within the eye image: distinguish the sclera image from the iris image by gray level, distinguish the iris image from the pupil image by texture, and locate the relative positions of the sclera image and the iris image, and of the iris image and the pupil image; 3, reprojection of the iris image and pupil image: translate the iris image and the pupil image to the center of the sclera image, thereby adjusting the eye image. The images are processed purely in software, without adding any device, so that when the face is turned toward the display device but the eyes are not looking at the camera lens, the display device can still show a live video image in which the eyes gaze at it, greatly reducing the cost of upgrading the hardware system.
Description
Technical field
The present invention relates to the field of video data processing and imaging, and in particular to a method for adjusting an eye image in a self-photographed video.
Background technology
In applications of digital video processing, especially with the spread of 3G communication networks, video self-photography and Internet video are used more and more widely. A persistent nuisance is that the video capture device is generally located near the outer edge of the display device, as shown in Figure 2. In this arrangement, when the photographed person watches the screen of the display device, the image shown on the display device is one in which the gaze point of the eyes departs from the screen; in short, the eyes in the face image seen by the viewer of the screen look askew rather than straight ahead. Whether human vision perceives an eye image as "facing" or "not facing" depends mainly on the relative positions of the sclera, iris and pupil in the eye: the eye is generally judged to be "facing" when the iris and pupil sit at the center of the sclera, and "not facing" otherwise. Conversely, when the photographed person watches the video capture device, the image on the display device shows the gaze of the eyes aimed at the screen, but the photographed person then cannot see that image, as shown in Fig. 1a and Fig. 1b.
Summary of the invention
Object of the invention: the technical problem to be solved by the invention is to overcome the deficiencies of the prior art by providing a method for adjusting an eye image in a self-photographed video, so that when the photographed person gazes at the screen, what the video capture device collects and finally displays is an image in which the eyes face forward.
To solve the above technical problem, the invention discloses a method for adjusting an eye image in a self-photographed video, comprising the following steps:
Step 1, detection and positioning of the target eye image: detect and locate the position of the eyes in the video image;
Step 2, identification and positioning of the sclera image, iris image and pupil image within the eye image: distinguish the sclera image from the iris image by gray level, distinguish the iris image from the pupil image by texture, and locate the relative positions of the sclera image and the iris image, and of the iris image and the pupil image;
Step 3, reprojection of the iris image and pupil image: translate the iris image and the pupil image to the center of the sclera image, thereby realizing the adjustment of the eye image.
In the present invention, preferably, said step 1 comprises the following steps:
Step (11), preprocess the self-photographed image, including using erosion and dilation to strengthen the connectivity of isolated points in the image and using median filtering to smooth the image. This step can employ image-processing methods common in the art; at the same time, it is not a necessary step of the invention but an optional optimization, and the object of the invention can still be achieved without it.
Step (12), convert the image to a chrominance space and identify the face image by skin color. On the two-color-difference plane (or the hue-saturation plane) the skin colors of different ethnic groups vary little; differences in skin color reside mainly in luminance rather than chrominance, so the face image can be identified from the self-photographed image according to skin color. For example, under good illumination and suitable contrast, i.e. an average luminance between 100 and 200 and a contrast between 50% and 80%, the skin region falls within the range 102 < Cb < 128, 125 < Cr < 160 of the YCbCr color space.
Step (13), identify the images of the left and right eyes within the face image by gray level. The eyeball region differs sharply from the face image in gray level, so after black-and-white binarization the boundary between the two can be quickly marked off according to the difference in gray level.
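A minimal sketch of the black-and-white binarization used here to separate the dark eyeball region from the brighter surrounding face; the fixed threshold of 128 is an illustrative assumption, since the text does not specify how the threshold is chosen (in practice it could be derived from the gray-level histogram).

```python
def binarize(gray, threshold=128):
    """Black-and-white binarization of a grayscale image (list of rows):
    1 where the gray level is below the threshold (dark eye region),
    0 elsewhere.  threshold=128 is an illustrative default."""
    return [[int(v < threshold) for v in row] for row in gray]
```

For example, a bright cheek pixel (200) maps to 0 and a dark iris pixel (40) maps to 1, giving the quick boundary between the two regions that the step describes.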
In the present invention, preferably, said step 2 comprises the following steps:
Step (21), apply black-and-white binarization to the identified eye image and distinguish the sclera image from the iris image by gray level. The sclera and the iris image differ sharply in gray level, so after binarization the boundary between the two can be quickly marked off according to the difference in gray level.
Step (22), distinguish the iris image from the pupil image by texture analysis, and calculate the relative position of the iris image and the pupil image. The iris region has a rather complex texture, while the pupil region presents an essentially uniform texture, and the iris region always appears circular. The region can therefore be analyzed with a blockwise Fourier transform or a blockwise discrete cosine transform: examining the high-frequency components of the transform domain, a block rich in high-frequency components has complex texture and belongs to the iris region, and otherwise it belongs to the pupil region, which yields the spatial boundary between the two.
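The blockwise-transform test of step (22) can be sketched as follows, assuming a naive 2D DCT and using the total energy outside the DC coefficient as the "high-frequency" measure; a flat block and a checkerboard block stand in here for pupil-like and iris-like texture.

```python
import math

def dct2(block):
    """Naive (unnormalized) 2D DCT-II of an NxN block.
    O(N^4) -- fine for a sketch, not for production."""
    N = len(block)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            out[u][v] = sum(
                block[y][x]
                * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                for y in range(N) for x in range(N))
    return out

def high_freq_energy(block):
    """Sum of squared transform coefficients excluding the DC term --
    a crude measure of how textured the block is."""
    c = dct2(block)
    N = len(c)
    return sum(c[u][v] ** 2 for u in range(N) for v in range(N)
               if (u, v) != (0, 0))
```

A uniform (pupil-like) block has essentially zero energy outside DC, while a strongly textured (iris-like) block does not, so thresholding this quantity per block gives the iris/pupil boundary described above.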
Step (23), calculate the azimuth angle α and the distance d of the center point of the pupil image from the center of the iris.
In the present invention, preferably, said step 3 comprises the following steps:
Step (31), translate the iris image to the center of the sclera image;
Step (32), for the part of the sclera image left vacant after the translation of the iris image, fill it using the sclera image surrounding the position of the iris image before translation;
Step (33), according to the azimuth angle α and the distance d of the pupil image center from the iris center, apply a directional rotation, about its center, to the circular region to which the translated iris image belongs: the direction of rotation is π + α, and the angle of rotation is tan⁻¹(d/r), where r is the radius of the pupil.
Step (34), for the part left vacant after the directional rotation of the iris image, fill it using the surrounding image within the sclera.
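The geometry of steps (23) and (31)-(33) can be sketched as follows. The point-tuple representation is a hypothetical convenience, and the rotation parameters follow the direction π + α and angle arctan(d/r) stated in step (33); the inpainting of vacated pixels (steps 32 and 34) is only noted in comments.

```python
import math

def pupil_offset(pupil_center, iris_center):
    """Step (23): azimuth angle alpha and distance d of the pupil
    center measured from the iris center."""
    dx = pupil_center[0] - iris_center[0]
    dy = pupil_center[1] - iris_center[1]
    return math.atan2(dy, dx), math.hypot(dx, dy)

def recenter(iris_center, sclera_center):
    """Step (31): translation vector that moves the iris (and pupil)
    to the sclera center.  The sclera pixels this vacates would be
    filled from the surrounding sclera (step 32)."""
    return (sclera_center[0] - iris_center[0],
            sclera_center[1] - iris_center[1])

def rotation_params(alpha, d, r):
    """Step (33): directional rotation of the translated iris region --
    direction pi + alpha, angle arctan(d / r), with r the pupil radius
    as stated in the text."""
    return math.pi + alpha, math.atan(d / r)
```

For instance, a pupil at (13, 14) with the iris centered at (10, 10) gives d = 5, and a pupil offset equal to the pupil radius gives a rotation angle of π/4.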
The principle of the invention is as follows: when the photographed person's line of sight is aligned with the display screen, after detecting the human eye in the captured video image, the angle between the line of sight and the line from the pupil center to the optical center of the camera is judged from the distribution of the pupil on the eyeball; the region near the eyes in the collected video image is then reprojected according to this angle, finally displaying on the screen a video image in which the sight is aimed at the screen.
Beneficial effects: without adding any extra device, the invention performs image processing in software, so that when the face is turned toward the display device but the eyes are not watching the camera lens, a live video image in which the eyes gaze at the display device can be obtained on it, greatly reducing the cost of upgrading the hardware system. The method has important application prospects in video communication, video conferencing and other settings that use video for two-way or multi-way communication.
Description of drawings
The present invention is further described below in conjunction with the accompanying drawings and embodiments; the above and/or other advantages of the invention will become apparent.
Fig. 1 is a schematic diagram of gaze alignment and gaze misalignment in practice.
Fig. 2 is a schematic diagram of ordinary video self-photography in the prior art.
Fig. 3 is a schematic diagram of the gaze correction calculation of the present invention.
Fig. 4 is the gaze correction calculation flowchart of the present invention.
Fig. 5 is the flowchart of the connected region detection of the present invention.
Fig. 6 shows examples of the Haar-like rectangular features of the present invention.
Fig. 7 is a simplified flowchart of the method of the invention.
Embodiment:
The hardware of the present invention consists of a single video capture device, an arithmetic processing device and a display device. The core idea is to use target recognition, target registration and target reprojection in the video image to make the display device show a video image in which the observer's sight faces forward.
As shown in Figure 7, the invention discloses a method for adjusting an eye image in a self-photographed video, comprising the following steps:
Step 1, detection and positioning of the target eye image: detect and locate the position of the eyes in the video image.
Said step 1 comprises the following steps: step 11, preprocess the self-photographed image; step 12, identify the face image in the self-photographed image; step 13, identify the images of the left and right eyes within the face image by gray level.
Step 11, preprocess the self-photographed image.
Because image acquisition often takes place in changeable, unpredictable environments (chiefly the lighting environment), preprocessing the image so that it meets the requirements of the algorithms is particularly necessary. The preprocessing involved in the present invention comprises histogram equalization, morphological operations and median filtering.
Histogram equalization is one of the most basic operations in digital image processing; its effect is to make the contrast of the image distinct. Morphological operations are divided into morphological erosion and morphological dilation, both performed on binary images. Dilation followed by erosion is called the closing operation, which closes broken figures in the image; erosion followed by dilation is conversely called the opening operation, which breaks closed figures apart. Morphological operations can remove isolated noise points in the image and repair connected regions broken for various reasons.
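A pure-Python sketch of the morphological closing described above, together with the median filtering named among the preprocessing operations; the 3×3 neighborhoods and the simple border handling are illustrative choices, and a real system would use an image-processing library.

```python
def dilate(img):
    """3x3 binary dilation: a pixel becomes 1 if any neighbor is 1."""
    h, w = len(img), len(img[0])
    return [[int(any(img[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if 0 <= y + dy < h and 0 <= x + dx < w))
             for x in range(w)] for y in range(h)]

def erode(img):
    """3x3 binary erosion: a pixel stays 1 only if every neighbor is 1
    (pixels outside the image are treated as 0)."""
    h, w = len(img), len(img[0])
    return [[int(all(0 <= y + dy < h and 0 <= x + dx < w
                     and img[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
             for x in range(w)] for y in range(h)]

def closing(img):
    """Closing = dilation then erosion; fills small holes and gaps."""
    return erode(dilate(img))

def median3(img):
    """3x3 median filter on a grayscale image; border rows/columns are
    simply copied over."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]   # middle of the 9 sorted values
    return out
```

Closing repairs a one-pixel hole inside a solid region, and the median filter replaces an isolated outlier with its neighborhood's median, which is exactly the noise-suppression behavior described in the text.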
Median filtering is a nonlinear signal-processing technique that can effectively suppress noise. Its basic principle is to replace the value of a point in a digital image or sequence by the median of the values in that point's neighborhood, thereby eliminating isolated noise points. The image becomes smooth after median filtering.
Step 12, identify the face image in the self-photographed image, comprising:
Face detection based on skin color segmentation:
Most face-analysis methods are based on gray-level images; skin color segmentation instead uses the chrominance information of human skin color as the feature for face detection, and is a face detection method based on invariant features.
Human skin color differs markedly from natural backgrounds: owing to the facial blood vessels, its red component is comparatively full, and skin color under different lighting and ethnicity conditions stays within a relatively stable range. Moreover, the method only needs a few traversals of the whole image, computes quickly and is easy to implement, making it a basic algorithm widely applied in face detection systems.
The algorithm is mainly divided into three steps:
Step a, skin region segmentation: use the YCbCr color space for skin color segmentation; in this space the threshold on the skin-color Cr component is easy to choose and is affected very little by illumination. The conversion between the YCbCr and RGB color spaces is:
Y=0.256789R+0.504129G+0.097906B+16
Cb=-0.148223R-0.290992G+0.439215B+128
Cr=0.439215R-0.367789G-0.071426B+128
R=1.164383×(Y-16)+1.596027×(Cr-128)
G=1.164382×(Y-16)-0.391762×(Cb-128)-0.812969×(Cr-128)
B=1.164382×(Y-16)+2.017230×(Cb-128)
Through threshold segmentation, the YCbCr color image is converted to a black-and-white image, with black representing the background and white marking regions close to skin color. Generally, when illumination is good and contrast is suitable, the skin region falls within the range 102 < Cb < 128, 125 < Cr < 160 of the YCbCr space, so the segmentation thresholds can be chosen as Cb = 116, Cr = 144.
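The conversion and thresholding of step a can be sketched as follows; the helper names are illustrative, and the Cb/Cr window is the one quoted above for good illumination and contrast.

```python
def rgb_to_ycbcr(r, g, b):
    """RGB -> YCbCr conversion with the coefficients given in step a."""
    y  =  0.256789 * r + 0.504129 * g + 0.097906 * b + 16
    cb = -0.148223 * r - 0.290992 * g + 0.439215 * b + 128
    cr =  0.439215 * r - 0.367789 * g - 0.071426 * b + 128
    return y, cb, cr

def is_skin(r, g, b):
    """Threshold test in the Cb/Cr plane; luminance Y is ignored,
    which is what makes the test relatively robust to illumination."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return 102 < cb < 128 and 125 < cr < 160

def skin_mask(img):
    """Binarize an RGB image (rows of (r, g, b) tuples):
    1 = skin-like (white), 0 = background (black)."""
    return [[int(is_skin(*px)) for px in row] for row in img]
```

A warm flesh tone such as (200, 150, 120) lands inside the window, while saturated blue or pure white does not, yielding the black-and-white skin mask the step describes.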
Step b, locate all connected white regions. Before detecting the regions, the image is given some preprocessing: (1) use the morphological closing operation (dilation followed by erosion) to strengthen the connectivity of isolated points; (2) use median filtering to smooth the image.
Step c, from all the white regions found, use area, aspect ratio, position and similar information to sift out the region most likely to be a face. In the present embodiment the face generally accounts for a high proportion of the image (say above 60%), its position is in the central region of the picture, and its aspect ratio is close to 1:1, so it is easy to distinguish.
Detection of connected regions: in the face skin-color segmentation algorithm, an important step is to detect the connected regions in the morphologically processed image and determine the coordinates, size, aspect ratio, etc. of the minimal rectangular borders surrounding these regions. First, some rules defining a "connected region" are fixed: (1) two points are connected only if they are adjacent in the same row or column (diagonal adjacency does not count as connected); (2) if one connected region contains another, the latter is ignored; (3) if the minimal rectangular border of one connected region partly overlaps that of another, the two are still defined as two independent connected regions. Following these rules, a border-traversal algorithm is designed that searches row by row for connected skin-like pixels, with skin-like pixels defined as 1 and non-skin pixels as 0. The algorithm flow is shown in Figure 5; Fig. 5a and Fig. 5b are respectively the main flow of multi-region detection and the detailed flow of single-region detection.
As shown in Fig. 5a, the traversal starts by rows from the first pixel of the first row at the upper-left corner of the image and judges whether the current pixel is a starting point. If so, the method of Fig. 5b detects the border beginning at this pixel, marks the border when detection completes, and moves on to the pixel to the upper right of the starting point; if not, the traversal continues by rows to the next pixel. This repeats until all pixels have been traversed.
As shown in Fig. 5b, border detection begins from the starting point as the current point. The neighborhood of the current point is first searched clockwise in the order left, up, right, down for a pixel of the same type, and the pixel found is then judged to be a border point or not. For ease of explanation, take the case where a same-type pixel exists to the left of the current point. Let the current pixel be A and look to its left for a same-type pixel B. If B does not exist, update the border and test whether the starting point has been reached; if not, continue searching. If B exists, set B as the current point and examine the first direction counterclockwise from the direction AB, namely below, for a same-type pixel: if there is none, update the border and continue searching from A; if a same-type pixel C exists below, then A is not a border point, C becomes the current point, and the search continues counterclockwise from the direction BC. In the same manner the search works around the directions below, right, above and left in turn, updating the border whenever the next direction is empty and advancing the current point whenever it is occupied, until the traversal returns to the starting point, at which the detection of this border ends. From the border information the area and aspect ratio of the region are obtained; regions whose area or aspect ratio is too large or too small are all excluded.
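The patent traces each region's border pixel by pixel as in Fig. 5. As an illustrative alternative (a deliberately different, simpler technique than the boundary-following described above), the sketch below labels connected regions by flood fill and applies the same bookkeeping: 4-connectivity per rule (1), and screening by bounding-box area and aspect ratio. The screening thresholds are illustrative assumptions, not values from the patent.

```python
from collections import deque

def connected_regions(mask):
    """Label 4-connected regions of 1-pixels in a binary mask; return
    (area, (min_row, min_col, max_row, max_col)) for each region."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                area, ymin, xmin, ymax, xmax = 0, y, x, y, x
                q = deque([(y, x)])
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    area += 1
                    ymin, ymax = min(ymin, cy), max(ymax, cy)
                    xmin, xmax = min(xmin, cx), max(xmax, cx)
                    # 4-connectivity: diagonal neighbors do not count
                    for ny, nx in ((cy-1, cx), (cy+1, cx),
                                   (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                regions.append((area, (ymin, xmin, ymax, xmax)))
    return regions

def face_candidates(mask, min_area=4, ratio_lo=0.5, ratio_hi=2.0):
    """Keep regions whose area and width:height ratio are plausible
    for a face (thresholds here are illustrative)."""
    out = []
    for area, (y0, x0, y1, x1) in connected_regions(mask):
        hgt, wid = y1 - y0 + 1, x1 - x0 + 1
        if area >= min_area and ratio_lo <= wid / hgt <= ratio_hi:
            out.append((area, (y0, x0, y1, x1)))
    return out
```

Note that two diagonally touching pixels come out as two regions, matching rule (1) above.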
The present embodiment adopts AdaBoost-based face detection.
AdaBoost specifically solves two problems. First, how are the training samples handled? In AdaBoost every sample is assigned a weight; if a sample is not correctly classified its weight is raised, otherwise it is lowered, so the AdaBoost method concentrates its attention on the "hard to classify" samples. Second, how are weak classifiers merged into a strong classifier? The strong classifier is expressed as a linear weighted combination of weak classifiers, and the more accurate a weak learner is, the higher its weight. AdaBoost thus embodies two important ideas: one is the fusion of many features (the core of boosting algorithms); the other is weighted classification, giving the features different weights, with the weights obtained by training (in traditional weighted classification the weights are preset). Concretely, face detection means extracting the face from the background, which, as set forth above, must rely on certain features, for example texture and edge features; the present embodiment uses the Haar-like method for feature extraction. With K features chosen there are K weak classifiers; from T training samples, the weight combination of the K feature vectors with the highest classification accuracy is obtained through iterative testing, and during the iterations the weights of the T training samples are continually updated, raising the weights of samples that are hard to classify and lowering those that are easy. This method uses AdaBoost for face detection; the details of the algorithm are set forth below, and all the algorithms operate on gray-level images.
Haar-like feature extraction: a Haar-like feature is a rectangle-pair feature; given limited data, detection based on Haar-like features can encode the state of a specific region. Rectangular features are fairly sensitive to simple graphic structures such as edges and line segments, but they can only describe structures of particular orientations (horizontal, vertical, diagonal) and are therefore rather coarse. Some characteristics of the face can be described simply by rectangular features: for example, the eyes are usually darker than the cheeks, which is an edge feature; the sides of the bridge of the nose are darker than the bridge itself, which is a line feature; the mouth is darker than its surroundings, and so on, which, as shown in Fig. 6b, is a feature of a specific direction. The commonly used feature rectangles divide into edge features, line features and specific-direction features, as shown in Fig. 6a: the edge feature templates extract edge information at different angles, the line feature templates extract linear image patches at different angles, and the specific-direction feature template extracts image patches of the specified type.
The base form of each template is dimensionally minimal, so the same class of template can be formed at various sizes by scaling; for example, edge feature template 1 is a 2-pixel template. When a template traverses the image, the feature value is the sum of the pixels covered by the white rectangle minus the sum of the pixels covered by the black rectangle, so all feature values on a pure-color image are zero. A feature template can be placed in the subwindow at "any" size and "any" position, and each such placement is called a feature. Enumerating all the features of the subwindow is the basis of weak classifier training.
For an image to be detected, say of size m × n, the number of features contained is clearly considerable, so the total feature count is discussed below. Picture the image as a large box and the template as a small box freely movable inside it: the small box has many possible positions inside the big box, and the sum of the possible placements of the small boxes at all scales is the total number of features. If the size of the base template is s × t, the number of features it contributes in an m × n image is

X·Y·(m + 1 − s·(X + 1)/2)·(n + 1 − t·(Y + 1)/2), where X = ⌊m/s⌋ and Y = ⌊n/t⌋.

The feature count of the image is the sum of the counts of the different templates. For the common set of 5 templates — 2 edge templates, 2 line templates and 1 specific-direction template — the feature count in an image of size 16 × 16 is 32384; if the image size is 36 × 36, the feature count reaches 816264.
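The feature counts quoted for 16 × 16 and 36 × 36 windows can be reproduced with the standard position-and-scale counting formula X·Y·(m+1 − s(X+1)/2)·(n+1 − t(Y+1)/2), X = ⌊m/s⌋, Y = ⌊n/t⌋. The concrete base-template sizes (1×2 and 2×1 edge, 1×3 and 3×1 line, 2×2 diagonal) are an assumption, but they reproduce both totals quoted in the text.

```python
def feature_count(m, n, s, t):
    """Number of placements (positions x integer scales) of an s x t
    base template inside an m x n window:
    X*Y*(m+1 - s(X+1)/2)*(n+1 - t(Y+1)/2), X = m//s, Y = n//t."""
    X, Y = m // s, n // t
    return int(X * Y * (m + 1 - s * (X + 1) / 2)
                     * (n + 1 - t * (Y + 1) / 2))

# The 5 classic base templates: two edge (1x2, 2x1), two line
# (1x3, 3x1), one diagonal (2x2) -- assumed sizes, consistent with
# the totals quoted in the text.
TEMPLATES = [(1, 2), (2, 1), (1, 3), (3, 1), (2, 2)]

def total_features(m, n):
    """Total feature count of an m x n window over the 5 templates."""
    return sum(feature_count(m, n, s, t) for s, t in TEMPLATES)
```

Evaluating `total_features(16, 16)` gives 32384 and `total_features(36, 36)` gives 816264, matching the figures in the text and illustrating how sharply the count grows with window size.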
Integral image computation: the above data show that the number of features in an image is very large and increases sharply with image size, so finding a suitable feature computation method is very necessary. The integral image method adopted in the present embodiment is an effective and fast method of feature calculation.
For an image A, the integral image value at (x, y) is defined as

ii(x, y) = Σ_{x′ ≤ x, y′ ≤ y} A(x′, y′),

that is, the sum over all points in the rectangle whose diagonal corners are the origin and the point (x, y). Using the integral image, the Haar-like rectangular features of the image can be computed quickly and conveniently: the feature value of a rectangular feature depends only on the integral image at the feature's corner points, and is unrelated to the image coordinate values. Therefore, whatever the scale of the rectangular feature, the time spent computing its feature value is constant, and involves only simple additions and subtractions. For this reason, the introduction of the integral image greatly improves the detection speed.
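A minimal sketch of the integral image and the constant-time rectangle sum it enables; the zero-padded extra row and column are an implementation convenience, not part of the definition.

```python
def integral_image(A):
    """ii[y+1][x+1] = sum of A over the rectangle from (0, 0) to
    (x, y) inclusive, built in a single pass (with a zero border row
    and column so the lookups below need no special cases)."""
    h, w = len(A), len(A[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y+1][x+1] = A[y][x] + ii[y][x+1] + ii[y+1][x] - ii[y][x]
    return ii

def rect_sum(ii, y0, x0, y1, x1):
    """Sum of A over rows y0..y1, cols x0..x1 with just 4 lookups --
    constant time regardless of the rectangle's size, which is what
    makes Haar-like feature evaluation fast."""
    return ii[y1+1][x1+1] - ii[y0][x1+1] - ii[y1+1][x0] + ii[y0][x0]
```

A Haar-like feature value is then just the difference of two (or three) such rectangle sums, each costing four lookups whatever the feature's scale.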
AdaBoost design flow: the AdaBoost algorithm finally obtains a suitable strong classifier, and designing the classifier is mainly a training process using a large number of samples, both face and non-face. The flow is as follows:
1) Given a series of training samples (x₁, y₁), (x₂, y₂), …, (xₙ, yₙ), where yᵢ = 0 denotes a negative sample (non-face) and yᵢ = 1 a positive sample (face), and n is the total number of training samples;

2) Initialize the weights w₁,ᵢ = D(i): w₁,ᵢ = 1/(2m) for the positive samples and w₁,ᵢ = 1/(2l) for the negative samples, where m is the number of positive samples and l the number of negative samples, m + l = n;
3) For t = 1, …, T, where T is the number of iterations: first normalize the weights,

qₜ,ᵢ = wₜ,ᵢ / Σⱼ wₜ,ⱼ.

Then, for each feature f, train a weak classifier h(x, f, p, θ), and compute the weighted error rate ε_f of the weak classifier of each feature, where f is a feature, θ is the threshold and p indicates the direction of the inequality sign:

ε_f = Σᵢ qᵢ |h(xᵢ, f, p, θ) − yᵢ|.

Then choose the best weak classifier hₜ(x), the one with the minimal error rate εₜ:

εₜ = min_{f,p,θ} Σᵢ qᵢ |h(xᵢ, f, p, θ) − yᵢ| = Σᵢ qᵢ |h(xᵢ, fₜ, pₜ, θₜ) − yᵢ| = Σᵢ qᵢ |hₜ(xᵢ) − yᵢ|.
The training and selection of the weak classifiers is set forth in detail below. According to this best weak classifier, adjust the weights:

wₜ₊₁,ᵢ = wₜ,ᵢ βₜ^(1−eᵢ),

where eᵢ = 0 means xᵢ is classified correctly, eᵢ = 1 means xᵢ is classified in error, and βₜ = εₜ / (1 − εₜ).

4) The final strong classifier is

C(x) = 1 if Σₜ₌₁ᵀ αₜ hₜ(x) ≥ ½ Σₜ₌₁ᵀ αₜ, and 0 otherwise, where αₜ = log(1/βₜ).

A weak classifier h(x, f, p, θ) is composed of a feature f, a threshold θ and a p indicating the direction of the inequality sign:

h(x, f, p, θ) = 1 if p·f(x) < p·θ, and 0 otherwise.
For the rectangular features in the present embodiment, the feature value f(x) of a weak classifier is just the value of the rectangular feature. Because in training the size of the training sample set equals the size of the detection subwindow, and the size of the detection subwindow determines the number of rectangular features, the features of every sample in the training set are identical and equal in number, and one feature has one fixed feature value for one sample. For an ideal image whose pixel values are randomly distributed, the mean of the values of the same rectangular feature over different images should tend to a definite value K. The same should happen on the non-face samples, but since non-face samples are not necessarily random-pixel images, the above judgment carries a larger deviation. For each feature, its mean feature value over all samples of one class (face or non-face) is computed, finally yielding the distribution of the mean values of all features over all samples of that class. The distribution curves of the face samples and the non-face samples differ little, but once the feature value is greater or smaller than a certain value, the distribution curves lose their consistency. This shows that most features have only a minute ability to distinguish faces from non-faces, but that certain features, with corresponding thresholds, can distinguish face samples from non-face samples effectively.
The only requirement on a weak learner (feature) is that it distinguish face images from non-face images with an error rate slightly below 50%; as noted above, it is entirely sufficient that it discriminate correctly only within a certain range of probability. Under this requirement, all rectangular features with error rates below 50% can be found (with a suitably chosen threshold, nearly all rectangular features satisfy this requirement for a fixed training set). In each round of training, the best weak classifier of that round is selected (the algorithm iterates T times to select T best weak classifiers), and the per-round best weak classifiers are finally boosted into a strong classifier according to a certain method.
Training a weak classifier (feature f) means determining, under the current weight distribution, the optimal threshold for f such that this weak classifier (feature f) has the minimum classification error over all training samples. Selecting the best weak classifier means choosing, among all weak classifiers, the one (feature) whose classification error over all training samples is smallest. For each feature f, compute the feature values of all training samples and sort them. By scanning the sorted feature values once, an optimal threshold can be determined for this feature, thereby training a weak classifier. Specifically, for each element of the sorted table, compute the following four values:
1) the total weight of all face samples, T+;
2) the total weight of all non-face samples, T-;
3) the cumulative weight of the face samples before this element, S+;
4) the cumulative weight of the non-face samples before this element, S-.
Thus, when a number between the feature value of the current element and the feature value immediately preceding it is chosen as the threshold, the resulting weak classifier splits the samples at the current element: all elements before the current element are classified as faces (or non-faces), and all elements from the current element onward (inclusive) are classified as non-faces (or faces).
The classification error brought by this threshold can be taken as:
e = min(S+ + (T- - S-), S- + (T+ - S+))
By scanning this sorted table once from beginning to end, the threshold (the optimal threshold) that minimizes the classification error can be chosen for the weak classifier, and thus the best weak classifier is selected.
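The single scan over the sorted table can be sketched as follows; the sample representation and names are illustrative assumptions, while the error formula is the one given above:

```python
def best_threshold(samples):
    """Find the threshold minimizing e = min(S+ + (T- - S-), S- + (T+ - S+)).

    samples: list of (feature_value, label, weight), label 1 = face, 0 = non-face.
    Returns (threshold, parity, error)."""
    samples = sorted(samples, key=lambda s: s[0])
    t_pos = sum(w for _, y, w in samples if y == 1)   # T+
    t_neg = sum(w for _, y, w in samples if y == 0)   # T-
    s_pos = s_neg = 0.0                               # S+, S- before the element
    best = (None, 1, float("inf"))
    for value, label, weight in samples:
        # error if elements before the current one are called non-face, rest face
        err_pos = s_pos + (t_neg - s_neg)
        # error if elements before the current one are called face, rest non-face
        err_neg = s_neg + (t_pos - s_pos)
        err, parity = min((err_pos, -1), (err_neg, 1))
        if err < best[2]:
            best = (value, parity, err)
        if label == 1:
            s_pos += weight
        else:
            s_neg += weight
    return best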
The AdaBoost strong classifier is formed by cascading weak classifiers. When the strong classifier processes an image to be detected, it is equivalent to letting all the weak classifiers vote, weighting the votes according to the weak classifiers' error rates, summing the weighted votes, and comparing the sum with the average voting result to obtain the final decision. The average voting result assumes that each weak classifier casts an "approve" or "reject" vote with equal probability; the average thus obtained is (1/2) Σ_{t=1}^{T} α_t.
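The weighted vote can be sketched as follows; in the standard AdaBoost formulation assumed here, each vote is weighted by α_t = log((1 - ε_t)/ε_t), and the "average vote" acceptance threshold is half the total weight:

```python
import math

def strong_classify(weak_outputs, error_rates):
    """Weighted vote of weak classifiers, Viola-Jones style.

    weak_outputs: list of 0/1 votes h_t(x); error_rates: list of epsilon_t.
    Each vote is weighted by alpha_t = log((1 - eps) / eps); the image is
    accepted when the weighted sum reaches half the total vote weight."""
    alphas = [math.log((1 - e) / e) for e in error_rates]
    weighted = sum(a * h for a, h in zip(alphas, weak_outputs))
    return 1 if weighted >= 0.5 * sum(alphas) else 0
```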
Step 13: identify the images of the left and right eyes from the face image according to the gray-level method.
Step 2, identification and localization of the sclera image, iris image and pupil image in the eye image: identify the sclera image and iris image according to gray level; identify the iris image and pupil image according to texture; locate the relative positions of the sclera image and iris image, and of the iris image and pupil image;
Said step 2 comprises the following steps. Step 21: perform black-and-white binarization on the identified eye image, and identify the sclera image and iris image according to their difference in gray level. Binarization uses the Otsu method, i.e., the threshold that maximizes the between-class variance of the image gray levels is chosen. Since the sclera image and iris image differ considerably in gray level, after binarization the sclera region is white and the iris region is black, and the two can easily be distinguished by the circular shape of the iris-region boundary. Step 22: identify the iris image and pupil image according to the texture analysis method, and compute the relative position of the iris image and pupil image. The iris region has a highly complex texture, while the pupil region presents an essentially uniform texture, and the iris region always appears circular; the region can therefore be analyzed by block-wise Fourier transform or block-wise discrete cosine transform. In the transform domain, many high-frequency components indicate that the block's texture is complex, i.e., it belongs to the iris region; otherwise it belongs to the pupil region, which yields the spatial boundary between the two. In the present invention the decision is made by comparing the ratio of high-frequency to low-frequency components between the spectra of the two images; in practical computation, when the spectrum is bimodal, a region whose high-frequency components account for more than 20% of the total spectral energy can generally be defined as iris region. Step 23: compute the azimuth and distance of the pupil image's center point from the iris center point. Since both the pupil and the iris appear circular, in the present invention the pupil center point and the iris center point are obtained by extracting the arc boundaries of the pupil and the iris respectively and locating the centers from the geometric relationship between the center of a circle and its arc.
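The block-transform texture test of step 22 can be sketched as follows; the naive DCT, the block size, and the `cutoff` separating low from high frequencies are illustrative assumptions, while the 20% energy ratio follows the text above:

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of a square block (adequate for small, e.g. 8x8, blocks)."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for y in range(n):
                for x in range(n):
                    s += (block[y][x]
                          * math.cos((2 * y + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * x + 1) * v * math.pi / (2 * n)))
            out[u][v] = s
    return out

def is_iris_block(block, cutoff=2, ratio=0.20):
    """Classify a block as iris (complex texture) when its high-frequency
    coefficients carry more than `ratio` of the total spectral energy.
    The 20% figure follows the patent; `cutoff` (coefficients with
    u + v >= cutoff count as high frequency) is an assumed parameter."""
    coeffs = dct2(block)
    n = len(block)
    total = sum(c * c for row in coeffs for c in row)
    high = sum(coeffs[u][v] ** 2 for u in range(n) for v in range(n)
               if u + v >= cutoff)
    return total > 0 and high / total > ratio
```

A uniform (pupil-like) block puts all its energy into the DC coefficient and is rejected, while a block with sharp local variation passes the test.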
Step 3, reprojection of the iris image and pupil image: the iris image and pupil image are moved to the center of the sclera image through directed rotation, thereby centering the gaze in the eye image.
As shown in Figure 4, the hatched regions represent the vacant parts left after translation and directed rotation, the dotted region represents the iris, and the black region represents the pupil. Said step 3 comprises the following steps. Step 31: move the iris image to the center of the sclera image, as shown in Fig. 4(a). Step 32: for the missing image part on the sclera image after the translation of the iris image, fill it with the sclera image surrounding the iris image before translation, as shown in Fig. 4(c). Step 33: according to the azimuth α and the distance d of the pupil image's center point from the iris center point, the circular region to which the translated iris image belongs is rotated in a directed manner about the center of the circle, as shown in Fig. 3 and Fig. 4(c); the rotation direction is π + α and the rotation angle is tan⁻¹(d/r), where r is the radius of the pupil. Step 34: for the vacant part left after the directed rotation of the iris image, fill it with the sclera image surrounding the iris image.
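The geometry of step 33 can be sketched as follows; the function names are illustrative assumptions, while the direction π + α and the arctangent form of the rotation angle follow the formula above:

```python
import math

def pupil_offset(pupil_center, iris_center):
    """Azimuth alpha and distance d of the pupil center from the iris center."""
    dx = pupil_center[0] - iris_center[0]
    dy = pupil_center[1] - iris_center[1]
    d = math.hypot(dx, dy)
    alpha = math.atan2(dy, dx)
    return alpha, d

def directed_rotation(alpha, d, r):
    """Direction and magnitude of the directed rotation of step 33:
    direction pi + alpha (back toward the center), angle arctan(d / r),
    with r the radius given in the patent text."""
    return math.pi + alpha, math.atan2(d, r)
```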
The preprocessing of said step 11 comprises using the erosion-dilation method to strengthen the connectivity of isolated points in the image. The preprocessing of said step 11 further comprises using histogram equalization to improve the contrast of the image, and using median filtering to process the image.
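As one example of this preprocessing, a minimal median filter can be sketched as follows (in an OpenCV pipeline one would instead call cv2.medianBlur, and cv2.equalizeHist for the histogram equalization); the fixed 3x3 window and edge handling are illustrative choices:

```python
def median_filter(img, k=3):
    """k x k median filter over a 2-D list of gray values.

    Edge pixels where the window does not fit are copied unchanged;
    the input image is not modified."""
    h, w = len(img), len(img[0])
    half = k // 2
    out = [row[:] for row in img]
    for y in range(half, h - half):
        for x in range(half, w - half):
            window = sorted(img[yy][xx]
                            for yy in range(y - half, y + half + 1)
                            for xx in range(x - half, x + half + 1))
            out[y][x] = window[len(window) // 2]
    return out
```

An isolated noise pixel is replaced by the median of its neighborhood, which is why median filtering suppresses salt-and-pepper noise without blurring edges as much as averaging does.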
The invention provides a concept and method for adjusting the eye image in a self-photographed video. There are many concrete ways to realize this technical scheme, and the above is only a preferred embodiment of the invention. It should be noted that those skilled in the art can make several improvements and refinements without departing from the principle of the invention, and these improvements and refinements should also be regarded as falling within the protection scope of the invention. All components not specified in this embodiment can be realized with existing technology.
Claims (5)
1. A method for adjusting an eye image in a self-photographed video, characterized in that it comprises the following steps:
Step 1, detection and localization of the subject's eye image: detecting and locating the position of the eyes from the video image;
Step 2, discrimination and localization of the sclera image, iris image and pupil image in the eye image: distinguishing the sclera image and the iris image according to gray level; distinguishing the iris image and the pupil image according to texture; locating the relative positions of the sclera image and the iris image, and of the iris image and the pupil image; said distinguishing of the iris image and the pupil image according to texture comprises performing block-wise Fourier transform or block-wise discrete cosine transform analysis on the iris region and pupil region; by analyzing the high-frequency components in the transform domain, a region with many high-frequency components has a complex texture and is the iris region, and otherwise it is the pupil region, thereby giving the spatial boundary between the two;
Step 3, reprojection of the iris image and pupil image: moving the iris image and the pupil image to the center of the sclera image through directed rotation, thereby centering the eye image;
Said step 3 comprises the following steps:
Step (31), moving the iris image to the center of the sclera image;
Step (32), for the missing image part on the sclera image after the translation of the iris image, filling it with the sclera image surrounding the iris image before translation;
Step (33), according to the azimuth and distance of the pupil image's center point from the iris center point, moving the pupil image to the center of the iris image in a directed manner;
Step (34), for the vacant part after the directed movement of the pupil image, filling it with the iris image.
2. The method for adjusting an eye image in a self-photographed video according to claim 1, characterized in that said step 1 comprises the following steps:
Step (11), preprocessing the self-photographed video image;
Step (12), identifying the face image from the self-photographed video image;
Step (13), identifying the images of the left and right eyes from the face image according to gray-level relationships.
3. The method for adjusting an eye image in a self-photographed video according to claim 2, characterized in that said step 2 comprises the following steps:
Step (21), binarizing the identified eye image to black and white, and identifying the sclera image and the iris image according to gray-level relationships;
Step (22), identifying the iris image and the pupil image according to the texture analysis method, and computing the relative position of the iris image and the pupil image;
Step (23), computing the azimuth and distance of the pupil image's center point from the iris center point.
4. The method for adjusting an eye image in a self-photographed video according to claim 2, characterized in that the preprocessing of said step (11) comprises using the erosion-dilation method to strengthen the connectivity of isolated points in the image.
5. The method for adjusting an eye image in a self-photographed video according to claim 2, characterized in that the preprocessing of said step (11) comprises using median filtering to process the image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2010102640432A CN101930543B (en) | 2010-08-27 | 2010-08-27 | Method for adjusting eye image in self-photographed video |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101930543A CN101930543A (en) | 2010-12-29 |
CN101930543B true CN101930543B (en) | 2012-06-27 |
Family
ID=43369708
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2010102640432A Expired - Fee Related CN101930543B (en) | 2010-08-27 | 2010-08-27 | Method for adjusting eye image in self-photographed video |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101930543B (en) |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102567714B (en) * | 2011-12-14 | 2013-09-11 | 深圳市中控生物识别技术有限公司 | Method for correcting color image and black-and-white image based on double-camera face identification |
US8483450B1 (en) * | 2012-08-10 | 2013-07-09 | EyeVerify LLC | Quality metrics for biometric authentication |
JP5695809B1 (en) | 2013-10-16 | 2015-04-08 | オリンパスイメージング株式会社 | Display device, display method, and program |
CN104778729A (en) * | 2014-01-09 | 2015-07-15 | 上海帝仪科技有限公司 | Iris extraction method and equipment under uneven illumination condition |
CN104809424B (en) * | 2014-01-23 | 2020-11-10 | 北京七鑫易维信息技术有限公司 | Method for realizing sight tracking based on iris characteristics |
CN104933419B (en) * | 2015-06-30 | 2019-05-21 | 小米科技有限责任公司 | The method, apparatus and red film for obtaining iris image identify equipment |
TWI557601B (en) * | 2015-10-30 | 2016-11-11 | 由田新技股份有限公司 | A puppil positioning system, method, computer program product and computer readable recording medium |
CN105205480B (en) * | 2015-10-31 | 2018-12-25 | 潍坊学院 | Human-eye positioning method and system in a kind of complex scene |
CN105812709A (en) * | 2016-03-18 | 2016-07-27 | 合肥联宝信息技术有限公司 | Method for realizing virtual camera by using cameras |
KR101854991B1 (en) * | 2016-09-12 | 2018-05-08 | 에스테 로우더, 인코포레이티드 | System and method for correcting color of digital image based on the human sclera and pupil |
CN107958457B (en) * | 2017-12-19 | 2021-04-06 | 北京小米移动软件有限公司 | Method, apparatus and computer-readable storage medium for evaluating image quality |
CN110740246A (en) * | 2018-07-18 | 2020-01-31 | 阿里健康信息技术有限公司 | image correction method, mobile device and terminal device |
CN110795975B (en) * | 2018-08-03 | 2023-07-21 | 浙江宇视科技有限公司 | Face false detection optimization method and device |
CN110956590B (en) * | 2019-11-04 | 2023-11-17 | 张杰辉 | Iris image denoising device, method and storage medium |
CN111031234B (en) * | 2019-11-20 | 2021-09-03 | 维沃移动通信有限公司 | Image processing method and electronic equipment |
CN111695474B (en) * | 2020-06-03 | 2021-12-10 | 福建福特科光电股份有限公司 | Method for adaptively adjusting gamma curve of iris image of human eye |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1794262A (en) * | 2005-12-29 | 2006-06-28 | 上海交通大学 | Method of fast and effective cutting apart vivi iris |
CN1889016A (en) * | 2006-07-25 | 2007-01-03 | 周辰 | Eye-to-computer cursor automatic positioning controlling method and system |
CN101201893A (en) * | 2006-09-30 | 2008-06-18 | 电子科技大学中山学院 | Iris recognizing preprocessing method based on grey level information |
CN101339603A (en) * | 2008-08-07 | 2009-01-07 | 电子科技大学中山学院 | Method for selecting qualified iris image from video frequency stream |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7783077B2 (en) * | 2006-12-01 | 2010-08-24 | The Boeing Company | Eye gaze tracker system and method |
Non-Patent Citations (2)
Title |
---|
Zhi Xingchao, et al. An iris localization algorithm based on coordinate projection. Journal of Computer Applications, 2008, vol. 28, no. 8, pp. 2055-2058. * |
Xing Lei, et al. Quality evaluation methods for iris images. Chinese Journal of Stereology and Image Analysis, 2003, vol. 8, no. 2, pp. 108-113. * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101930543B (en) | Method for adjusting eye image in self-photographed video | |
CN101383001B (en) | Quick and precise front human face discriminating method | |
CN101142584B (en) | Method for facial features detection | |
CN100458831C (en) | Human face model training module and method, human face real-time certification system and method | |
CN105205480B (en) | Human-eye positioning method and system in a kind of complex scene | |
CN104063700B (en) | The method of eye center point location in natural lighting front face image | |
CN101840509B (en) | Measuring method for eye-observation visual angle and device thereof | |
CN104036278B (en) | The extracting method of face algorithm standard rules face image | |
Abate et al. | BIRD: Watershed based iris detection for mobile devices | |
CN102902967A (en) | Method for positioning iris and pupil based on eye structure classification | |
CN103942539B (en) | A kind of oval accurate high efficiency extraction of head part and masking method for detecting human face | |
CN101339607A (en) | Human face recognition method and system, human face recognition model training method and system | |
CN103473564B (en) | A kind of obverse face detection method based on sensitizing range | |
CN101739546A (en) | Image cross reconstruction-based single-sample registered image face recognition method | |
Li et al. | Face detection in complex background based on skin color features and improved AdaBoost algorithms | |
CN106485222A (en) | A kind of method for detecting human face being layered based on the colour of skin | |
CN106446779A (en) | Method and apparatus for identifying identity | |
CN106203375A (en) | A kind of based on face in facial image with the pupil positioning method of human eye detection | |
CN105938551A (en) | Video data-based face specific region extraction method | |
CN100389388C (en) | Screen protection method and apparatus based on human face identification | |
CN103218605A (en) | Quick eye locating method based on integral projection and edge detection | |
CN103810491A (en) | Head posture estimation interest point detection method fusing depth and gray scale image characteristic points | |
CN109725721A (en) | Human-eye positioning method and system for naked eye 3D display system | |
CN103218615A (en) | Face judgment method | |
CN104156689B (en) | Method and device for positioning feature information of target object |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20120627; Termination date: 20150827 |
EXPY | Termination of patent right or utility model |