CN103440476A - Locating method for pupil in face video - Google Patents
- Publication number
- CN103440476A (application CN201310376451A)
- Authority
- CN
- China
- Prior art keywords
- image
- pupil
- module
- face
- sobel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a method for locating the pupil in face video, belonging to the technical field of signal processing. The method comprises the following: (1) a face detection module, an image preprocessing module, a coarse eye-locating module and a fine pupil-locating module are used; (2) an input video image passes through the face detection module, the image preprocessing module, the coarse eye-locating module and the fine pupil-locating module in turn, and the position of the pupil center is obtained.
Description
Technical field
The present invention relates to a method for locating the pupil in face video, and belongs to the field of signal processing technology.
Background art
Pupil localization in facial images has important applications in fields such as face image processing and eye-movement human-computer interaction. The main pupil localization methods include region segmentation, edge extraction, gray-level projection, template matching, and Adaboost-based methods. Region segmentation is easily disturbed by glasses and gives rather coarse results. Edge extraction in effect searches for an eye template with the Hough transform, but it requires extensive preprocessing, and both it and its improved variants must contend with interference from spectacle lenses, eyelashes, and the like. Gray-level projection is a fast algorithm: it projects the image onto the two coordinate axes and locates the eyes from the peaks and valleys of the projections; relying only on these one-dimensional projections, however, it has difficulty distinguishing interference from black-framed glasses, eyebrows, and hair. Template matching requires normalizing the scale and orientation of the face image, and the template must be obtained by training, so the computational cost is large. Adaboost algorithms based on sample training have some advantages in eye localization, but they demand abundant training samples; an eyebrow region of high gray value may still be mistaken for an eyeball, and the scale of the candidate search window limits the size of the training samples, which in turn lowers the recognition rate on low-resolution images.
Summary of the invention
The object of the present invention is to provide a method for locating the pupil in face video that overcomes the above deficiencies.
The technical scheme adopted by the present invention is as follows:
The method for locating the pupil in face video comprises a face detection module, an image preprocessing module, a coarse eye-locating module and a fine pupil-locating module; the input video image passes through the face detection module, the image preprocessing module, the coarse eye-locating module and the fine pupil-locating module in turn, and the position of the pupil center is finally obtained.
Principle and beneficial effects of the invention: none of the methods mentioned in the background exploits the characteristics of the pupil itself, so their performance is limited by the quality of the eye image, and their localization accuracy drops when the image quality is poor. Glasses, eyebrows, eyelashes, hair and the like all interfere with pupil localization. To improve the performance of pupil localization, the characteristics of the pupil image should be fully exploited: the pupil is circular in shape, dark in color with correspondingly low gray values, and located in the upper half of the face. The present invention exploits the radial symmetry of the pupil and proposes a pupil localization method based on integral projection and the radial symmetry transform, so as to improve pupil localization performance.
Description of the drawings
Fig. 1: Block diagram of the radial symmetry transform method.
Fig. 2: Mapping relations of a pixel.
Fig. 3: Flowchart of the method of Chinese patent application 201210393147.2 by Huang Sunyu and Yang Ruigang, "Localization method for iris and pupil based on human-eye structure classification".
Fig. 4: Flowchart of the fast eye localization method based on a new Haar-like feature of Yefei Chen and Jianbo Su, "Fast eye localization based on a new haar-like feature" (10th World Congress on Intelligent Control and Automation, Beijing, China, 2012, 4825-4830).
Fig. 5: Functional block diagram of the technical scheme of the present invention.
Fig. 6: Face detection result using the method of Mikael Nilsson, J. Nordberg and Ingvar Claesson, "Face detection using local SMQT features and split up SNOW classifier" (IEEE International Conference on Acoustics, Speech, and Signal Processing, Honolulu, USA, 2007, 589-592).
Fig. 7: The face image in Fig. 6.
Fig. 8: The face image of Fig. 7 after median filtering.
Fig. 9: The image of Fig. 8 after histogram equalization.
Fig. 10: Vertical projection curve of the image in Fig. 9.
Fig. 11: Left-eye and right-eye regions in Fig. 9.
Fig. 12: A single eye region image.
Fig. 13: Pupil localization result for a single eye.
Fig. 14: Pupil localization results for both eyes.
Embodiment
The present invention is further described below with reference to the accompanying drawings:
(1) Radial symmetry transform
The radial symmetry transform was developed from the generalized symmetry transform. It is a gradient-based target detection operator that can detect pixels with radial symmetry simply and quickly, enabling effective detection of circular targets.
In general, given a radius n of the circular target (n ∈ N, where N is the set of radii at which radial symmetry is to be detected), the corresponding radial symmetry transform result can be computed. Its value at a point P characterizes the degree of radial symmetry of the image at that point, i.e. how likely the image is to contain a circle of radius n centered at P. As the detection radius n increases, regions with high symmetry rapidly accumulate a large radial symmetry strength S, realizing the detection of circular regions. The block diagram of the radial symmetry transform method is shown in Fig. 1.
The image I is convolved with the horizontal and vertical Sobel operators to compute the edge gradient image

g(p) = [g_x(p), g_y(p)] = [I * Sobel_hor, I * Sobel_ver],

where '*' denotes convolution, Sobel_hor is the horizontal Sobel operator and Sobel_ver is the vertical Sobel operator.
For each radius n, the corresponding orientation projection image O_n and magnitude projection image M_n can be computed. As can be seen from Fig. 2, for a given point P, O_n and M_n are computed from the positively and negatively affected pixels P_+ve and P_-ve, which are functions of the gradient g(p). The two affected pixels are defined differently: the positively affected pixel is the point at distance n from P in the direction the gradient vector g(p) points; the negatively affected pixel is the point at distance n from P in the opposite direction. From the gradient vector g(p), the positively affected pixel P_+ve and the negatively affected pixel P_-ve are computed as

P_+ve(p) = p + round( n · g(p) / |g(p)| ),
P_-ve(p) = p − round( n · g(p) / |g(p)| ),

where 'round' rounds each element of a vector to its nearest integer and |·| denotes the vector modulus.
The orientation projection image O_n and the magnitude projection image M_n are both initialized to 0. For each pair of affected pixels, the value of O_n at the point P_+ve is increased by 1 and the value of M_n at P_+ve is increased by |g(p)|; correspondingly, the value of O_n at P_-ve is decreased by 1 and the value of M_n at P_-ve is decreased by |g(p)|:

O_n[P_+ve(p)] = O_n[P_+ve(p)] + 1,
O_n[P_-ve(p)] = O_n[P_-ve(p)] − 1,
M_n[P_+ve(p)] = M_n[P_+ve(p)] + |g(p)|,
M_n[P_-ve(p)] = M_n[P_-ve(p)] − |g(p)|.

Thus O_n records, for each point, the number of surrounding pixels whose gradient directions map onto it, and M_n records the accumulated gradient magnitudes of the surrounding pixels that map onto it.
When the detection radius is n, the radial symmetry strength S_n is defined as

S_n = F_n * A_n,   with   F_n(p) = (M_n(p) / k_n) (|Õ_n(p)| / k_n)^α,

where Õ_n is O_n clamped to at most k_n in magnitude, k_n is the scale factor used to normalize O_n and M_n across different radii, α is the radial strictness parameter, '*' denotes convolution, and A_n is a two-dimensional Gaussian kernel.

The final radial symmetry transform S is the average of the results S_n over all detection radii n:

S = (1/|N|) Σ_{n∈N} S_n.

The present invention uses this circle-detection property of the radial symmetry transform to locate the pupil precisely.
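To make the transform concrete, the following Python sketch implements the steps above for a set of radii. It is a minimal illustration, not the patented implementation: the use of SciPy's Sobel filter and the clamped form of F_n follow the standard formulation of the transform and are assumptions here, and all names are ours.

```python
import numpy as np
from scipy.ndimage import sobel, gaussian_filter

def radial_symmetry_map(image, radii, alpha=2.0, k=9.9):
    """Radial symmetry transform sketch. Dark radially symmetric blobs
    produce strongly negative values in the returned map S; bright
    blobs produce positive values."""
    img = np.asarray(image, dtype=float)
    gx = sobel(img, axis=1)                  # I * Sobel_hor
    gy = sobel(img, axis=0)                  # I * Sobel_ver
    mag = np.hypot(gx, gy)                   # |g(p)|
    m = mag > 1e-9                           # skip zero-gradient pixels
    ys, xs = np.nonzero(m)
    ux, uy = gx[m] / mag[m], gy[m] / mag[m]  # unit gradient directions

    S = np.zeros_like(img)
    for n in radii:
        O = np.zeros_like(img)
        M = np.zeros_like(img)
        for sign in (1, -1):                 # P_+ve, then P_-ve
            px = np.clip(np.rint(xs + sign * n * ux).astype(int), 0, img.shape[1] - 1)
            py = np.clip(np.rint(ys + sign * n * uy).astype(int), 0, img.shape[0] - 1)
            np.add.at(O, (py, px), float(sign))     # O_n: +1 at P_+ve, -1 at P_-ve
            np.add.at(M, (py, px), sign * mag[m])   # M_n: +|g| at P_+ve, -|g| at P_-ve
        O_t = np.clip(O, -k, k)              # clamp O_n by the scale factor k_n
        F = (M / k) * (np.abs(O_t) / k) ** alpha    # F_n(p)
        S += gaussian_filter(F, sigma=0.25 * n)     # S_n = F_n * A_n
    return S / len(radii)                    # average over all detection radii
```

On a synthetic image with a dark disc on a bright background, the minimum of S falls at the disc centre, which is the property the pupil localization below relies on.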
Prior art one related to the present invention
The technical scheme of prior art one
In Chinese invention patent application 201210393147.2, "Localization method for iris and pupil based on human-eye structure classification", Huang Sunyu and Yang Ruigang propose an iris and pupil localization method based on human-eye structure classification; the method flowchart is shown in Fig. 3. Before precisely locating the iris and pupil boundaries, this invention applies unsupervised learning to automatically classify the texture of the eye image. The classification roughly locates the iris and pupil, estimates the effective iris region size, removes outlier data belonging to neither the iris nor the pupil boundary, and shrinks the search space for the iris and pupil position and size. Within the reduced search space, the method performs constrained optimization according to the inherent features of the iris and pupil to search for the optimal iris and pupil boundaries. It improves the stability and accuracy of iris and pupil localization, and is especially suitable for long-distance, non-intrusive iris acquisition systems.
The shortcoming of prior art one
The method requires training and demands abundant training samples; moreover, an eyebrow region of high gray value may still be mistaken for an eyeball, which lowers the recognition rate.
Prior art two related to the present invention
The technical scheme of prior art two
In the paper "Fast eye localization based on a new haar-like feature" (10th World Congress on Intelligent Control and Automation, Beijing, China, 2012, 4825-4830), Yefei Chen and Jianbo Su propose a fast pupil localization method. According to prior proportional relations of facial features, the method first selects a suitable candidate window within the detected face region; it then applies histogram equalization within the candidate region to remove lighting effects; finally, it proposes a new Haar-like feature to locate the pupil in the candidate region quickly and accurately. The method is simple, needs no training, and is robust to interference caused by eyebrows, hair, glasses and the like. Its flowchart is shown in Fig. 4.
The shortcoming of prior art two
The main problems of this fast eye localization method based on the new Haar-like feature are: (1) under reflections on glasses or dense black regions such as eyebrows, its localization error rate is high; (2) its robustness to illumination is still low; (3) it is only effective for face images whose pose varies within ±20 degrees.
Detailed description of the technical solution of the present invention
Technical problem to be solved by the invention
The present invention processes the face image to remove interference from glasses, eyebrows, hair, illumination and the like, and automatically and robustly locates the pupil position in the target eye image, thereby providing accurate basic data for subsequent research such as face recognition and eye tracking.
Complete technical scheme provided by the invention
The present invention first obtains the face region in the image by face detection; then, according to priors such as the pupil position, it applies integral projection to obtain the rough eyebrow-and-eye region, i.e. coarse eye localization; finally, it locates the pupil precisely using the circular-target detection capability of the radial symmetry transform. The block diagram of the technical scheme, shown in Fig. 5, mainly comprises a face detection module, an image preprocessing module, a coarse eye-locating module and a fine pupil-locating module.
Face detection module. The input of this module is the target image I_orig. Using the face detection method based on the Successive Mean Quantization Transform (SMQT) and a Sparse Network of Winnows (SNoW) classifier given by Mikael Nilsson, J. Nordberg and Ingvar Claesson in "Face detection using local SMQT features and split up SNOW classifier" (IEEE International Conference on Acoustics, Speech, and Signal Processing, Honolulu, USA, 2007, 589-592), it detects the face image I_face containing the eyes, which is the output of the module.
Image preprocessing module. The input of this module is the face detection result I_face; after median filtering, histogram equalization and similar preprocessing of I_face, it outputs the preprocessed image I.
(1) Median filtering
A 3 × 3 window is used to median-filter the face image: each point of the image corresponds to a 3 × 3 window centered on it. The 8 points surrounding the window center are sorted by gray value in descending order, and the median of the gray values in the window, i.e. the average of the two middle values after sorting, replaces the gray value of the center point. This completes the median filtering and yields the image I_temp.
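A minimal sketch of this filtering step, assuming the text's variant of taking the median over the 8 neighbours only (excluding the centre pixel); border pixels are left unchanged for simplicity:

```python
import numpy as np

def median_filter_8(img):
    """3x3 median filter as described above: each interior pixel is
    replaced by the average of the two middle values among its 8
    sorted neighbours (the centre pixel itself is excluded)."""
    img = np.asarray(img, dtype=float)
    out = img.copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = img[y - 1:y + 2, x - 1:x + 2].ravel()
            neigh = np.sort(np.delete(win, 4))       # drop centre, sort the 8 values
            out[y, x] = (neigh[3] + neigh[4]) / 2.0  # mean of the two middle values
    return out
```

This removes isolated impulse noise while leaving uniform regions untouched.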
(2) Histogram equalization
Histogram equalization is applied to the median-filtered image I_temp. Histogram equalization maps one image to another image with a uniform histogram through a gray-level transformation whose mapping function is a cumulative distribution function. The concrete steps are as follows:
1) First compute the normalized gray-level histogram H of I_temp:

H(i) = n_i / n,   i = 0, 1, …, 255,

where n_i is the number of pixels with gray level i and n is the total number of pixels in the image.
2) Then compute the histogram integral H′:

H′(i) = Σ_{j=0}^{i} H(j).

3) Finally compute the equalized image I:

I(x, y) = H′[I_temp(x, y)].
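The three steps can be sketched directly from the formulas above; rescaling the cumulative mapping to [0, 255] is our assumption, since the source states the mapping only up to scale:

```python
import numpy as np

def hist_equalize(img):
    """Histogram equalisation following the steps above:
    H(i) = n_i / n, H'(i) = sum of H(j) for j <= i, and
    I(x, y) = H'[I_temp(x, y)], rescaled to [0, 255]."""
    img = np.asarray(img, dtype=np.uint8)
    n_i = np.bincount(img.ravel(), minlength=256)   # pixel count per gray level
    H = n_i / img.size                              # normalised histogram H(i)
    H_cum = np.cumsum(H)                            # histogram integral H'(i)
    return np.round(H_cum[img] * 255).astype(np.uint8)
```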
Coarse eye-locating module. The input of this module is the preprocessed image I. I is projected vertically, and the coordinate of the projection peak between 1/3 and 1/2 of the curve, which corresponds to the middle of the nose, is obtained. Using this coordinate, I_face is cropped in the vertical direction to obtain the rough eye region. Finally, splitting the rough eye region at the center of its width separates the two eyes, yielding the module's outputs: the left-eye region I_left and the right-eye region I_right.
The detailed coarse eye localization process is as follows:
(1) Compute the vertical gray-level projection curve P_y(x) of I using formula (20):

P_y(x) = (1/w) Σ_{y=1}^{w} I(x, y),   (20)

where w is the width of the image.
(2) The coordinate of the peak of P_y(x) between 1/3 and 1/2 of the curve corresponds to the middle of the nose; using this coordinate, I_face is cropped in the vertical direction to obtain the rough eye region.
(3) Taking the center of the eye region's width as the boundary, the region is split in half in the vertical direction, separating the two eyes and yielding the left-eye region I_left and the right-eye region I_right.
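A sketch of the coarse localisation above. The peak is searched between 1/3 and 1/2 of the image height, the rows above the nose middle are kept as the rough eye band, and the band is split at the centre of its width; the exact crop bounds are not specified in the source and are assumptions here:

```python
import numpy as np

def coarse_eye_regions(face):
    """Coarse eye localisation: vertical grey-level projection
    P_y(x) = (1/w) * sum over y of I(x, y), nose peak between h/3
    and h/2, crop above it, split into left and right eye regions."""
    face = np.asarray(face, dtype=float)
    h, w = face.shape
    P_y = face.sum(axis=1) / w                   # mean grey level of each row
    band = P_y[h // 3:h // 2]
    nose_row = h // 3 + int(np.argmax(band))     # projection peak: nose middle
    eyes = face[:nose_row, :]                    # rough eyebrow-and-eye band
    return eyes[:, :w // 2], eyes[:, w // 2:]    # left region, right region
```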
Fine pupil-locating module
The inputs of this module are the eye regions I_left and I_right obtained by coarse eye localization. The radial symmetry transform is applied to I_left and I_right separately to locate each pupil precisely and obtain the final result. The radial symmetry transform is an effective operator for detecting radially symmetric regions: by choosing the direction of the gray-level gradient, it can detect either bright spots or dark spots. The gradient has a positive direction, pointing from dark to bright, and a negative direction, pointing from bright to dark, which matches the characteristics of the pupil. To detect the pupil region effectively, the circular symmetry of the pupil is fully exploited: for each pixel, only the pixel it affects in the negative direction is computed, because negative gradient directions point toward the pupil. As the detection radius grows, all pupil-edge pixels aggregate in the negative direction at the pupil center. The edge pixels of bright spots, by contrast, do not aggregate at a center, because in the negative direction they point away from the center. In addition, eyelashes, hair and spectacle frames lack radial symmetry, so their influence is not registered in the negative direction. This detection technique therefore effectively avoids interference from such noise. The concrete computation steps are as follows:
(1) Convolve the eye image I_left or I_right obtained from the coarse eye-locating module with the horizontal and vertical Sobel operators to compute the edge gradient image g(p) = [g_x(p), g_y(p)].
(2) Initialize the search radius N = 5, S_n_max = 0, S_max = 0.
(3) While S_max ≥ S_n_max, carry out the following steps:
(a) S_max = S_n_max;
(b) for n = 2, …, N: first compute P_-ve(p) according to formula (5), and obtain O_n(P_-ve(p)) and M_n(P_-ve(p)) according to formulas (7) and (9); then compute F_n(p) according to formula (11), with parameters α = 2 and k_n = 9.9; finally obtain S_n according to formula (10), where A_n is a two-dimensional Gaussian kernel of size n × n with variance σ = 0.25n;
(c) compute S according to formula (13) and select the position of its maximum value as the pupil center;
(d) N = N + 2.
(4) When S_max < S_n_max, the iterative search ends and the position of the pupil center is obtained.
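A sketch of this iterative search. Only the negatively affected pixels are accumulated, matching the dark-target property of the pupil; α = 2 and k_n = 9.9 follow the text, while the stopping bookkeeping is a simplification of the S_max / S_n_max test and the cap on the radius is an added safeguard:

```python
import numpy as np
from scipy.ndimage import sobel, gaussian_filter

def locate_pupil(eye, alpha=2.0, k=9.9):
    """Fine pupil localisation: radial symmetry accumulated only at the
    negatively affected pixels P_-ve, averaged over radii n = 2..N,
    with N growing by 2 until the peak strength stops improving.
    Returns the (row, col) of the estimated pupil centre."""
    eye = np.asarray(eye, dtype=float)
    gx, gy = sobel(eye, axis=1), sobel(eye, axis=0)   # Sobel gradient g(p)
    mag = np.hypot(gx, gy)
    m = mag > 1e-9
    ys, xs = np.nonzero(m)
    ux, uy = gx[m] / mag[m], gy[m] / mag[m]

    def symmetry(N):
        """Average of S_n over n = 2..N, dark-target (P_-ve) form."""
        S = np.zeros_like(eye)
        for n in range(2, N + 1):
            O = np.zeros_like(eye)
            M = np.zeros_like(eye)
            px = np.clip(np.rint(xs - n * ux).astype(int), 0, eye.shape[1] - 1)
            py = np.clip(np.rint(ys - n * uy).astype(int), 0, eye.shape[0] - 1)
            np.add.at(O, (py, px), 1.0)               # edges aggregating here
            np.add.at(M, (py, px), mag[m])
            F = (M / k) * (np.minimum(O, k) / k) ** alpha
            S += gaussian_filter(F, sigma=max(0.25 * n, 0.5))
        return S / (N - 1)

    N, best, center = 5, -np.inf, (0, 0)
    while N <= min(eye.shape) // 2:                   # safeguard radius cap
        S = symmetry(N)
        peak = float(S.max())
        if peak <= best:                              # no further improvement
            break
        best = peak
        center = tuple(int(v) for v in np.unravel_index(int(np.argmax(S)), S.shape))
        N += 2
    return center
```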
Beneficial effects of the technical solution of the present invention
The CAS-PEAL face database was created by the Institute of Computing Technology, Chinese Academy of Sciences. It contains 99,450 head-and-shoulder images of 1,040 Chinese subjects, all collected in a dedicated acquisition environment, covering four main variation conditions: pose, expression, accessories and illumination. The IMM face database was created by the Technical University of Denmark and contains 240 face images with different poses, expressions and illumination. To evaluate the present invention, 500 images were chosen from the IMM and CAS-PEAL face databases, covering frontal to profile poses, long hair to hats, and faces without glasses to faces with various glasses.
Faces are detected with the face detection method proposed by Mikael Nilsson, J. Nordberg and Ingvar Claesson in "Face detection using local SMQT features and split up SNOW classifier" (IEEE International Conference on Acoustics, Speech, and Signal Processing, Honolulu, USA, 2007, 589-592); the result is shown in Fig. 6. The detected face image is preprocessed by median filtering and histogram equalization: Fig. 7 shows the face image before processing, Fig. 8 the image after median filtering, and Fig. 9 the image after histogram equalization.
The vertical projection curve of the equalized image is computed, as shown in Fig. 10. Because the eye region has low gray values and the nose region high gray values, a distinct "valley-peak-valley" pattern forms on the vertical projection curve, corresponding to the eye region: the abscissa of the "peak" is the position of the middle of the nose, and the two "valleys" correspond to the left-eye and right-eye regions, as shown in Fig. 11.
The radial symmetry transform is applied to the single eye region shown in Fig. 12 to locate the pupil; the result is shown in Fig. 13. After the two eyes are located separately, the final result is obtained as shown in Fig. 14, where the green crosses mark the final pupil positions.
Table 1 shows the pupil localization performance. As can be seen from Table 1, the proposed method detects at least one pupil in almost all images; both pupils are mislocated in only 9 images. Moreover, both pupils are located accurately in 95.4% of the images. Table 2 shows the distribution of the detected pupil positions relative to the exact pupil positions. Here, whole images are no longer considered; instead the left and right eyes are counted separately, so the total number of eyes is 500 × 2 = 1000. As can be seen from Table 2, pupils with a localization error within 5 pixels account for 95.5%, and pupils with a localization error above 10 pixels account for only 2.6%.
Table 1. Pupil localization performance

Eye localization | Accuracy |
---|---|
At least one pupil located accurately | 98.2% (491/500) |
Both pupils located accurately | 95.4% (477/500) |
Both pupils located inaccurately | 1.8% (9/500) |
Table 2. Detection rates at different localization errors

Distance from exact pupil position | Count |
---|---|
Within 5 pixels | 955 (95.5%) |
5 to 10 pixels | 19 (1.9%) |
10 to 15 pixels | 8 (0.8%) |
15 to 20 pixels | 6 (0.6%) |
More than 20 pixels | 12 (1.2%) |
The above is only a preferred embodiment of the present invention, but the scope of protection of the present invention is not limited thereto. Any equivalent substitution or change made, within the technical scope disclosed by the present invention, by a person familiar with the art according to the technical scheme and inventive concept of the present invention shall be encompassed within the scope of protection of the present invention.
The abbreviations and key terms involved in the present invention are defined as follows:
RST: Radial Symmetry Transform.
SMQT: Successive Mean Quantization Transform.
SNoW: Sparse Network of Winnows.
Claims (5)
1. A method for locating the pupil in face video, characterized in that it comprises a face detection module, an image preprocessing module, a coarse eye-locating module and a fine pupil-locating module; the input video image passes through the face detection module, the image preprocessing module, the coarse eye-locating module and the fine pupil-locating module in turn, and the position of the pupil center is finally obtained.
2. The method for locating the pupil in face video, characterized in that the processing of the face detection module is: the input of this module is the target image I_orig; using the face detection method based on the Successive Mean Quantization Transform and a Sparse Network of Winnows classifier, it detects the face image I_face containing the eyes, which is the output of the module.
3. The method for locating the pupil in face video, characterized in that the processing of the image preprocessing module is:
(1) Median filtering
A 3 × 3 window is used to median-filter the face image: each point of the image corresponds to a 3 × 3 window centered on it; the 8 points surrounding the window center are sorted by gray value in descending order, and the median of the gray values in the window, i.e. the average of the two middle values after sorting, replaces the gray value of the center point, completing the median filtering and yielding the image I_temp;
(2) Histogram equalization
Histogram equalization is applied to the median-filtered image I_temp; histogram equalization maps one image to another image with a uniform histogram through a gray-level transformation whose mapping function is a cumulative distribution function; the concrete steps are as follows:
1) first compute the normalized gray-level histogram H of I_temp:
H(i) = n_i / n, i = 0, 1, …, 255,
where n_i is the number of pixels with gray level i and n is the total number of pixels in the image;
2) then compute the histogram integral H′(i) = Σ_{j=0}^{i} H(j);
3) finally compute the equalized image I:
I(x, y) = H′[I_temp(x, y)].
4. The method for locating the pupil in face video, characterized in that the processing of the coarse eye-locating module is:
(1) compute the vertical gray-level projection curve of I,
P_y(x) = (1/w) Σ_{y=1}^{w} I(x, y),
where w is the width of the image;
(2) the coordinate of the peak of P_y(x) between 1/3 and 1/2 of the curve corresponds to the middle of the nose; using this coordinate, I_face is cropped in the vertical direction to obtain the rough eye region;
(3) taking the center of the eye region's width as the boundary, the region is split in half in the vertical direction, separating the two eyes and yielding the left-eye region I_left and the right-eye region I_right.
5. The method for locating the pupil in face video, characterized in that the processing of the fine pupil-locating module is:
(1) convolve the eye image I_left or I_right obtained from the coarse eye-locating module with the horizontal and vertical Sobel operators to compute the edge gradient image
g(p) = [g_x(p), g_y(p)] = [I * Sobel_hor, I * Sobel_ver],
where '*' denotes convolution, Sobel_hor is the horizontal Sobel operator and Sobel_ver is the vertical Sobel operator;
(2) initialize the search radius N = 5, S_n_max = 0, S_max = 0;
(3) while S_max ≥ S_n_max, carry out the following steps:
(a) S_max = S_n_max;
(b) for n = 2, 3, …, N, compute the corresponding orientation projection image O_n and magnitude projection image M_n; for a given point P, O_n and M_n are computed from the positively and negatively affected pixels P_+ve and P_-ve, which are functions of the gradient g(p); the positively affected pixel is the point at distance n from P in the direction the gradient vector g(p) points, and the negatively affected pixel is the point at distance n from P in the opposite direction;
from the gradient vector g(p), the negatively affected pixel P_-ve and the updates of O_n[P_-ve(p)] and M_n[P_-ve(p)] are computed as
P_-ve(p) = p − round( n · g(p) / |g(p)| ),   (5)
O_n[P_-ve(p)] = O_n[P_-ve(p)] − 1,   (7)
M_n[P_-ve(p)] = M_n[P_-ve(p)] − |g(p)|,   (9)
where 'round' rounds each element of a vector to its nearest integer and |·| denotes the vector modulus;
then compute
F_n(p) = (M_n(p) / k_n) (|Õ_n(p)| / k_n)^α,   (11)
where α is the radial strictness parameter, with α = 2 and k_n = 9.9, and k_n is the scale factor used to normalize O_n and M_n across different radii;
when the detection radius is n, the radial symmetry strength is
S_n = F_n * A_n,   (10)
where '*' denotes convolution and A_n is a two-dimensional Gaussian kernel of size n × n with variance σ = 0.25n;
(c) the final radial symmetry transform S is the average of the results S_n over all detection radii n,
S = (1/|N|) Σ_{n∈N} S_n,   (13)
and the position of its maximum value is selected as the pupil center;
(d) N = N + 2;
(4) when S_max < S_n_max, the iterative search ends and the position of the pupil center is obtained.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2013103764510A CN103440476A (en) | 2013-08-26 | 2013-08-26 | Locating method for pupil in face video |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103440476A true CN103440476A (en) | 2013-12-11 |
Family
ID=49694169
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2013103764510A Pending CN103440476A (en) | 2013-08-26 | 2013-08-26 | Locating method for pupil in face video |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103440476A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1686051A (en) * | 2005-05-08 | 2005-10-26 | 上海交通大学 | Canthus and pupil location method based on VPP and improved SUSAN |
CN101751551A (en) * | 2008-12-05 | 2010-06-23 | 比亚迪股份有限公司 | Method, device, system and device for identifying face based on image |
US20110164825A1 (en) * | 2005-11-25 | 2011-07-07 | Quantum Signal, Llc | Dot templates for object detection in images |
2013-08-26 | Application CN2013103764510A filed in CN; published as CN103440476A | Status: Pending |
Non-Patent Citations (2)
Title |
---|
Liu Cuixiang et al.: "Face detection algorithm based on successive mean quantization transform", 《电视技术》 (Video Engineering) * |
Tang Kun: "Research on facial feature point localization algorithms", 《中国优秀硕士学位论文全文数据库 信息科技辑》 (China Master's Theses Full-text Database, Information Science and Technology Series) * |
Cited By (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104463080A (en) * | 2013-09-16 | 2015-03-25 | 展讯通信(天津)有限公司 | Detection method of human eye state |
CN104463081A (en) * | 2013-09-16 | 2015-03-25 | 展讯通信(天津)有限公司 | Detection method of human eye state |
CN105893916A (en) * | 2014-12-11 | 2016-08-24 | 深圳市阿图姆科技有限公司 | New method for detection of face pretreatment, feature extraction and dimensionality reduction description |
CN104657722A (en) * | 2015-03-10 | 2015-05-27 | 无锡桑尼安科技有限公司 | Eye parameter detection equipment |
CN104657722B (en) * | 2015-03-10 | 2017-03-08 | 吉林大学 | Eye parameter detection equipment |
CN104766059A (en) * | 2015-04-01 | 2015-07-08 | 上海交通大学 | Rapid and accurate human eye positioning method and sight estimation method based on human eye positioning |
CN104766059B (en) * | 2015-04-01 | 2018-03-06 | 上海交通大学 | Quick accurate human-eye positioning method and the gaze estimation method based on human eye positioning |
CN104835156A (en) * | 2015-05-05 | 2015-08-12 | 浙江工业大学 | Non-woven bag automatic positioning method based on computer vision |
CN104835156B (en) * | 2015-05-05 | 2017-10-17 | 浙江工业大学 | A kind of non-woven bag automatic positioning method based on computer vision |
CN105184269A (en) * | 2015-09-15 | 2015-12-23 | 成都通甲优博科技有限责任公司 | Extraction method and extraction system of iris image |
CN105205480A (en) * | 2015-10-31 | 2015-12-30 | 潍坊学院 | Complex scene human eye locating method and system |
CN105205480B (en) * | 2015-10-31 | 2018-12-25 | 潍坊学院 | Human-eye positioning method and system in a kind of complex scene |
CN106063702A (en) * | 2016-05-23 | 2016-11-02 | 南昌大学 | A kind of heart rate detection system based on facial video image and detection method |
CN106127160A (en) * | 2016-06-28 | 2016-11-16 | 上海安威士科技股份有限公司 | A kind of human eye method for rapidly positioning for iris identification |
CN106203375A (en) * | 2016-07-20 | 2016-12-07 | 济南大学 | A kind of based on face in facial image with the pupil positioning method of human eye detection |
CN106326880A (en) * | 2016-09-08 | 2017-01-11 | 电子科技大学 | Pupil center point positioning method |
CN106919933A (en) * | 2017-03-13 | 2017-07-04 | 重庆贝奥新视野医疗设备有限公司 | The method and device of Pupil diameter |
CN107808397A (en) * | 2017-11-10 | 2018-03-16 | 京东方科技集团股份有限公司 | Pupil positioning device, pupil positioning method and Eye-controlling focus equipment |
CN107808397B (en) * | 2017-11-10 | 2020-04-24 | 京东方科技集团股份有限公司 | Pupil positioning device, pupil positioning method and sight tracking equipment |
CN108182380A (en) * | 2017-11-30 | 2018-06-19 | 天津大学 | A kind of flake pupil intelligent measurement method based on machine learning |
CN108182380B (en) * | 2017-11-30 | 2023-06-06 | 天津大学 | Intelligent fisheye pupil measurement method based on machine learning |
CN108090463A (en) * | 2017-12-29 | 2018-05-29 | 腾讯科技(深圳)有限公司 | Object control method, apparatus, storage medium and computer equipment |
CN108427926A (en) * | 2018-03-16 | 2018-08-21 | 西安电子科技大学 | A kind of pupil positioning method in gaze tracking system |
CN108648201A (en) * | 2018-05-14 | 2018-10-12 | 京东方科技集团股份有限公司 | Pupil positioning method and device, storage medium, electronic equipment |
CN109558825A (en) * | 2018-11-23 | 2019-04-02 | 哈尔滨理工大学 | A kind of pupil center's localization method based on digital video image processing |
CN110472521A (en) * | 2019-07-25 | 2019-11-19 | 中山市奥珀金属制品有限公司 | A kind of Pupil diameter calibration method and system |
CN110472521B (en) * | 2019-07-25 | 2022-12-20 | 张杰辉 | Pupil positioning calibration method and system |
CN111428680A (en) * | 2020-04-07 | 2020-07-17 | 深圳市华付信息技术有限公司 | Pupil positioning method based on deep learning |
CN111428680B (en) * | 2020-04-07 | 2023-10-20 | 深圳华付技术股份有限公司 | Pupil positioning method based on deep learning |
CN113366491A (en) * | 2021-04-26 | 2021-09-07 | 华为技术有限公司 | Eyeball tracking method, device and storage medium |
WO2022226747A1 (en) * | 2021-04-26 | 2022-11-03 | 华为技术有限公司 | Eyeball tracking method and apparatus and storage medium |
CN114020155A (en) * | 2021-11-05 | 2022-02-08 | 沈阳飞机设计研究所扬州协同创新研究院有限公司 | High-precision sight line positioning method based on eye tracker |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103440476A (en) | Locating method for pupil in face video | |
CN100458831C (en) | Human face model training module and method, human face real-time certification system and method | |
CN104091147B (en) | A kind of near-infrared eyes positioning and eye state identification method | |
CN101923645B (en) | Iris splitting method suitable for low-quality iris image in complex application context | |
CN102902967B (en) | Method for positioning iris and pupil based on eye structure classification | |
CN103632136B (en) | Human-eye positioning method and device | |
CN101339607B (en) | Human face recognition method and system, human face recognition model training method and system | |
CN102708361B (en) | Human face collecting method at a distance | |
CN103093215B (en) | Human-eye positioning method and device | |
CN103902962B (en) | One kind is blocked or the adaptive face identification method of light source and device | |
CN102521575B (en) | Iris identification method based on multidirectional Gabor and Adaboost | |
CN104268598B (en) | Human leg detection method based on two-dimensional scanning lasers | |
CN103886589A (en) | Goal-oriented automatic high-precision edge extraction method | |
CN103136504A (en) | Face recognition method and device | |
Kang et al. | A new multi-unit iris authentication based on quality assessment and score level fusion for mobile phones | |
Rouhi et al. | A review on feature extraction techniques in face recognition | |
CN110728185B (en) | Detection method for judging existence of handheld mobile phone conversation behavior of driver | |
CN103279752B (en) | A kind of eye locating method based on improving Adaboost algorithm and Face geometric eigenvector | |
CN106203375A (en) | A kind of based on face in facial image with the pupil positioning method of human eye detection | |
CN103810491A (en) | Head posture estimation interest point detection method fusing depth and gray scale image characteristic points | |
CN104616319A (en) | Multi-feature selection target tracking method based on support vector machine | |
CN102411709A (en) | Iris segmentation recognition method | |
CN103186790A (en) | Object detecting system and object detecting method | |
CN103605993B (en) | Image-to-video face identification method based on distinguish analysis oriented to scenes | |
CN103632137A (en) | Human iris image segmentation method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 20131211 |