CN102930278A - Human eye sight estimation method and device - Google Patents

Human eye sight estimation method and device

Info

Publication number
CN102930278A
CN102930278A CN2012103929754A CN201210392975A
Authority
CN
China
Prior art keywords
window
face
eye
people
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2012103929754A
Other languages
Chinese (zh)
Inventor
车明
常轶松
刘学毅
李维超
秦超
黎贺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN2012103929754A priority Critical patent/CN102930278A/en
Publication of CN102930278A publication Critical patent/CN102930278A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a human eye sight estimation method and device, relating to the field of human-computer interaction. The method comprises the following steps: coarsely locating the eyes within the face region by taking roughly the 1/2 to 7/8 band of the face image as the eye region; determining the vertical coordinate of the eyes with a hybrid integral projection function to obtain the accurate eye boundary; scanning the accurate eye boundary for the pupil with a window whose size, set from an empirical value, approximates the eyeball, taking the window with the smallest gray-value sum as the pupil and its center as the pupil center; obtaining the inner-eye-corner coordinate inside the accurate eye boundary with a corner-matching algorithm; and feeding the pupil center coordinate and the inner-eye-corner coordinate into a sight estimation model to determine the sight direction. The sight estimation model improves the accuracy of the sight estimation, and the hardware structure is designed to consume few resources while maintaining a high detection rate.

Description

Human eye sight estimation method and device
Technical field
The present invention relates to the field of human-computer interaction, and in particular to a human eye sight estimation method and device.
Background art
HCI (Human-Computer Interaction) studies the technology of communication between humans and computers, with the goal of making that communication more natural and efficient. The eyes are the most expressive feature of the human face, and their motion plays a very important role in conveying and exchanging information. Extracting and interpreting eye information from captured images has therefore become a hot research problem in the field of human-computer interaction.
Facial feature point detection determines the position and size of each facial feature in an image, and has a wide range of applications: surveillance, tracking, human-computer interaction, intelligent robots, sight estimation and so on. In 1995, Freund and Schapire proposed the AdaBoost algorithm, which adaptively adjusts the error rate of its hypotheses according to the feedback of the weak learners, greatly improving detection accuracy without reducing efficiency. Viola proposed applying the integral image to the computation of the feature values, which markedly increased the computation speed.
Sight estimation has great development potential in many fields such as human-computer interaction, medical diagnosis, aviation and assistance for the disabled, so the technique has received wide attention in recent years. A common research approach realizes sight estimation by setting up reference light sources and three-dimensional reconstruction.
In realizing the present invention, the inventors found that the prior art has at least the following shortcomings and defects:
1) owing to the characteristics of the above algorithm, obtaining a face detection result takes a long time;
2) reference light sources require a complicated experimental setup and strict lighting conditions, which limits the range of application.
Summary of the invention
The present invention provides a human eye sight estimation method and device that shorten the time needed to obtain a face detection result and broaden the range of application, as detailed in the following description:
A human eye sight estimation method, characterized in that the method comprises the following steps:
(1) acquiring an image containing a face and performing bitmap conversion to obtain an RGB bitmap; converting the RGB bitmap into a gray-scale map and a skin-color binary map;
(2) scaling the gray-scale map level by level and scanning the image at each scale with a window, the scanning window size being 20×20; computing the integral image and squared integral image of the gray-scale map inside the scanning window;
(3) computing weak classifiers from the integral image data and squared integral image data, accumulating the weak classifier results of each stage and comparing them against the corresponding strong classifier threshold to eliminate non-face windows; judging a candidate window that passes all strong classifiers to be a face image;
(4) merging, on a Nios II core processor, the candidate windows identified as faces to obtain the final face region; accurately locating the face area from the final face region, the skin-color binary map and the skin-color model to obtain the face region;
(5) coarsely locating the eyes within the face region by first taking roughly the 1/2 to 7/8 band of the face image as the eye region; then determining the vertical coordinate of the eyes with a hybrid integral projection function to obtain the accurate eye boundary;
(6) scanning the accurate eye boundary with a window whose size, set from an empirical value, approximates the eyeball; taking the window with the smallest gray-value sum as the pupil, and its center as the pupil center;
(7) taking the obtained pupil center as a reference, cropping an inner-eye-corner window containing the inner eye corner and preprocessing it with gray-scale stretching; then extracting candidate inner-eye-corner points in the window with the SUSAN operator and a corner detection operator, and finally screening out the correct inner-eye-corner coordinate;
(8) feeding the pupil center coordinate and the correct inner-eye-corner coordinate into the sight estimation model on a PC to determine the sight direction.
Merging the candidate windows identified as faces is specifically:
1) when the second face frame is less than 1/2 of the first face frame's width away from the first face frame, merging the first and second face frames into one class, and merging the remaining face frames with the first face frame in turn whenever the condition is satisfied;
2) computing the final face area from each class whose face frame count exceeds a threshold.
Computing the final face area from a class whose face frame count exceeds the threshold is specifically:
averaging the top-left corner coordinates of all frames in the class and taking the result as the top-left corner of the integrated frame; computing the top-right, bottom-left and bottom-right corner coordinates in the same way; the four corner coordinates then determine the final face area.
The skin-color model is specifically (Cg, Cb, Cr being the components used to build the skin-color binary map):

$$\frac{(Cg-107)^2+(Cb-110)^2}{12.25^2} \le 1$$
Cr∈[260-Cg,280-Cg]
Cg∈[85,135]
The sight estimation model is specifically:

$$p\begin{pmatrix} x_s \\ y_s \\ 1 \end{pmatrix} = H \begin{pmatrix} x_p \\ y_p \\ 1 \end{pmatrix}$$

where (x_s, y_s) is the screen gaze point coordinate, (x_p, y_p) is the pupil coordinate in the photo, H is the projection matrix between the screen plane and the photo plane, and p is a scale factor.
Extracting the candidate inner-eye-corner points in the inner-eye-corner window and finally screening out the correct inner-eye-corner coordinate is specifically:
1) if there is only one candidate corner point, taking that candidate as the correct inner-eye-corner point;
2) if there are two candidate corner points, selecting the one farther from the pupil center as the correct inner-eye-corner point;
3) if there are three or more candidate corner points, screening according to the following algorithm:

$$X_{\max} = \max_{(x,y)\in S} x \qquad Y_{\min} = \min_{(x,y)\in S} y$$

$$T=\{(x,y) \mid (X_{\max}-x)<5 \,\cap\, (y-Y_{\min})<5,\ (x,y)\in S\}$$

$$C_x = \mathrm{mean}(T_x) \qquad C_y = \mathrm{mean}(T_y)$$

where S is the set of candidate corner points, X_max is the maximum abscissa of all points in S, Y_min is the minimum ordinate of all points in S, T is the set of points in S whose horizontal and vertical coordinates each differ from those of the point (X_max, Y_min) by no more than 5 pixels, and the point (C_x, C_y) is the selected correct inner-eye-corner coordinate; mean denotes averaging.
A human eye sight estimation device, comprising:
a face image acquisition module for acquiring a face image;
a bitmap conversion module for performing bitmap conversion on the face image to obtain an RGB bitmap;
a gray-scale map module for converting the RGB bitmap into a gray-scale map;
a skin-color binary map module for converting the RGB bitmap into a skin-color binary map;
an integral image module for computing the integral image and squared integral image to obtain integral image data and squared integral image data;
a weak classifier computing module for computing on the integral image data and squared integral image data and eliminating non-face windows;
a Nios II core processor for merging face windows to obtain the final face region, and for accurately locating the face area from the final face region, the skin-color binary map and the skin-color model to obtain the face region;
an eye feature point detection module for coarsely locating the eyes within the face region, taking the 1/2 to 7/8 band of the face image as the eye region; determining the vertical coordinate of the eyes with the hybrid integral projection function to obtain the accurate eye boundary; scanning the accurate eye boundary with a window whose size, set from an empirical value, approximates the eyeball; taking the window with the smallest gray-value sum as the pupil and its center as the pupil center; obtaining the correct inner-eye-corner coordinate; and feeding the pupil center coordinate and the correct inner-eye-corner coordinate into the sight estimation model on a PC to determine the sight direction.
The skin-color binary map module obtains its result as the logical AND of the Cg-Cb and Cg-Cr component classifiers; the hardware design is: Cg and Cb are each put through an absolute-value subtraction against 107 and 110 respectively, realized in a two-stage pipeline; each result is represented with 1 bit, and the results of 32 pixels are packed together and stored in a buffer.
The integral image module is realized with a four-stage pipeline: first, the integral image computation is delayed by one cycle, during which the gray value is squared; in the second stage, the current pixel value is added to the left accumulator register, the result is saved back into the left accumulator, and the address of the corresponding position is sent to the address register of the row cache; in the third stage, the data read out is added to the left accumulator; in the fourth stage, the result for the current position is output and written back into the row cache for use in the next row's computation.
The weak classifier computing module adopts a three-level parallel hardware structure:
(1) inter-window task-level parallelism: four windows under test are scanned simultaneously; the first window sets the pipeline cut timing and reads the weak classifier information, while the other three windows align with the first window in timing and share the weak classifier information it reads;
(2) intra-window task-level parallelism: inside each window, three pipelines compute weak classifiers simultaneously; according to the counts of the two kinds of weak classifiers, two pipelines compute the two-rectangle weak classifiers and a third computes the three-rectangle weak classifiers;
(3) data-level parallelism: each single pipeline is divided into 7 stages and computes one weak classifier per cycle.
The beneficial effects of the technical scheme provided by the present invention are: the sight estimation model improves the precision of the sight estimation, and the designed hardware structure consumes few resources while guaranteeing a high detection rate. The hardware system consumes 12,181 logic elements (LEs), 91 9-bit multipliers and 1,507,176 memory bits; for 640×480 images the detection rate is 12 frames per second, and for 320×240 images it is 41 frames per second.
Description of the drawings
Fig. 1 is a schematic diagram of the sight estimation model;
Fig. 2 is a flow chart of the human eye sight estimation method;
Fig. 3 is a structural diagram of the human eye sight estimation device;
Fig. 4 is the hardware structure of the bitmap conversion module;
Fig. 5 is the hardware structure of the skin-color binary map conversion;
Fig. 6 is the pipeline structure of the integral image computation;
Fig. 7 is the pipeline structure of the weak classifier computation.
Detailed description of the embodiments
To make the purpose, technical solutions and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below in conjunction with the accompanying drawings.
In order to shorten the time needed to obtain a face detection result and broaden the range of application, the embodiment of the invention provides a human eye sight estimation method and device, as detailed below:
With the arrival of the post-PC era, the SoC (System-on-Chip, embedded system-on-chip) has made significant development and progress: more, faster and more complex functional units centered on a processor can be integrated on a smaller chip. This progressively makes the design of higher-performance SoCs possible, so a hardware realization of the AdaBoost algorithm is an effective way to improve computation speed.
A human eye sight estimation method, referring to Fig. 1 and Fig. 2, comprises the following steps:
101: acquire an image containing a face and perform bitmap conversion to obtain an RGB bitmap; convert the RGB bitmap into a gray-scale map and a skin-color binary map;
102: scale the gray-scale map level by level and scan the image at each scale with a window, the scanning window size being 20×20; compute the integral image and squared integral image of the gray-scale map inside the scanning window;
Here the method adopts a zoom factor of 1.25; in a specific implementation it is set according to the needs of the practical application, and the embodiment of the invention does not limit this.
103: compute weak classifiers from the integral image data and squared integral image data, accumulate the weak classifier results of each stage and compare them against the corresponding strong classifier threshold to eliminate non-face windows; a candidate window that passes all strong classifiers is judged to be a face image;
The method uses weak classifiers trained with the OpenCV library. During training, the sample set contains a large number of face images and non-face images; every Haar feature, at every size, scans all positions of every image in the sample set, and the Haar features that best discriminate the face images from the non-face images are picked out, finally yielding the weak classifiers. Each stage's strong classifier is obtained by combining several weak classifiers, and the strong classifier's threshold is determined jointly by its weak classifiers.
The weak classifiers used by the method are based on 20×20 sample images, finally yielding 1775 two-rectangle weak classifiers and 360 three-rectangle weak classifiers; these weak classifiers form a 22-stage cascade of strong classifiers, and a candidate window that passes all strong classifiers is judged to be a face image.
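For reference, the following is a minimal software sketch of steps 101 to 103 in Python, assuming OpenCV is available; it uses OpenCV's stock pretrained frontal-face cascade as a stand-in for the patent's own 22-stage classifier, and `frame.png` is a hypothetical input file.

```python
import cv2

# Software analogue of steps 101-103. The stock OpenCV frontal-face model
# replaces the patent's own cascade of 1775 two-rectangle and 360
# three-rectangle weak classifiers; the scanning parameters below follow
# the values quoted in the text.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("frame.png")                 # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # RGB bitmap -> gray-scale map

# scaleFactor=1.25 matches the zoom factor of step 102; minSize=(20, 20)
# matches the 20x20 scanning window; minNeighbors=5 echoes the
# five-frames-per-class merging threshold used later in step 104.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.25,
                                      minNeighbors=5, minSize=(20, 20))
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
```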
104: the Nios II core processor merges the candidate windows identified as faces to obtain the final face region; the face area is accurately located from the final face region, the skin-color binary map and the skin-color model to obtain the face region;
After an image has passed through the detection of step 103, the output tends to show alternation and containment: a large face frame may contain a small one, or two face frames differing only slightly in position may both contain the same face. In this case an integration algorithm is needed to combine the face frames that contain the same face, in order to obtain the final detection result.
Merging the candidate windows identified as faces is specifically:
1) when the second face frame is less than 1/2 of the first face frame's width away from the first face frame, merging the first and second face frames into one class, and merging the remaining face frames with the first face frame in turn whenever the condition is satisfied;
2) computing the final face area from each class whose face frame count exceeds a threshold.
This step is specifically: averaging the top-left corner coordinates of all frames in the class and taking the result as the top-left corner of the integrated frame; computing the top-right, bottom-left and bottom-right corner coordinates in the same way; the four corner coordinates then determine the final face area.
In a practical implementation: first a decision condition is set to judge which face frames are close, and the close face frames are grouped into one class; the method considers the second face frame and the first face frame close if their distance is less than 1/2 of the first frame's width, in which case they are merged into one class. A threshold is also set (the method takes 5 as an example; in a specific implementation it is set according to the needs of the practical application, and the embodiment of the invention does not limit this): if a class contains at least five face frames, the method regards it as the position of a face; otherwise that class of face frames is discarded. Joint computation is then performed on each class whose face frame count exceeds 5: the top-left corner coordinates of all frames are averaged and taken as the top-left corner of the integrated frame, the top-right, bottom-left and bottom-right corner coordinates are computed in the same way, and the four corner coordinates determine the final face area. An implementation sketch is given below.
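A minimal sketch of the integration algorithm in Python; frames are assumed to be (x, y, w, h) boxes and the frame distance is measured between top-left corners, which the original does not spell out.

```python
import numpy as np

def merge_face_frames(frames, min_count=5):
    """Group candidate face frames and average their corners (step 104).

    A frame joins the seed frame's class when its top-left corner is less
    than half the seed frame's width away; classes with at least
    `min_count` members are averaged corner by corner.
    """
    merged, remaining = [], list(frames)
    while remaining:
        seed = remaining.pop(0)
        sx, sy, sw, sh = seed
        cls, keep = [seed], []
        for (x, y, w, h) in remaining:
            # "apart less than 1/2 of the first frame's width"
            if np.hypot(x - sx, y - sy) < sw / 2.0:
                cls.append((x, y, w, h))
            else:
                keep.append((x, y, w, h))
        remaining = keep
        if len(cls) >= min_count:
            a = np.array(cls, dtype=float)
            x0, y0 = a[:, 0].mean(), a[:, 1].mean()   # averaged top-left
            x1 = (a[:, 0] + a[:, 2]).mean()           # averaged right edge
            y1 = (a[:, 1] + a[:, 3]).mean()           # averaged bottom edge
            merged.append((int(x0), int(y0), int(x1 - x0), int(y1 - y0)))
    return merged
```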
The skin-color model is specifically (Cg, Cb, Cr being the components used to build the skin-color binary map):

$$\frac{(Cg-107)^2+(Cb-110)^2}{12.25^2} \le 1$$

Cr∈[260-Cg, 280-Cg]

Cg∈[85, 135]

That is, pixels satisfying the skin-color model are judged to belong to the face region, and pixels not satisfying it are judged non-face. A software sketch of this test follows.
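A vectorised sketch of the skin test in Python, assuming the Cg, Cb and Cr planes have already been computed from the RGB bitmap.

```python
import numpy as np

def skin_mask(cg, cb, cr):
    """Skin-color binary map from the Cg/Cb/Cr model above.

    cg, cb, cr: arrays of the chroma components. Returns a boolean mask
    that is True where all three model conditions hold.
    """
    circle = ((cg - 107.0) ** 2 + (cb - 110.0) ** 2) / 12.25 ** 2 <= 1.0
    band = (cr >= 260.0 - cg) & (cr <= 280.0 - cg)
    rng = (cg >= 85.0) & (cg <= 135.0)
    return circle & band & rng
```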
105: coarsely locate the eyes within the face region by first taking roughly the 1/2 to 7/8 band of the face image as the eye region; then determine the vertical coordinate of the eyes with the hybrid integral projection function to obtain the accurate eye boundary;
The hybrid integral projection function is specifically:
Let H(y) be the horizontal hybrid projection function on the interval [y1, y2]; its expression is:

$$H(y) = 0.4 \times (1 - M'(y)) + 0.6 \times D'(y)$$

$$M'(y) = \frac{M(y) - \min(M(y))}{\max(M(y)) - \min(M(y))}, \qquad M(y) = \frac{1}{x_2 - x_1}\int_{x_1}^{x_2} I(x,y)\,dx$$

$$D'(y) = \frac{D(y) - \min(D(y))}{\max(D(y)) - \min(D(y))}, \qquad D(y) = \sum_{x_i=x_1}^{x_2} |I(x_i,y) - I(x_i-1,y)|$$

The parameters 0.4 and 0.6 are specified according to the actual computation effect and are set according to the needs of the practical application in a specific implementation. A software sketch of H(y) follows.
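A sketch of H(y) in Python; the weights 0.4/0.6 are the empirical values quoted above, and the small epsilon guarding the normalisations is an addition for numerical safety.

```python
import numpy as np

def hybrid_projection(gray):
    """Horizontal hybrid projection H(y) of a gray-scale eye region.

    M(y) is the mean-intensity projection of row y, D(y) the sum of
    horizontal gray-value variations; both are min-max normalised and
    mixed as H(y) = 0.4*(1 - M'(y)) + 0.6*D'(y). The eye row is taken
    where H(y) peaks.
    """
    g = gray.astype(float)
    M = g.mean(axis=1)                           # mean projection M(y)
    D = np.abs(np.diff(g, axis=1)).sum(axis=1)   # variation projection D(y)
    Mn = (M - M.min()) / (M.max() - M.min() + 1e-9)
    Dn = (D - D.min()) / (D.max() - D.min() + 1e-9)
    return 0.4 * (1.0 - Mn) + 0.6 * Dn
```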
106: scan the accurate eye boundary with a window whose size, set from an empirical value, approximates the eyeball; take the window with the smallest gray-value sum as the pupil, and its center as the pupil center;
When scanning the accurate eye boundary, a step of 1 pixel can be adopted, scanning from left to right and from top to bottom until the whole accurate eye boundary is covered. In a specific implementation other scanning orders can also be adopted, as long as the whole scanning area is covered. A software sketch of the scan follows.
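A sketch of the pupil scan in Python, using an integral image so each window sum costs O(1); `box_h` and `box_w` stand for the empirical eyeball-sized window, whose exact value the text leaves to experience.

```python
import numpy as np

def find_pupil(eye_gray, box_h, box_w):
    """Scan an eyeball-sized box over the accurate eye boundary (step 106).

    Slides a box_h x box_w window with a 1-pixel step, left to right and
    top to bottom, and returns the centre of the window whose gray-value
    sum is smallest.
    """
    # zero-padded integral image: ii[r, c] = sum of eye_gray[:r, :c]
    ii = np.pad(eye_gray.astype(np.int64).cumsum(0).cumsum(1),
                ((1, 0), (1, 0)))
    H, W = eye_gray.shape
    best, best_rc = None, (0, 0)
    for r in range(H - box_h + 1):
        for c in range(W - box_w + 1):
            s = (ii[r + box_h, c + box_w] - ii[r, c + box_w]
                 - ii[r + box_h, c] + ii[r, c])
            if best is None or s < best:
                best, best_rc = s, (r, c)
    r, c = best_rc
    return (r + box_h // 2, c + box_w // 2)   # pupil centre (row, col)
```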
107: taking the obtained pupil center as a reference, crop an inner-eye-corner window containing the inner eye corner and preprocess it with gray-scale stretching; then extract candidate inner-eye-corner points in the window with the SUSAN operator and a corner detection operator, and finally screen out the correct inner-eye-corner coordinate;
This step is specifically:
1) crop an inner-eye-corner window containing the inner eye corner and preprocess it with gray-scale stretching;
Let the eye rectangle be of size M×N (M rows, N columns), the pupil center coordinate be (x0, y0) and the pupil radius r = M/8, and set the parameter fixY = N/3.
Taking the left eye as an example and combining the prior rules of the eyes, set the upper boundary of the inner-eye-corner window top = y0 + fixY; the lower boundary bottom = y0 − fixY; the left boundary left = x0 + r − 1; the right boundary right = left + N/3.
Similarly, the right-eye inner-eye-corner window is set as follows: top = y0 + fixY; bottom = y0 − fixY; left = right − N/3; right = x0 − r + 1.
Eye movement changes the brightness at the corner and affects the performance of the SUSAN operator, so gray-scale stretching is applied to the corner window; the formula is as follows:

$$g(x,y) = \frac{b'-a'}{b-a}\,[f(x,y)-a] + a'$$

where f(x, y) is the original gray image with gray range [a, b], and [a′, b′] is the gray range of the stretched image g(x, y). Here a = 10 and b = 140 give a good result. A software sketch of the stretching follows.
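A minimal sketch of the stretching in Python; the output range [a′, b′] = [0, 255] is an assumption, since the original leaves it unspecified.

```python
import numpy as np

def stretch_gray(win, a=10, b=140, a_out=0, b_out=255):
    """Gray-scale stretching of the inner-eye-corner window.

    Maps the input range [a, b] linearly onto [a_out, b_out] following
    g(x,y) = (b'-a')/(b-a) * (f(x,y) - a) + a'; a=10, b=140 are the
    values quoted above.
    """
    f = win.astype(float)
    g = (b_out - a_out) / float(b - a) * (f - a) + a_out
    return np.clip(g, 0, 255).astype(np.uint8)
```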
2) determine the gray difference threshold t and the non-maximum suppression threshold g in the SUSAN algorithm;
The mean square deviation is used to determine the gray difference threshold t, so that the t value adapts itself to images of different illumination intensity. Let I(x, y) denote the gray value of the pixel at (x, y) after gray-scale stretching; the mean μ and variance σ² are formulated as follows:

$$\mu = \frac{1}{MN}\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} I(x,y)$$

$$\sigma^2 = \frac{1}{MN}\sum_{x=0}^{M-1}\sum_{y=0}^{N-1}\,[I(x,y)-\mu]^2$$

where M is the image height and N the image width, in pixels. Here t = σ is set, and g is set from n_max (the defining formula appears only as an image in the original); n_max is the maximum value that n(x0) can reach, and a 7×7 circular template is chosen here, i.e. n_max = 37. The SUSAN operator is then used to detect the edge map (SA) of the inner-eye-corner window, the inner-eye-corner detection operator (CF) is convolved with the edge map, and the location of the maximum is taken as the candidate corner position.
The inner-eye-corner detection operators are specifically:
(a) a left-eye-corner detection operator and (b) a right-eye-corner detection operator, two small templates with entries ±1 (rendered only as images in the original).
The convolution formula is:

$$\mathrm{Corners} = \max_{(x,y)\in SA}\,(SA \otimes CF)$$
Finally, the inner-eye-corner coordinate is obtained by the inner-eye-corner point extraction algorithm.
The correct inner-eye-corner point extraction algorithm is specifically described as follows:
Because the inner-eye-corner locating operator usually yields more than two candidate points, the correct inner-eye-corner point is screened out according to the position features of the inner eye corner in the corner window. The concrete realization is as follows:
(1) if there is only one candidate corner point, that candidate is the correct inner-eye-corner point;
(2) if there are two candidate corner points, the one farther from the pupil center is selected as the correct inner-eye-corner point;
(3) if there are three or more candidate corner points, they are screened according to the following algorithm:

$$X_{\max} = \max_{(x,y)\in S} x \qquad Y_{\min} = \min_{(x,y)\in S} y$$

$$T=\{(x,y) \mid (X_{\max}-x)<5 \,\cap\, (y-Y_{\min})<5,\ (x,y)\in S\}$$

$$C_x = \mathrm{mean}(T_x) \qquad C_y = \mathrm{mean}(T_y)$$

where S is the set of candidate corner points, X_max is the maximum abscissa of all points in S, Y_min is the minimum ordinate of all points in S, T is the set of points in S whose horizontal and vertical coordinates each differ from those of the point (X_max, Y_min) by no more than 5 pixels, and the point (C_x, C_y) is the selected correct inner-eye-corner coordinate. A software sketch of this screening follows.
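A sketch of the three-case screening in Python, taking the candidate corner list and the pupil center as inputs.

```python
import numpy as np

def select_inner_corner(candidates, pupil):
    """Pick the correct inner eye corner from the candidate points.

    candidates: list of (x, y) points, pupil: (x, y) pupil centre.
    One candidate is taken directly; of two, the one farther from the
    pupil wins; three or more are reduced to the mean of the points
    within 5 pixels of (X_max, Y_min), as in the formulas above.
    """
    pts = np.asarray(candidates, dtype=float)
    if len(pts) == 1:
        return tuple(pts[0])
    if len(pts) == 2:
        d = np.hypot(pts[:, 0] - pupil[0], pts[:, 1] - pupil[1])
        return tuple(pts[d.argmax()])
    x_max, y_min = pts[:, 0].max(), pts[:, 1].min()
    T = pts[((x_max - pts[:, 0]) < 5) & ((pts[:, 1] - y_min) < 5)]
    return (T[:, 0].mean(), T[:, 1].mean())
```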
108: feed the pupil center coordinate and the correct inner-eye-corner coordinate into the sight estimation model on a PC to determine the sight direction.
The sight estimation model rests on two assumptions: the front of the eyeball is regarded as a plane rather than a spherical surface, i.e. no matter where the user is looking, the spatial position of the pupil center always lies in one plane; and the head remains still.
In the sight estimation model shown in Fig. 1, O is the eyeball center, N is the camera pinhole imaging center, S is the gaze point on the screen, E is the pupil center (its plane is the approximate eyeball plane), and P is the pupil center in the photo. The computation formula is as follows:

$$p\begin{pmatrix} x_s \\ y_s \\ 1 \end{pmatrix} = H \begin{pmatrix} x_p \\ y_p \\ 1 \end{pmatrix}$$

where (x_s, y_s) is the screen gaze point coordinate, (x_p, y_p) is the pupil coordinate in the photo, H is the projection matrix between the screen plane and the photo plane, and p is a scale factor. Once H is solved, the position of the gaze point on the screen can be computed from the position of the pupil in the photo.
The matrix H is computed as follows: computing H requires several pairs of coordinate data, i.e. the user gazes in turn at several points of known coordinates on the screen, a photo is taken at each point, and the pupil center coordinate is extracted from the photo. This process is called calibration. H is a 3×3 matrix; because of the scale factor p it has only 8 independent elements. To reduce error, the coordinates of 9 points are taken and a maximum likelihood method is used for the estimate.
Let $M_i=(x_{pi}, y_{pi})^T$ and $m_i=(x_{si}, y_{si})^T$, $i = 1$ to $9$, be the corresponding coordinates on the photo and on the screen, and write the 9 elements of H as $h=(H_{11}, H_{12}, H_{13}, H_{21}, H_{22}, H_{23}, H_{31}, H_{32}, H_{33})^T$. Assume the data errors have zero mean and covariance matrix $\Lambda_{m_i}$. The maximum likelihood objective function is then

$$\hat h = \arg\min_h \sum_{i} (m_i - \hat m_i)^T \Lambda_{m_i}^{-1} (m_i - \hat m_i)$$

where

$$\hat m_i = \frac{1}{H_{31}x_{pi}+H_{32}y_{pi}+H_{33}} \begin{pmatrix} H_{11}x_{pi}+H_{12}y_{pi}+H_{13} \\ H_{21}x_{pi}+H_{22}y_{pi}+H_{23} \end{pmatrix}$$

Because in the actual computation it is reasonable to regard the sampling of each point as independent, $\Lambda_{m_i} = \sigma^2 I$, where I is the identity matrix. The above problem is then actually a nonlinear least squares problem: minimizing

$$\sum_i \| m_i - \hat m_i \|^2$$

solves H. When the user's head moves, the computed H matrix is no longer valid. A calibration sketch follows.
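A calibration sketch in Python; it solves the standard homogeneous DLT system by SVD, which minimises an algebraic residual rather than the exact maximum likelihood objective above, so it is a stand-in for the described estimation.

```python
import numpy as np

def calibrate_H(pupil_pts, screen_pts):
    """Estimate the 3x3 mapping H from 9 calibration pairs.

    pupil_pts / screen_pts: (9, 2) arrays of pupil coordinates in the
    photos and the corresponding screen gaze points. Builds the usual
    A h = 0 system (two rows per point) and takes the right singular
    vector of the smallest singular value as h.
    """
    A = []
    for (xp, yp), (xs, ys) in zip(pupil_pts, screen_pts):
        A.append([xp, yp, 1, 0, 0, 0, -xs * xp, -xs * yp, -xs])
        A.append([0, 0, 0, xp, yp, 1, -ys * xp, -ys * yp, -ys])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def gaze_point(H, pupil):
    """Map a pupil centre through H to the screen fixation point."""
    v = H @ np.array([pupil[0], pupil[1], 1.0])
    return (v[0] / v[2], v[1] / v[2])   # divide out the scale factor p
```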
That is, steps 101 to 108 realize the determination of the sight direction while shortening the detection time.
To verify the accuracy of the sight estimation model, the method was tested experimentally. The experimental equipment comprises a 2048×1536-pixel camera and a 14-inch display with a resolution of 1280×800. The intrinsic matrix of the camera can be calculated from the user gazing in turn at several points of known coordinates on the screen; the camera intrinsic matrix is:

$$A = \begin{pmatrix} 1942 & 0 & 1013 \\ 0 & 1948 & 770 \\ 0 & 0 & 1 \end{pmatrix}$$

After calibration with one group of 9 points, the experiment tested 8 groups of 16 points each, 128 points in total, with the head moved to 4 different positions. The average error was less than one centimeter, a substantially satisfactory result that can well meet the needs of HCI device applications.
A human eye sight estimation device, referring to Fig. 3, comprises:
a face image acquisition module for acquiring a face image;
a bitmap conversion module for performing bitmap conversion on the face image to obtain an RGB bitmap;
a gray-scale map module for converting the RGB bitmap into a gray-scale map;
a skin-color binary map module for converting the RGB bitmap into a skin-color binary map;
an integral image module for computing the integral image and squared integral image to obtain integral image data and squared integral image data;
a weak classifier computing module for computing on the integral image data and squared integral image data and eliminating non-face windows;
a Nios II core processor for merging face windows to obtain the final face region, and for accurately locating the face area from the final face region, the skin-color binary map and the skin-color model to obtain the face region;
an eye feature point detection module for coarsely locating the eyes within the face region, taking the 1/2 to 7/8 band of the face image as the eye region; determining the vertical coordinate of the eyes with the hybrid integral projection function to obtain the accurate eye boundary; scanning the accurate eye boundary with a window whose size, set from an empirical value, approximates the eyeball; taking the window with the smallest gray-value sum as the pupil and its center as the pupil center; obtaining the correct inner-eye-corner coordinate; and feeding the pupil center coordinate and the correct inner-eye-corner coordinate into the sight estimation model on a PC to determine the sight direction.
The device uses a high-resolution camera supporting images of up to 2592×1944 resolution, and the RGB bitmap obtained is 12-bit. Considering practical conditions, 640×480 images are used when realizing the face detection algorithm; in the real-time improvement stage the parameters can be adjusted to the practical situation, reducing the initial image specification to 400×300. Eye feature point detection uses 1600×1200 images, and the bitmap uses an 8-bit format.
Exploiting the high efficiency of bitwise operations in hardware, the device uses a look-up method in which the table address is generated by combining bits, i.e. a distributed arithmetic look-up algorithm, when designing the hardware structure of the bitmap conversion module.
The bitmap-to-gray-scale conversion formula is: Y = 0.2990×R + 0.5870×G + 0.1140×B
The Y component can be expressed in the form:

$$Y = a_0 X_0 + a_1 X_1 + a_2 X_2 + a_3$$

where a_i (i = 0, 1, 2) are the conversion coefficients and a_3 is a constant (0.5, for rounding); X_i (i = 0, 1, 2) are the R, G and B color components respectively. Expressing them in binary form gives:
$$Y = \sum_{i=0}^{2} a_i \left(\sum_{m=0}^{7} X_{i,m} \times 2^m\right) + a_3 = \sum_{m=0}^{7}\left(\sum_{i=0}^{2} a_i X_{i,m}\right) \times 2^m + a_3 = \sum_{m=0}^{7} PS_m \times 2^m + a_3$$

where

$$PS_m = \sum_{i=0}^{2} a_i X_{i,m}$$
is called the partial product. Because X_{i,m} can only take the value 0 or 1, the partial product has only 8 possible values; these can be computed in advance and stored in registers, and during conversion the vector (X_{0,m}, X_{1,m}, X_{2,m}) is formed as the table address to fetch the corresponding partial product. To simplify the table structure and save storage space, only PS_m × 2^7 is stored in the look-up table, to satisfy the precision of the most significant bit (MSB). The partial products of the remaining bit weights are realized by right shifts (division by 2) after the look-up. The look-up table contents are shown below.
(The look-up table is given as an image in the original.)
Through this embedded hardware-software co-design method, the computation rate of the bitmap conversion can be greatly improved; the hardware design of this part is shown in Fig. 4. A software model of the look-up scheme follows.
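A software model of the distributed arithmetic scheme in Python; for clarity it shifts each partial product left by the bit weight instead of storing PS_m × 2^7 and right-shifting as the hardware does.

```python
# 8-entry look-up table: the partial product PS for every combination of
# one bit taken from R, G and B (address bits ordered R, G, B).
COEF = (0.2990, 0.5870, 0.1140)
LUT = [sum(c for c, bit in zip(COEF, ((a >> 2) & 1, (a >> 1) & 1, a & 1))
           if bit) for a in range(8)]

def rgb_to_gray_da(r, g, b):
    """Bit-serial gray conversion of one pixel with 8-bit channels."""
    y = 0.5                                    # rounding constant a3
    for m in range(8):                         # one bit plane per cycle
        addr = (((r >> m) & 1) << 2) | (((g >> m) & 1) << 1) | ((b >> m) & 1)
        y += LUT[addr] * (1 << m)              # shift-add the partial product
    return int(y)

assert rgb_to_gray_da(255, 255, 255) == 255    # white maps to full gray
```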
The skin-color binary map module obtains its result as the logical AND of the Cg-Cb and Cg-Cr component classifiers. The skin region in the Cg-Cb space can be expressed as a circular area, whose equation is as follows:

$$\frac{(Cg-107)^2+(Cb-110)^2}{12.25^2} \le 1 \qquad (1)$$

The skin region in the Cg-Cr space can be expressed as a band, with the following mathematical expressions:

Cr∈[260-Cg, 280-Cg] (2)

Cg∈[85, 135] (3)

If Cb, Cg and Cr satisfy the above three formulas, the current pixel is judged to belong to the skin region; otherwise it is judged non-skin. The conversion from the RGB color space to the CbCgCr space, like the conversion to the gray-value space, also uses the bitwise look-up method; after a three-stage pipeline the Cb, Cg and Cr components are obtained, and the skin judgment is then carried out.
Formula (1) determines a circular area. Computing the formula directly would require multiplication and floating-point division, with a large amount of computation and high resource consumption, so the device uses a look-up method instead: the (Cg, Cb) coordinates are first translated to the origin, and it is then judged whether the point lies within the circle of radius 12.5. To further reduce storage space, the decision is folded into the first quadrant of the coordinate system. The table size is 13×13 bits, each bit indicating whether the coordinate point at the corresponding position lies within the circle.
The hardware design of this part is shown in Fig. 5: Cg and Cb are first put through absolute-value subtractions against 107 and 110 respectively, and the look-up method then judges whether the point is within the circle, i.e. whether the Cg-Cb components satisfy formula (1). The judgment of the Cg-Cr space is comparatively simple, only requiring formula (2) to be decided. The process completes in a two-stage pipeline; each result is represented with 1 bit, and the results of 32 pixels are packed together, stored in a buffer, and written back to the DDR memory in due course for the subsequent modules. A software model of the circle table follows.
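A software model of the 13×13 circle table in Python; the radius follows formula (1)'s 12.25 (the text above quotes 12.5), so the radius value is a judgment call.

```python
import numpy as np

# Entry [dy][dx] says whether the first-quadrant point (dx, dy) lies
# inside the circle, after (Cg, Cb) has been shifted by (107, 110) and
# folded into the first quadrant by the absolute-value subtraction.
CIRCLE = np.fromfunction(lambda dy, dx: dx * dx + dy * dy <= 12.25 ** 2,
                         (13, 13))

def in_skin_circle(cg, cb):
    """Cg-Cb circle test for one pixel via the 13x13 table."""
    dx, dy = abs(int(cg) - 107), abs(int(cb) - 110)
    return dx < 13 and dy < 13 and bool(CIRCLE[dy, dx])
```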
The integral image module begins computing the two kinds of integral images once the acquisition of a frame is finished, in order to speed up the computation of the Haar feature values. The formulas are as follows, where image is the gray-scale map:

$$\mathrm{sum}(X,Y) = \sum_{x \le X,\ y \le Y} \mathrm{image}(x,y)$$

$$\mathrm{sqrsum}(X,Y) = \sum_{x \le X,\ y \le Y} \mathrm{image}(x,y)^2$$

The gray-scale data required by the computation is obtained through this module's bus interface. Two dual-port RAMs, RAM_L and RAM_L_sq, are set up to store the previous row's integral image values and squared integral image values respectively, and two left accumulator registers store, for the current row, the running sums of the gray values and of the squared gray values of all pixels to the left of (and including) the current pixel. The integral image value at the current position is the sum of three parts: the gray value of the current pixel, the left accumulator register, and the previous row's value at the corresponding position.
The two integral images are computed with a four-stage pipeline. First, to keep the two computations synchronized, the integral image computation is delayed by one cycle, during which the gray value is squared. In the second stage, the current pixel value is added to the left accumulator register, the result is saved back into the left accumulator, and the address of the corresponding position is sent to the address register of the row cache. In the third stage, the data read out (the previous row's value at the corresponding position) is added to the left accumulator. In the fourth stage, the result for the current position is output and written back into the row cache for use in the next row's computation. In operation this pipeline computes both integral images for one pixel every cycle. The results are saved into the window cache and the computation cache for use by the AdaBoost algorithm; the pipeline structure is shown in Fig. 6.
Because the present invention adopts an image scaling strategy, all Haar feature values are computed within one scanning window (21×21), so the results of the two integral images over the entire image are taken modulo 2^17 and 2^25 respectively, keeping only the low 17 and 25 bits of their binary forms instead of the 27 and 35 bits that would otherwise be needed. This saves storage space to a great extent; note that the modulo requires correction in the later computation, otherwise errors occur. A row-wise software model follows.
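A row-wise software model of the computation in Python, mirroring the left accumulators and the row cache of the pipeline; the modulo-2^17/2^25 truncation is omitted here.

```python
import numpy as np

def integral_images(gray):
    """Integral and squared-integral images, computed row by row.

    Keeps a running left accumulator per row plus a one-row cache of the
    previous row's results, so each pixel costs one add against each,
    as in the four-stage pipeline described above.
    """
    H, W = gray.shape
    ii = np.zeros((H, W), dtype=np.int64)
    sq = np.zeros((H, W), dtype=np.int64)
    row_ii = np.zeros(W, dtype=np.int64)      # previous row of ii
    row_sq = np.zeros(W, dtype=np.int64)      # previous row of sq
    for y in range(H):
        acc = acc_sq = 0                      # left accumulators
        for x in range(W):
            v = int(gray[y, x])
            acc += v                          # running row sum
            acc_sq += v * v                   # running row sum of squares
            ii[y, x] = acc + row_ii[x]        # add value from the row above
            sq[y, x] = acc_sq + row_sq[x]
        row_ii, row_sq = ii[y], sq[y]         # "write back" the row cache
    return ii, sq
```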
The weak classifier computing module adopts a three-level parallel hardware structure:
(1) inter-window task-level parallelism: four windows under test are scanned simultaneously; the first window sets the pipeline cut timing and reads the weak classifier information, while the other three windows align with the first window in timing and share the weak classifier information it reads. The storage structure is shown schematically in Fig. 7. The decision time of the four windows depends on the window that computes the most strong classifier stages; after all four windows have been judged, the scan ends and four columns of data are updated simultaneously to compute the next group of windows.
(2) intra-window task-level parallelism: inside each window, three pipelines compute weak classifiers simultaneously; according to the counts of the two kinds of weak classifiers (divided by rectangle count), two pipelines compute the two-rectangle weak classifiers and a third computes the three-rectangle weak classifiers.
(3) data-level parallelism: each single pipeline is divided into 7 stages, and this structure can compute one weak classifier per cycle.
The Nios II core processor is a user-configurable general-purpose 32-bit RISC soft-core microprocessor. It is responsible for the scheduling among the modules on the bus, and the modules communicate with it. The Nios II core processor also performs a small number of computation tasks, such as the similarity merging of the final face regions: this part involves little computation but a complex process, so its logic would be relatively difficult to design in hardware and would not exploit the advantages of a hardware design, and the system therefore realizes it in software.
Those skilled in the art will appreciate that the accompanying drawings are schematic diagrams of a preferred embodiment, and that the serial numbers of the above embodiments of the invention are for description only and do not represent the merits of the embodiments.
The above is only a preferred embodiment of the present invention and is not intended to limit the present invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (10)

1. A human eye sight estimation method, characterized in that the method comprises the following steps:
(1) acquiring an image containing a face and performing bitmap conversion to obtain an RGB bitmap; converting the RGB bitmap into a gray-scale map and a skin-color binary map;
(2) scaling the gray-scale map level by level and scanning the image at each scale with a window, the scanning window size being 20×20; computing the integral image and squared integral image of the gray-scale map inside the scanning window;
(3) computing weak classifiers from the integral image data and squared integral image data, accumulating the weak classifier results of each stage and comparing them against the corresponding strong classifier threshold to eliminate non-face windows; judging a candidate window that passes all strong classifiers to be a face image;
(4) merging, on a Nios II core processor, the candidate windows identified as faces to obtain the final face region; accurately locating the face area from the final face region, the skin-color binary map and the skin-color model to obtain the face region;
(5) coarsely locating the eyes within the face region by first taking roughly the 1/2 to 7/8 band of the face image as the eye region; then determining the vertical coordinate of the eyes with a hybrid integral projection function to obtain the accurate eye boundary;
(6) scanning the accurate eye boundary with a window whose size, set from an empirical value, approximates the eyeball; taking the window with the smallest gray-value sum as the pupil, and its center as the pupil center;
(7) taking the obtained pupil center as a reference, cropping an inner-eye-corner window containing the inner eye corner and preprocessing it with gray-scale stretching; then extracting candidate inner-eye-corner points in the window with the SUSAN operator and a corner detection operator, and finally screening out the correct inner-eye-corner coordinate;
(8) feeding the pupil center coordinate and the correct inner-eye-corner coordinate into the sight estimation model on a PC to determine the sight direction.
2. The human eye sight estimation method according to claim 1, characterized in that merging the candidate windows identified as faces is specifically:
1) when the second face frame is less than 1/2 of the first face frame's width away from the first face frame, merging the first and second face frames into one class, and merging the remaining face frames with the first face frame in turn whenever the condition is satisfied;
2) computing the final face area from each class whose face frame count exceeds a threshold.
3. The human eye sight estimation method according to claim 2, characterized in that computing the final face area from a class whose face frame count exceeds the threshold is specifically:
averaging the top-left corner coordinates of all frames in the class and taking the result as the top-left corner of the integrated frame; computing the top-right, bottom-left and bottom-right corner coordinates in the same way; the four corner coordinates then determine the final face area.
4. The human eye sight estimation method according to claim 1, characterized in that the skin-color model is specifically (Cg, Cb, Cr being the components used to build the skin-color binary map):
$$\frac{(Cg-107)^2+(Cb-110)^2}{12.25^2} \le 1$$
Cr∈[260-Cg, 280-Cg]
Cg∈[85, 135].
5. The human eye sight estimation method according to claim 1, characterized in that the sight estimation model is specifically:
$$p\begin{pmatrix} x_s \\ y_s \\ 1 \end{pmatrix} = H \begin{pmatrix} x_p \\ y_p \\ 1 \end{pmatrix}$$
where (x_s, y_s) is the screen gaze point coordinate, (x_p, y_p) is the pupil coordinate in the photo, H is the projection matrix between the screen plane and the photo plane, and p is a scale factor.
6. The human eye sight estimation method according to claim 1, characterized in that extracting the candidate inner-eye-corner points in the inner-eye-corner window and finally screening out the correct inner-eye-corner coordinate is specifically:
1) if there is only one candidate corner point, taking that candidate as the correct inner-eye-corner point;
2) if there are two candidate corner points, selecting the one farther from the pupil center as the correct inner-eye-corner point;
3) if there are three or more candidate corner points, screening according to the following algorithm:
$$X_{\max} = \max_{(x,y)\in S} x \qquad Y_{\min} = \min_{(x,y)\in S} y$$

$$T=\{(x,y) \mid (X_{\max}-x)<5 \,\cap\, (y-Y_{\min})<5,\ (x,y)\in S\}$$

$$C_x = \mathrm{mean}(T_x) \qquad C_y = \mathrm{mean}(T_y)$$
where S is the set of candidate corner points, X_max is the maximum abscissa of all points in S, Y_min is the minimum ordinate of all points in S, T is the set of points in S whose horizontal and vertical coordinates each differ from those of the point (X_max, Y_min) by no more than 5 pixels, and the point (C_x, C_y) is the selected correct inner-eye-corner coordinate; mean denotes averaging.
7. A human eye sight estimation device, characterized by comprising:
a face image acquisition module for acquiring a face image;
a bitmap conversion module for performing bitmap conversion on the face image to obtain an RGB bitmap;
a gray-scale map module for converting the RGB bitmap into a gray-scale map;
a skin-color binary map module for converting the RGB bitmap into a skin-color binary map;
an integral image module for computing the integral image and squared integral image to obtain integral image data and squared integral image data;
a weak classifier computing module for computing on the integral image data and squared integral image data and eliminating non-face windows;
a Nios II core processor for merging face windows to obtain the final face region, and for accurately locating the face area from the final face region, the skin-color binary map and the skin-color model to obtain the face region;
an eye feature point detection module for coarsely locating the eyes within the face region, taking the 1/2 to 7/8 band of the face image as the eye region; determining the vertical coordinate of the eyes with the hybrid integral projection function to obtain the accurate eye boundary; scanning the accurate eye boundary with a window whose size, set from an empirical value, approximates the eyeball; taking the window with the smallest gray-value sum as the pupil and its center as the pupil center; obtaining the correct inner-eye-corner coordinate; and feeding the pupil center coordinate and the correct inner-eye-corner coordinate into the sight estimation model on a PC to determine the sight direction.
8. The human eye sight estimation device according to claim 7, characterized in that the skin-color binary map module obtains its result as the logical AND of the Cg-Cb and Cg-Cr component classifiers, the hardware design being: Cg and Cb are each put through an absolute-value subtraction against 107 and 110 respectively, realized in a two-stage pipeline; each result is represented with 1 bit, and the results of 32 pixels are packed together and stored in a buffer.
9. The human eye sight estimation device according to claim 7, characterized in that the integral image module is realized with a four-stage pipeline: first, the integral image computation is delayed by one cycle, during which the gray value is squared; in the second stage, the current pixel value is added to the left accumulator register, the result is saved back into the left accumulator, and the address of the corresponding position is sent to the address register of the row cache; in the third stage, the data read out is added to the left accumulator; in the fourth stage, the result for the current position is output and written back into the row cache for use in the next row's computation.
10. The human eye sight estimation device according to claim 7, characterized in that the weak classifier computing module adopts a three-level parallel hardware structure:
(1) inter-window task-level parallelism: four windows under test are scanned simultaneously; the first window sets the pipeline cut timing and reads the weak classifier information, while the other three windows align with the first window in timing and share the weak classifier information it reads;
(2) intra-window task-level parallelism: inside each window, three pipelines compute weak classifiers simultaneously; according to the counts of the two kinds of weak classifiers, two pipelines compute the two-rectangle weak classifiers and a third computes the three-rectangle weak classifiers;
(3) data-level parallelism: each single pipeline is divided into 7 stages and computes one weak classifier per cycle.
CN2012103929754A 2012-10-16 2012-10-16 Human eye sight estimation method and device Pending CN102930278A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012103929754A CN102930278A (en) 2012-10-16 2012-10-16 Human eye sight estimation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2012103929754A CN102930278A (en) 2012-10-16 2012-10-16 Human eye sight estimation method and device

Publications (1)

Publication Number Publication Date
CN102930278A true CN102930278A (en) 2013-02-13

Family

ID=47645075

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012103929754A Pending CN102930278A (en) 2012-10-16 2012-10-16 Human eye sight estimation method and device

Country Status (1)

Country Link
CN (1) CN102930278A (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104835156A (en) * 2015-05-05 2015-08-12 浙江工业大学 Non-woven bag automatic positioning method based on computer vision
CN104156643B (en) * 2014-07-25 2017-02-22 中山大学 Eye sight-based password inputting method and hardware device thereof
CN106922192A (en) * 2014-12-10 2017-07-04 英特尔公司 Using the type of face detection method and device of look-up table
CN108268858A (en) * 2018-02-06 2018-07-10 浙江大学 A kind of real-time method for detecting sight line of high robust
CN108427503A (en) * 2018-03-26 2018-08-21 京东方科技集团股份有限公司 Human eye method for tracing and human eye follow-up mechanism
CN109344802A (en) * 2018-10-29 2019-02-15 重庆邮电大学 A kind of human-body fatigue detection method based on improved concatenated convolutional nerve net
CN109409298A (en) * 2018-10-30 2019-03-01 哈尔滨理工大学 A kind of Eye-controlling focus method based on video processing
CN109788219A (en) * 2019-01-18 2019-05-21 天津大学 A kind of high-speed cmos imaging sensor reading scheme for human eye sight tracking
CN109858310A (en) * 2017-11-30 2019-06-07 比亚迪股份有限公司 Vehicles and Traffic Signs detection method
WO2020029444A1 (en) * 2018-08-10 2020-02-13 初速度(苏州)科技有限公司 Method and system for detecting attention of driver while driving
CN110969084A (en) * 2019-10-29 2020-04-07 深圳云天励飞技术有限公司 Method and device for detecting attention area, readable storage medium and terminal equipment
CN112257696A (en) * 2020-12-23 2021-01-22 北京万里红科技股份有限公司 Sight estimation method and computing equipment
CN112464829A (en) * 2020-12-01 2021-03-09 中航航空电子有限公司 Pupil positioning method, pupil positioning equipment, storage medium and sight tracking system
CN113011393A (en) * 2021-04-25 2021-06-22 中国民用航空飞行学院 Human eye positioning method based on improved hybrid projection function
CN113239754A (en) * 2021-04-23 2021-08-10 泰山学院 Dangerous driving behavior detection and positioning method and system applied to Internet of vehicles
WO2021169637A1 (en) * 2020-02-28 2021-09-02 深圳壹账通智能科技有限公司 Image recognition method and apparatus, computer device and storage medium
CN113781290A (en) * 2021-08-27 2021-12-10 北京工业大学 Vectorization hardware device for FAST corner detection
CN115330756A (en) * 2022-10-11 2022-11-11 天津恒宇医疗科技有限公司 Light and shadow feature-based guide wire identification method and system in OCT image
CN115471552A (en) * 2022-09-15 2022-12-13 江苏至真健康科技有限公司 Shooting positioning method and system for portable mydriasis-free fundus camera
CN115862124A (en) * 2023-02-16 2023-03-28 南昌虚拟现实研究院股份有限公司 Sight estimation method and device, readable storage medium and electronic equipment

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1700242A (en) * 2005-06-15 2005-11-23 北京中星微电子有限公司 Method and apparatus for distinguishing direction of visual lines

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1700242A (en) * 2005-06-15 2005-11-23 北京中星微电子有限公司 Method and apparatus for distinguishing direction of visual lines

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
刘学毅: "Research on the Implementation of a Sight Estimation Algorithm Based on an Embedded SoC Hardware Architecture", China Master's Theses Full-text Database *
常轶松: "Research on an Embedded SoC Hardware Architecture for Eye Feature Detection Algorithms", China Master's Theses Full-text Database *

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104156643B (en) * 2014-07-25 2017-02-22 中山大学 Eye sight-based password inputting method and hardware device thereof
CN106922192A (en) * 2014-12-10 2017-07-04 英特尔公司 Using the type of face detection method and device of look-up table
CN106922192B (en) * 2014-12-10 2021-08-24 英特尔公司 Face detection method and apparatus using lookup table
CN104835156A (en) * 2015-05-05 2015-08-12 浙江工业大学 Non-woven bag automatic positioning method based on computer vision
CN104835156B (en) * 2015-05-05 2017-10-17 浙江工业大学 A kind of non-woven bag automatic positioning method based on computer vision
CN109858310A (en) * 2017-11-30 2019-06-07 比亚迪股份有限公司 Vehicles and Traffic Signs detection method
CN108268858A (en) * 2018-02-06 2018-07-10 浙江大学 A kind of real-time method for detecting sight line of high robust
CN108268858B (en) * 2018-02-06 2020-10-16 浙江大学 High-robustness real-time sight line detection method
CN108427503A (en) * 2018-03-26 2018-08-21 京东方科技集团股份有限公司 Human eye method for tracing and human eye follow-up mechanism
CN108427503B (en) * 2018-03-26 2021-03-16 京东方科技集团股份有限公司 Human eye tracking method and human eye tracking device
WO2020029444A1 (en) * 2018-08-10 2020-02-13 初速度(苏州)科技有限公司 Method and system for detecting attention of driver while driving
CN109344802A (en) * 2018-10-29 2019-02-15 重庆邮电大学 A kind of human-body fatigue detection method based on improved concatenated convolutional nerve net
CN109344802B (en) * 2018-10-29 2021-09-10 重庆邮电大学 Human body fatigue detection method based on improved cascade convolution neural network
CN109409298A (en) * 2018-10-30 2019-03-01 哈尔滨理工大学 A kind of Eye-controlling focus method based on video processing
CN109788219B (en) * 2019-01-18 2021-01-15 天津大学 High-speed CMOS image sensor reading method for human eye sight tracking
CN109788219A (en) * 2019-01-18 2019-05-21 天津大学 A kind of high-speed cmos imaging sensor reading scheme for human eye sight tracking
CN110969084A (en) * 2019-10-29 2020-04-07 深圳云天励飞技术有限公司 Method and device for detecting attention area, readable storage medium and terminal equipment
WO2021169637A1 (en) * 2020-02-28 2021-09-02 深圳壹账通智能科技有限公司 Image recognition method and apparatus, computer device and storage medium
CN112464829B (en) * 2020-12-01 2024-04-09 中航航空电子有限公司 Pupil positioning method, pupil positioning equipment, storage medium and sight tracking system
CN112464829A (en) * 2020-12-01 2021-03-09 中航航空电子有限公司 Pupil positioning method, pupil positioning equipment, storage medium and sight tracking system
CN112257696A (en) * 2020-12-23 2021-01-22 北京万里红科技股份有限公司 Sight estimation method and computing equipment
CN113239754A (en) * 2021-04-23 2021-08-10 泰山学院 Dangerous driving behavior detection and positioning method and system applied to Internet of vehicles
CN113011393A (en) * 2021-04-25 2021-06-22 中国民用航空飞行学院 Human eye positioning method based on improved hybrid projection function
CN113011393B (en) * 2021-04-25 2022-06-03 中国民用航空飞行学院 Human eye positioning method based on improved hybrid projection function
CN113781290B (en) * 2021-08-27 2023-01-31 北京工业大学 Vectorization hardware device for FAST corner detection
CN113781290A (en) * 2021-08-27 2021-12-10 北京工业大学 Vectorization hardware device for FAST corner detection
CN115471552A (en) * 2022-09-15 2022-12-13 江苏至真健康科技有限公司 Shooting positioning method and system for portable mydriasis-free fundus camera
CN115330756A (en) * 2022-10-11 2022-11-11 天津恒宇医疗科技有限公司 Light and shadow feature-based guide wire identification method and system in OCT image
CN115862124A (en) * 2023-02-16 2023-03-28 南昌虚拟现实研究院股份有限公司 Sight estimation method and device, readable storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN102930278A (en) Human eye sight estimation method and device
EP3323249B1 (en) Three dimensional content generating apparatus and three dimensional content generating method thereof
CN110807364B (en) Modeling and capturing method and system for three-dimensional face and eyeball motion
CN104317391B (en) A kind of three-dimensional palm gesture recognition exchange method and system based on stereoscopic vision
EP3576017A1 (en) Method, apparatus, and device for determining pose of object in image, and storage medium
CN102520796B (en) Sight tracking method based on stepwise regression analysis mapping model
CN107395958B (en) Image processing method and device, electronic equipment and storage medium
CN103514441B (en) Facial feature point locating tracking method based on mobile platform
CN108345869A (en) Driver&#39;s gesture recognition method based on depth image and virtual data
CN104091155B (en) The iris method for rapidly positioning of illumination robust
CN105389554A (en) Face-identification-based living body determination method and equipment
CN106910242A (en) The method and system of indoor full scene three-dimensional reconstruction are carried out based on depth camera
CN106796449A (en) Eye-controlling focus method and device
CN103430218A (en) Method of augmented makeover with 3d face modeling and landmark alignment
CN104408462B (en) Face feature point method for rapidly positioning
CN103839223A (en) Image processing method and image processing device
CN103870843B (en) Head posture estimation method based on multi-feature-point set active shape model (ASM)
CN111091075B (en) Face recognition method and device, electronic equipment and storage medium
CN104143086A (en) Application technology of portrait comparison to mobile terminal operating system
Vezhnevets Face and facial feature tracking for natural HumanComputer Interface
CN105389553A (en) Living body detection method and apparatus
CN111160291B (en) Human eye detection method based on depth information and CNN
CN104821010A (en) Binocular-vision-based real-time extraction method and system for three-dimensional hand information
CN106981078A (en) Sight line correction method and device, intelligent conference terminal and storage medium
CN111079625A (en) Control method for camera to automatically rotate along with human face

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20130213