CN103176607B - Eye-controlled mouse implementation method and system - Google Patents
Eye-controlled mouse implementation method and system
- Publication number
- CN103176607B CN103176607B CN201310130392.9A CN201310130392A CN103176607B CN 103176607 B CN103176607 B CN 103176607B CN 201310130392 A CN201310130392 A CN 201310130392A CN 103176607 B CN103176607 B CN 103176607B
- Authority
- CN
- China
- Prior art keywords
- eye
- eyes image
- screen
- pupil
- point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Image Processing (AREA)
Abstract
The present invention provides an eye-controlled mouse implementation method and system. The method comprises the following steps: acquiring an eye image; detecting and locating the pupil position in the eye image; establishing a mapping relationship between the eye gaze direction and the screen; monitoring the eye gaze direction in real time and, according to the mapping relationship between the gaze direction and the screen, determining the fixation point of the gaze on the screen; determining the operating mode of the mouse; and displaying the mouse at the fixation point of the gaze on the screen and operating it according to said operating mode. The invention can use the eye gaze direction to control the motion of the mouse accurately, realizing the function of an eye-controlled mouse.
Description
Technical field
The present invention relates to the field of gaze tracking technology, and in particular to an eye-controlled mouse implementation method and system.
Background art
Exploring natural and harmonious human-machine relationships has become a key area of computer research, and natural, efficient and intelligent human-computer interfaces are an important trend in the development of modern computing. However, for disabled users whose hand movement is impaired, using a conventional mouse for human-computer interaction is extremely difficult.
In the field of human-computer interaction, the eyes are an important channel of information exchange, and the direction of gaze reflects a person's attention. Applying gaze to human-computer interaction therefore offers naturalness, directness and interactivity, and has attracted wide attention. How to realize an eye-controlled mouse that uses the eye gaze direction to perform mouse functions is thus a technical problem in urgent need of a solution.
Summary of the invention
To overcome the defects in the prior art described above, it is an object of the present invention to provide an eye-controlled mouse implementation method and system capable of realizing mouse functions under the control of the eye gaze direction.
To achieve the above object of the present invention, according to one aspect of the invention, there is provided an eye-controlled mouse implementation method comprising the following steps:
S1: acquiring an eye image;
S2: detecting and locating the pupil position in the eye image;
S3: establishing a mapping relationship between the eye gaze direction and the screen;
S4: monitoring the eye gaze direction in real time and, according to the mapping relationship between the gaze direction and the screen, determining the fixation point of the gaze on the screen;
S5: determining the operating mode of the mouse, the operating modes including a mouse move mode, a click mode and a double-click mode;
S6: displaying the mouse at the fixation point of the gaze on the screen and operating it according to said operating mode.
The eye-controlled mouse implementation method of the present invention establishes the mapping relationship between the eye gaze direction and the screen, then determines the pupil center and, from the mapping relationship between the gaze direction and the screen, the fixation point of the gaze on the screen; the mouse is displayed at the fixation point on the screen and operated according to the preset operating mode, realizing the function of the eye-controlled mouse quickly and accurately.
In a preferred embodiment of the invention, the pupil localization method in step S2 is:
S21: preprocessing the eye image, increasing its overall brightness and filtering out its noise;
S22: computing the maximum between-class variance over the pixels of the eye image;
S23: from the maximum between-class variance, calculating the optimal threshold for binarizing the eye image, and binarizing it;
S24: extracting the pupil edge information;
S25: determining the pupil center and radius.
This pupil localization method locates the pupil quickly and accurately; even when the eye image is blurred, the pupil center can still be located rapidly.
In another preferred embodiment of the invention, in the step of preprocessing the eye image, the method of increasing the overall brightness of the eye image is:
S211: converting the eye image into its corresponding histogram;
S212: obtaining the gray values of all pixels of the eye image from the histogram statistics;
S213: determining a reference gray value, and performing light compensation on the gray values of the pixels of the eye image according to that reference.
By increasing the overall brightness of the eye image, the invention can perform pupil localization quickly and accurately even for dim pixels or blurred images, improving accuracy.
In yet another preferred embodiment of the invention, the reference gray value is determined by taking the mean gray value of 5%-10% of all pixels as the reference.
The invention takes the mean gray value of a subset of all pixels as the reference; the proportion chosen can be adjusted to actual needs, improving the flexibility of the method.
In a preferred embodiment of the invention, in the step of preprocessing the eye image, a two-dimensional median filter is used to filter out the noise of the eye image.
Using median filtering together with light compensation, the preprocessing of the invention can brighten and denoise eye images that are blurred, too dark, or noise-ridden, improving the accuracy of pupil localization.
In another preferred embodiment of the invention, the method of computing the maximum between-class variance over the pixels of the eye image in step S22 is:
S221: obtaining the histogram of the eye image;
S222: determining from the histogram the gray values whose pixel count in the eye image is zero;
S223: determining the gray values whose pixel count in the eye image is not zero, and computing the maximum between-class variance over those gray values only.
This maximum between-class variance computation only needs to evaluate the between-class variance for gray values whose pixel count is not zero, which speeds up the calculation.
In a preferred embodiment of the invention, a segmentation threshold separating the foreground and background of the eye image is chosen; the threshold at which the between-class variance is maximal is the optimal segmentation threshold.
The invention uses an improved maximum between-class variance method (Otsu's method) to perform adaptive threshold extraction, with the advantages of a well-chosen threshold, adaptivity and high speed.
In a preferred embodiment of the invention, the method of extracting the pupil edge information is:
if the pixel value of the central pixel is 255, the central pixel keeps the value 255 regardless of the values of its 8 neighboring pixels;
if the pixel value of the central pixel is 0 and the pixel values of all 8 neighboring pixels are 0, the central pixel is changed to 255;
in all remaining cases, the central pixel is changed to 0.
In another preferred embodiment of the invention, the pupil center and radius are determined by the least squares method.
The pupil edge extraction method of the invention and the least-squares ellipse fitting used to determine the pupil center and radius locate the pupil accurately and quickly.
In a preferred embodiment of the invention, the step of establishing the mapping relationship between the eye gaze direction and the screen is:
S31: displaying M fixation targets on the screen in turn, M being a positive integer;
S32: while the eyes fixate each target, detecting with the camera the N corresponding points on the screen formed by the light emitted by the infrared source after reflection from the pupil center, N being a positive integer;
S33: analyzing the N corresponding points detected for each target to obtain an equivalent corresponding point;
S34: establishing the correspondence between the pupil center and the fixation targets.
The fixation point of the eyes on the screen is thus determined by detecting the eye gaze direction, realizing the function of the eye-controlled mouse.
To achieve the above object of the present invention, according to a second aspect of the invention, there is provided an eye-controlled mouse implementation system comprising an infrared light source, a camera, a screen and a control unit, the control unit comprising an eye image acquisition module and a core processing module. The infrared light source and the camera establish the mapping relationship between the eye gaze direction and the screen; the eye image acquisition module acquires eye images; and the core processing module, connected to the eye image acquisition module, receives the eye images, determines the pupil center, realizes the operation of the eye-controlled mouse and displays it on the screen.
The eye-controlled mouse implementation system of the present invention can use the eye gaze direction to control the motion of the mouse accurately, realizing the function of an eye-controlled mouse.
In a preferred embodiment of the invention, the core processing module comprises a pupil center determination module, a calibration module and a gaze tracking module. The pupil center determination module is connected to the eye image acquisition module, receives the eye images and determines the pupil center; the calibration module establishes the mapping relationship between the eye gaze direction and the screen; the gaze tracking module, connected to the pupil center determination module and the calibration module respectively, determines from the pupil center given by the pupil center determination module and the established mapping relationship the fixation point of the eyes on the screen, where it simulates the mouse to realize the tracking operation. The gaze direction thus controls the motion of the mouse accurately, realizing the function of the eye-controlled mouse.
In a preferred embodiment of the invention, the system further comprises a data conversion module connected to the eye image acquisition module and the core processing module respectively, which converts the format of the eye image data and transmits the converted eye images to the core processing module.
The data conversion module of the invention performs format conversion: when the data type acquired by the eye image acquisition module differs from the data type the core processing module can process, the module converts the format of the acquired data, improving the compatibility of the pupil localization system.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will become apparent from the description or be learned by practice of the invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the invention will become apparent and readily understood from the following description of embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flow chart of the eye-controlled mouse implementation method of the present invention;
Fig. 2 is a flow chart of the pupil localization method of the present invention;
Fig. 3 is the histogram corresponding to an eye image in a preferred embodiment of the present invention;
Fig. 4 shows the eye images acquired in a preferred embodiment of the present invention;
Fig. 5 shows the effect of preprocessing on the eye image of Fig. 4;
Fig. 6 shows the binarization result for the eye image of Fig. 4;
Fig. 7 shows the pupil center localization result for the eye image of Fig. 4.
Detailed description of the invention
Embodiments of the invention are described in detail below; examples of the embodiments are shown in the drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements with the same or similar functions. The embodiments described below with reference to the drawings are exemplary and intended only to explain the invention, not to limit it.
In the description of the invention, unless otherwise specified and limited, it should be noted that the terms "mounted", "linked" and "connected" are to be understood broadly: a connection may, for example, be mechanical or electrical, internal to two elements, direct, or indirect through an intermediary. For those of ordinary skill in the art, the specific meaning of the above terms can be understood according to the specific circumstances.
The invention provides an eye-controlled mouse implementation method which, as shown in Fig. 1, comprises the following steps:
S1: acquire an eye image;
S2: detect and locate the pupil position in the eye image;
S3: establish the mapping relationship between the eye gaze direction and the screen;
S4: monitor the eye gaze direction in real time and, from the mapping relationship between the gaze direction and the screen, determine the fixation point of the gaze on the screen;
S5: determine the operating mode of the mouse, the operating modes including a mouse move mode, a click mode and a double-click mode;
S6: display the mouse at the fixation point of the gaze on the screen and operate it according to said operating mode.
In the present embodiment, the method of detecting and locating the pupil position in the eye image, as shown in Fig. 2, comprises the following steps:
S21: preprocess the eye image, increasing its overall brightness and filtering out its noise;
S22: compute the maximum between-class variance over the pixels of the eye image;
S23: from the maximum between-class variance, calculate the optimal threshold for binarizing the eye image, and binarize it;
S24: extract the pupil edge information;
S25: determine the pupil center and radius.
In the preferred embodiment of the invention, the eye image is acquired by an eye tracker, which captures video of the rotating eye and stores it in a computer. Steps S21 to S23 are then applied to each frame of the eye video to preprocess the image and binarize it adaptively; finally, steps S24 to S25 locate the pupil, specifically by extracting the pupil edge information and determining the pupil center and radius. This pupil localization method locates the pupil quickly and accurately; even when the eye image is blurred, the pupil center can still be located rapidly.
In the preferred embodiment of the invention, the concrete steps of the pupil localization method are as follows.
First step: acquire the eye image. In the present embodiment, the camera of an eye tracker captures the eye video, which is stored in the memory of a computer; Fig. 4 shows three eye images at different angles acquired by the eye tracker.
The eye image is preprocessed to increase its overall brightness and filter out its noise; the processed image is shown in Fig. 5. In the present embodiment, each frame of the eye image is preprocessed: light compensation increases the overall brightness of the image, and filtering removes its noise. In the step of preprocessing the eye image, the method of increasing the overall brightness of the eye image by light compensation is:
S211: convert the eye image into its corresponding histogram. In the present embodiment, as shown in Fig. 3, an image vision library together with Visual Studio 2010 converts the eye image into its histogram, whose abscissa is the gray value and whose ordinate is the number of pixels with that gray value.
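The histogram statistics of steps S211-S212 can be sketched as follows (a minimal illustration in Python with NumPy; the embodiment itself uses an image vision library with Visual Studio 2010, so rendering it in Python is an assumption for illustration only):

```python
import numpy as np

def gray_histogram(image):
    """Count pixels per gray value (0-255): abscissa = gray value,
    ordinate = number of pixels with that gray value."""
    image = np.asarray(image, dtype=np.uint8)
    return np.bincount(image.ravel(), minlength=256)

# Tiny synthetic "eye image": mid-gray background with a dark pupil patch.
img = np.full((8, 8), 120, dtype=np.uint8)
img[2:5, 2:5] = 10   # 9 dark pupil pixels
hist = gray_histogram(img)
```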
S212: obtain the gray values of all pixels of the eye image from the histogram statistics.
S213: determine the reference gray value, and perform light compensation on the gray values of the pixels of the eye image according to that reference. In the present embodiment, the method of determining the reference gray value is to take the mean gray value of a certain number of the pixels as the reference; in a more preferred embodiment of the invention, the mean gray value of 5%-10% of the pixels is taken as the reference, preferably the mean gray value of 8% of the pixels. The invention takes the mean gray value of a subset of all pixels as the reference, and the proportion chosen can be adjusted to actual needs, improving the flexibility of the method.
After the reference is determined, the gray values of the pixels of the eye image are light-compensated according to the reference. In the present embodiment, light compensation may be applied to the gray values of all pixels, or to the gray values of all pixels except those selected for determining the reference.
In the present embodiment, the concrete light compensation may be, but is not limited to, proportional-coefficient compensation: a fixed proportion of the reference gray value is added to the gray value of every pixel. For example, 50% of the reference gray value is added to the gray value of each pixel, thereby light-compensating the gray values of all pixels and enhancing the brightness of the eye image. By increasing the overall brightness of the eye image, the invention can perform pupil localization quickly and accurately even for dim pixels or blurred images, improving accuracy.
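The proportional-coefficient light compensation can be sketched as follows. The text only says a chosen 5%-10% of the pixels supplies the reference, so taking the *brightest* pixels (as in reference-white compensation) is an assumption here; the 8% fraction and the 50% coefficient come from the embodiment:

```python
import numpy as np

def light_compensate(image, top_fraction=0.08, coeff=0.5):
    """Brighten an eye image: take the mean gray value of the brightest
    `top_fraction` of the pixels as the reference (an assumption), then add
    `coeff` * reference to every pixel, clipped to 255, per the
    proportional-coefficient scheme described in the embodiment."""
    img = np.asarray(image, dtype=np.float64)
    flat = np.sort(img.ravel())[::-1]          # brightest first
    k = max(1, int(round(top_fraction * flat.size)))
    reference = flat[:k].mean()
    out = np.clip(img + coeff * reference, 0, 255)
    return out.astype(np.uint8), reference

img = np.array([[0, 100], [200, 200]], dtype=np.uint8)
out, ref = light_compensate(img)
```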
After the eye image has been light-compensated, its noise is filtered out. In the present embodiment the filter is nonlinear; a mean filter or a median filter may be used, preferably a median filter. In a more preferred embodiment of the invention, a two-dimensional median filter removes the noise of the eye image; the concrete procedure is as follows: a sliding window containing an odd number of points slides over the image, and the gray value of the image pixel at the window center is replaced by the median of the gray values within the window.
Since the eye image is two-dimensional, a two-dimensional filter is used, which can be expressed as
y(i,j) = Med{ f(i,j) : (i,j) ∈ S },
where S denotes the filter window, {f(i,j)} is the sequence of image pixels and y(i,j) is the output of the filter.
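A sketch of the two-dimensional median filtering step, assuming edge-replicated padding at the image border (the text does not specify border handling):

```python
import numpy as np

def median_filter_2d(image, size=3):
    """2-D median filter: slide an odd-sized window S over the image and
    replace the centre pixel with the median of the gray values in the
    window; the border is handled by edge-replicated padding here."""
    assert size % 2 == 1, "window must contain an odd number of points"
    img = np.asarray(image, dtype=np.float64)
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out

noisy = np.full((5, 5), 10.0)
noisy[2, 2] = 255.0          # a single impulse-noise pixel
filtered = median_filter_2d(noisy, size=3)
```

As expected of a median filter, the isolated impulse is removed while the flat background is left untouched.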
Using median filtering together with light compensation, the preprocessing of the invention can brighten and denoise eye images that are blurred, too dark, or noise-ridden, improving the accuracy of pupil localization.
Second step: compute the maximum between-class variance over the pixels of the eye image. In another embodiment of the invention, an improved maximum between-class variance computation may be used: instead of computing the between-class variance over all pixels, it is computed over part of the pixels only. Concretely, though not exclusively, the gray values that do not occur in the eye image are identified from its histogram, and the maximum between-class variance is computed over the remaining pixels. In the present embodiment, the method of computing the maximum between-class variance is:
S231: obtain the histogram of the eye image;
S232: determine from the histogram the gray values whose pixel count in the eye image is zero;
S233: determine the gray values whose pixel count in the eye image is not zero, and compute the maximum between-class variance over those gray values only.
This maximum between-class variance computation only needs to evaluate the between-class variance for gray values whose pixel count is not zero, which speeds up the calculation.
In a preferred embodiment of the invention, a segmentation threshold separating the foreground and background of the eye image is chosen; the threshold at which the between-class variance is maximal is the optimal segmentation threshold. The invention uses this improved maximum between-class variance method to perform adaptive threshold extraction, with the advantages of a well-chosen threshold, adaptivity and high speed.
In the present embodiment, after the preprocessed eye image has been converted to a histogram in the first step, the improved maximum between-class variance method first identifies the gray values that do not occur in the image, and computes the between-class variance only for the remaining pixels. As shown in Fig. 3, in this histogram the pixel counts for all gray values between 0-35 and between 210-255 are zero, so when the maximum between-class variance is sought, these gray values are excluded first and no between-class variance is computed for them; only the remaining pixels are evaluated.
The principle of the maximum between-class variance calculation is as follows. Let t be the segmentation threshold between the foreground and background of the image, let the foreground occupy a fraction w0 of the whole image with mean gray value u0, and let the background occupy a fraction w1 with mean gray value u1; the overall mean gray value of the image is then u. Traversing t from the minimum gray value to the maximum gray value, the t that maximizes the between-class variance g is the optimal segmentation threshold.
Let the image be M×N pixels with L gray levels (gray values 0 to L−1; here L = 256). Let N0 be the number of pixels with gray value no greater than t and N1 the number of pixels with gray value greater than t, so that w0 = N0/(M×N) and w1 = N1/(M×N). Then:
N0 + N1 = M × N (3)
w0 + w1 = 1 (4)
u = w0·u0 + w1·u1 (5)
g = w0·(u0 − u)² + w1·(u1 − u)² (6)
Substituting formula (5) into formula (6) yields formula (7):
g = w0·w1·(u0 − u1)² (7)
Taking t through the values 0 to L−1 in turn, the t at which g attains its maximum is the optimal threshold for segmenting the image.
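A sketch of the improved maximum between-class variance (Otsu) calculation, evaluating g = w0·w1·(u0 − u1)² only at gray values that actually occur in the histogram, as the embodiment describes:

```python
import numpy as np

def otsu_threshold_skip_empty(image):
    """Improved maximum between-class variance: gray values whose histogram
    count is zero are skipped entirely, so g = w0*w1*(u0-u1)^2 is only
    evaluated at thresholds t that actually occur in the image."""
    hist = np.bincount(np.asarray(image, dtype=np.uint8).ravel(), minlength=256)
    total = hist.sum()
    grays = np.arange(256)
    best_t, best_g = 0, -1.0
    for t in np.nonzero(hist)[0]:          # only occupied gray values
        w0 = hist[:t + 1].sum() / total    # foreground fraction (<= t)
        w1 = 1.0 - w0                      # background fraction (> t)
        if w0 == 0 or w1 == 0:
            continue
        u0 = (grays[:t + 1] * hist[:t + 1]).sum() / (w0 * total)
        u1 = (grays[t + 1:] * hist[t + 1:]).sum() / (w1 * total)
        g = w0 * w1 * (u0 - u1) ** 2
        if g > best_g:
            best_g, best_t = g, int(t)
    return best_t

# Synthetic bimodal "image": a dark pupil class and a bright background class.
img = np.array([20] * 50 + [220] * 50, dtype=np.uint8)
t_opt = otsu_threshold_skip_empty(img)
```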
Third step: from the maximum between-class variance, calculate the optimal threshold for segmenting the eye image, then binarize it: the gray values of pixels above the threshold t are set to p1 and the gray values of pixels below t are set to p2, with p1 > p2. The eye image is thus binarized; the result is shown in Fig. 6. In the present embodiment, the optimal segmentation threshold is obtained dynamically according to the motion of the eye, and the binarization is performed dynamically with that threshold. In the present embodiment, the binarized eye image may additionally be denoised and repaired, concretely as follows: the binarized eye image obtained by adaptive threshold extraction may contain holes in the pupil target caused by corneal reflection, noise from occluding eyelashes, and defects caused by shadows, burrs and the like, so the binarized eye image needs filtering and repair. In the present embodiment, erosion, dilation, opening and closing operations perform the denoising and repair; in a more preferred embodiment of the invention, 5 erosion operations, 5 dilation operations, 1 opening operation and 1 closing operation give a good repair result, with the concrete functions and parameters set as follows:
The erosion function is: cvErode(threshold, threshold, NULL, 5);
the dilation function is: cvDilate(threshold, threshold, NULL, 5);
the opening function is: cvMorphologyEx(threshold, threshold, 0, 0, CV_MOP_OPEN, 1);
the closing function is: cvMorphologyEx(threshold, threshold, 0, 0, CV_MOP_CLOSE, 1).
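The erosion/dilation/opening/closing sequence can be sketched in plain NumPy, with a 3×3 structuring element standing in for the OpenCV calls above (a NULL element in cvErode/cvDilate also defaults to a 3×3 kernel):

```python
import numpy as np

def _shifted_windows(binary):
    """The 9 one-pixel-shifted copies of a binary image (zero-padded)."""
    padded = np.pad(binary, 1, mode="constant")
    h, w = binary.shape
    return [padded[i:i + h, j:j + w] for i in range(3) for j in range(3)]

def _dilate(binary, iterations=1):
    out = binary.astype(bool)
    for _ in range(iterations):
        out = np.logical_or.reduce(_shifted_windows(out))
    return out

def _erode(binary, iterations=1):
    out = binary.astype(bool)
    for _ in range(iterations):
        out = np.logical_and.reduce(_shifted_windows(out))
    return out

def denoise_binary(binary):
    """5x erosion, 5x dilation, 1x opening, 1x closing, as in the embodiment."""
    out = _dilate(_erode(binary, 5), 5)   # 5 erosions then 5 dilations
    out = _dilate(_erode(out, 1), 1)      # opening
    out = _erode(_dilate(out, 1), 1)      # closing
    return out

img = np.zeros((20, 20), dtype=bool)
img[3:16, 3:16] = True    # pupil blob
img[18, 18] = True        # isolated noise speck
cleaned = denoise_binary(img)
```

The large pupil blob survives the sequence unchanged while the isolated speck is removed.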
Fourth step: extract the pupil edge information from the binarized eye image. In the present embodiment, the method of extracting the pupil edge information is:
if the pixel value of the central pixel is 255, the central pixel keeps the value 255 regardless of the values of its 8 neighboring pixels;
if the pixel value of the central pixel is 0 and the pixel values of all 8 neighboring pixels are 0, the central pixel is changed to 255;
in all remaining cases, the central pixel is changed to 0.
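A direct transcription of the three edge-extraction rules, assuming pixels outside the image border count as 0:

```python
import numpy as np

def extract_pupil_edge(binary):
    """Edge rule on a 0/255 image:
    - centre pixel 255 -> stays 255, regardless of its 8 neighbours;
    - centre pixel 0 with all 8 neighbours 0 -> set to 255 (interior erased);
    - otherwise (centre 0 with at least one 255 neighbour) -> 0 (edge)."""
    img = np.asarray(binary)
    padded = np.pad(img, 1, mode="constant", constant_values=0)
    out = np.full_like(img, 255)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            if img[i, j] == 0:
                # 3x3 neighbourhood; centre is 0, so sum > 0 means a 255 neighbour
                if padded[i:i + 3, j:j + 3].sum() > 0:
                    out[i, j] = 0
    return out

binary = np.full((9, 9), 255, dtype=np.int32)
binary[2:7, 2:7] = 0    # dark 5x5 pupil region
edge = extract_pupil_edge(binary)
```

Only the one-pixel boundary of the dark region remains 0; its interior and the background are set to 255.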
Fifth step: determine the pupil center and radius. In the present embodiment, as shown in Fig. 7, the least squares method determines the pupil center and radius. The basic idea of fitting the edge points by least squares is to minimize the sum of the squared distances from the sampled points to the fitted ellipse. Let the elliptic equation of the pupil be:
A·x² + B·x·y + C·y² + D·x + E·y + F = 0 (8)
with the constraint:
B² − 4·A·C < 0 (9)
By the principle of least squares, the objective function is
f(A, B, C, D, E) = Σ (A·xi² + B·xi·yi + C·yi² + D·xi + E·yi + F)²,
summed over the edge points (xi, yi). By the extreme value theorem, for f(A, B, C, D, E) to attain its minimum, its partial derivatives with respect to the coefficients must vanish. This yields a system of linear equations which, combined with the constraint, gives the values of the coefficients A, B, C, D, E, F. The center coordinates (xp, yp), semi-major axis a and semi-minor axis b of the pupil ellipse then follow from the coefficients; in particular the center is
xp = (B·E − 2·C·D) / (4·A·C − B²), yp = (B·D − 2·A·E) / (4·A·C − B²).
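A minimal sketch of the least-squares conic fit and the resulting pupil center. The normalization F = −1 used here is one common way to make the linear system solvable and is an assumption; the patent's exact constrained formulation is not reproduced, but the center follows from the stationary point of the fitted conic:

```python
import numpy as np

def fit_ellipse_center(xs, ys):
    """Least-squares fit of A*x^2 + B*x*y + C*y^2 + D*x + E*y + F = 0
    (normalised with F = -1), then the centre from the stationary point:
      xp = (B*E - 2*C*D) / (4*A*C - B^2)
      yp = (B*D - 2*A*E) / (4*A*C - B^2)."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    M = np.column_stack([xs * xs, xs * ys, ys * ys, xs, ys])
    A, B, C, D, E = np.linalg.lstsq(M, np.ones_like(xs), rcond=None)[0]
    den = 4.0 * A * C - B * B
    return (B * E - 2.0 * C * D) / den, (B * D - 2.0 * A * E) / den

# Edge points sampled from an ellipse centred at (3, -2), semi-axes 4 and 2.
t = np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False)
xp, yp = fit_ellipse_center(3 + 4 * np.cos(t), -2 + 2 * np.sin(t))
```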
The pupil edge extraction method of the invention and the least-squares ellipse fitting for determining the pupil center and radius locate the pupil accurately and quickly.
The pupil localization method of the invention, based on the improved maximum between-class variance method, obtains the optimal segmentation threshold quickly and accurately through the image preprocessing and the between-class variance computed only over the gray values present in the image, laying the foundation for intact extraction of the pupil edge information; contour extraction and least-squares ellipse fitting on that basis then locate the pupil accurately and improve the computation speed.
S3: establish the mapping relationship between the eye gaze direction and the screen. In the present embodiment, step S3 may be carried out before step S1. The concrete steps of establishing the mapping relationship between the eye gaze direction and the screen are:
S31: display M fixation targets on the screen in turn, M being a positive integer. In the present embodiment, 5 targets are preferably used, one at each of the four corners of the screen and one at its center; in other preferred embodiments of the invention, 9 targets are used, the screen being divided into a nine-square grid with each intersection of two lines serving as a target.
S32: while the eyes fixate each target, the camera detects the N corresponding points on the screen formed by the light emitted by the infrared source after reflection from the pupil center, N being a positive integer. In the present embodiment, 30-50 corresponding points are preferably used; in a more preferred embodiment of the invention, 50 corresponding points are used, improving the accuracy of the sampled points.
S33: analyze the N corresponding points detected for each target to obtain an equivalent corresponding point. In the present embodiment, the coordinates of the N corresponding points are averaged to obtain the equivalent corresponding point, and the pupil center positions corresponding to the N points are likewise averaged to obtain an equivalent pupil center, so that the movement of the pupil is reflected as movement of the equivalent corresponding point in screen coordinates.
S34: establish the correspondence between the equivalent pupil center and the fixation targets.
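A sketch of steps S33-S34, under the assumption that the saved calibration matrix is a least-squares affine map from equivalent pupil centers to screen coordinates (the patent stores a calibration matrix but does not spell out its form); the five-target layout follows the embodiment:

```python
import numpy as np

def equivalent_point(samples):
    """Average the N detected points for one fixation target into a single
    equivalent point (the same averaging is applied to the pupil centres)."""
    return np.asarray(samples, dtype=float).mean(axis=0)

def fit_gaze_mapping(pupil_pts, screen_pts):
    """Least-squares affine map from equivalent pupil centres to the M
    fixation targets; returns a 2x3 matrix T with screen ~= T @ [x, y, 1]."""
    P = np.column_stack([np.asarray(pupil_pts, dtype=float),
                         np.ones(len(pupil_pts))])
    T, *_ = np.linalg.lstsq(P, np.asarray(screen_pts, dtype=float), rcond=None)
    return T.T

# Five calibration targets (four corners and the centre of a 1024x768 screen),
# with pupil centres related to them by a made-up affine transform.
screen = np.array([[0, 0], [1023, 0], [0, 767], [1023, 767], [512, 384]], float)
pupil = (screen - [512, 384]) / 2000.0 + [0.1, -0.05]   # synthetic data
T = fit_gaze_mapping(pupil, screen)
gaze = T @ np.array([pupil[4][0], pupil[4][1], 1.0])
eq = equivalent_point([[0, 0], [2, 4]])
```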
After the mapping relationship between the eye gaze direction and the screen has been established, it is saved in a calibration matrix, providing real-time data for the subsequent gaze tracking.
S4: monitor the eye gaze direction in real time and, from the mapping relationship between the gaze direction and the screen, determine the fixation point of the gaze on the screen.
S5: the operating mode of the mouse is determined, where the operating modes include a mouse-move mode, a click mode and a double-click mode. In this embodiment, uniform motion of the pupil may be defined to correspond to movement of the mouse, two blinks separated by an interval of M1 to a single-click operation, and two blinks separated by an interval of M2 to a double-click operation; in a more preferred embodiment of the invention, M1 is 1 second and M2 is 2 seconds.
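The blink-interval gesture mapping of step S5 can be sketched as below. The tolerance `TOL` is an assumption added for illustration; the patent only fixes M1 = 1 s and M2 = 2 s.

```python
# Thresholds from the embodiment: M1 = 1 s, M2 = 2 s.  The tolerance TOL is
# an added assumption -- a real system needs some slack around each interval.
M1, M2, TOL = 1.0, 2.0, 0.25  # seconds

def classify_blink_pair(t_first, t_second):
    """Map the interval between two detected blinks to a mouse action."""
    dt = t_second - t_first
    if abs(dt - M1) <= TOL:
        return "click"         # two blinks about M1 apart: single click
    if abs(dt - M2) <= TOL:
        return "double_click"  # two blinks about M2 apart: double click
    return "move"              # no gesture: keep following the gaze point
```

Keeping M1 and M2 well separated (1 s versus 2 s) leaves room for timing noise in blink detection without the two gestures being confused.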
S6: the mouse is displayed on the screen at the fixation point of the gaze and operated according to the determined operating mode.
The present invention also provides an eye-controlled mouse system, which includes an infrared light source, a camera, a screen and a control unit; the control unit includes an eye-image acquisition module and a core processing module. The light emitted by the infrared light source illuminates the pupil centre; the camera detects the light reflected by the pupil centre to a spot on the screen and transfers the reflection information to the control unit; the control unit displays the reflection-spot information on the screen and establishes the mapping between the eye gaze direction and the screen. The eye-image acquisition module collects eye images; the core processing module is connected with the eye-image acquisition module, receives the eye images, determines the pupil centre, implements the operation of the eye-controlled mouse and displays it on the screen.
In this embodiment, the core processing module includes a pupil-centre determination module, a calibration module and a gaze-tracking module. The pupil-centre determination module is connected with the eye-image acquisition module and receives the eye images and determines the pupil centre; the calibration module establishes the mapping between the eye gaze direction and the screen; the gaze-tracking module is connected with the pupil-centre determination module and the calibration module respectively, and determines the fixation point of the eyes on the screen from the pupil centre determined by the pupil-centre determination module and the mapping between the eye gaze direction and the screen, simulating a mouse to implement the tracking operation.
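What the gaze-tracking module computes each frame can be sketched as follows. The per-axis linear `calib` dictionary is a hypothetical stand-in for the calibration matrix of step S3, and the clamping is an added assumption that keeps the simulated cursor inside the screen bounds.

```python
def gaze_to_cursor(pupil_xy, calib, screen_w, screen_h):
    """Map a pupil centre to a clamped cursor position on the screen.

    `calib` is a hypothetical per-axis linear gain/offset dictionary
    standing in for the calibration matrix of step S3.  Clamping keeps
    the simulated cursor on screen even when the mapped point falls
    outside it.
    """
    x = calib["ax"] * pupil_xy[0] + calib["bx"]
    y = calib["ay"] * pupil_xy[1] + calib["by"]
    return (min(max(int(round(x)), 0), screen_w - 1),
            min(max(int(round(y)), 0), screen_h - 1))
```

A real system would feed the returned coordinates to the operating system's cursor API on every frame of the tracking loop.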
In this embodiment, the core processing module is located in a computer, which also has a memory for storing the eye images collected by the eye-image acquisition module; through the core processing module, the user can retrieve the eye images held in the memory and view them at any time on an image display.
In a preferred embodiment of the present invention, the system also includes a data conversion module, connected with the eye-image acquisition module and the core processing module respectively, for converting the format of the eye-image data and transmitting the converted eye images to the core processing module. The data conversion module performs format conversion: when the data type of the eye-image data collected by the eye-image acquisition module differs from the data type that the core processing module can process, the module converts the data format of the collected images, improving the compatibility of the pupil-positioning system; the specific conversion method can follow the prior art.
In this embodiment, the eye-image acquisition module is an eye tracker or an optical sensor, preferably an eye tracker; using an eye tracker to collect the eye images ensures that the images are accurate, improves accuracy and facilitates timely processing.
In other preferred embodiments of the present invention, the system also includes a power supply and peripheral circuits for supplying electric energy to the infrared light source, the camera, the screen and the control unit, ensuring their normal operation.
In this embodiment, the eye-image acquisition module uses the MT9V011 chip from MICRON as the optical sensor; the image resolution is 640*480 and the frame rate of the captured eye images is 15 frames/second, which satisfies the usual requirements of human eye movement in both image fineness and capture frequency. The data conversion module uses the CP2102, which has a USB 2.0 full-speed interface, meets the transmission requirement of large data volumes and is compatible with the downstream processing chip. The core processing module uses the ARM11-series processing chip S3C6410 from Samsung, whose speed and performance meet the requirements of image and algorithm analysis. The power supply and peripheral circuits can use existing power-supply chips, provided the operating voltage supplied meets the requirements.
In the description of this specification, reference to the terms "an embodiment", "some embodiments", "an example", "a specific example" or "some examples" means that a specific feature, structure, material or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic references to the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described, those skilled in the art will understand that various changes, modifications, substitutions and variations can be made to these embodiments without departing from the principle and purpose of the present invention; the scope of the invention is defined by the claims and their equivalents.
Claims (7)
1. An eye-controlled mouse implementation method, characterised by comprising the steps of:
S1: acquiring an eye image;
S2: detecting and locating the pupil position in the eye image,
wherein the pupil is located by:
S21: pre-processing the eye image by increasing the overall brightness of the eye image and filtering the noise of the eye image, wherein in the pre-processing step the overall brightness of the eye image is increased by:
S211: converting the eye image into a corresponding histogram;
S212: obtaining the gray values of all pixels in the eye image from the histogram statistics;
S213: determining a reference for the pixel gray values, and performing light compensation on the gray values of the pixels of the eye image according to the reference;
S22: computing the maximum between-class variance of a subset of the pixels of the eye image;
specifically, according to the histogram of the eye image, the pixel gray values that do not occur in the eye image are counted and the maximum between-class variance of the remaining pixels is calculated, by:
S221: obtaining the histogram of the eye image;
S222: determining, from the histogram, the pixel gray values whose pixel count in the eye image is zero;
S223: determining the pixel gray values whose pixel count in the eye image is not zero, and calculating the maximum between-class variance of those pixel gray values;
S23: calculating, from the maximum between-class variance, the optimal threshold for binarising the eye image and binarising the eye image, then de-noising and repairing the binarised eye image, specifically by applying 5 erosion operations, 5 dilation operations, 1 opening operation and 1 closing operation, wherein the functions and parameters used are:
erosion function: cvErode(threshold, threshold, NULL, 5);
dilation function: cvDilate(threshold, threshold, NULL, 5);
opening function: cvMorphologyEx(threshold, threshold, 0, 0, CV_MOP_OPEN, 1);
closing function: cvMorphologyEx(threshold, threshold, 0, 0, CV_MOP_CLOSE, 1);
S24: extracting the pupil edge information;
S25: determining the pupil centre and radius;
S3: establishing a mapping between the eye gaze direction and the screen;
S4: monitoring the eye gaze direction in real time, and determining the fixation point of the gaze on the screen according to the mapping between the eye gaze direction and the screen;
S5: determining the operating mode of the mouse, the operating modes including a mouse-move mode, a click mode and a double-click mode;
S6: displaying the mouse on the screen at the fixation point of the gaze and operating it according to the operating mode.
2. The eye-controlled mouse implementation method of claim 1, characterised in that the reference for the pixel gray values is determined by taking the mean gray value of 5%-10% of all pixels as the reference.
3. The eye-controlled mouse implementation method of claim 1, characterised in that a segmentation threshold between the foreground and background of the eye image is chosen; when the between-class variance is at its maximum, the segmentation threshold is the optimal segmentation threshold.
4. The eye-controlled mouse implementation method of claim 1, characterised in that the pupil edge information is extracted by:
if the pixel value of the central pixel is 255, keeping the pixel value of the central pixel at 255 regardless of the values of its 8 neighbouring pixels;
if the pixel value of the central pixel is 0 and the pixel values of its 8 neighbouring pixels are all 0, changing the pixel value of the central pixel to 255;
in all other cases, changing the pixel value of the central pixel to 0.
5. The eye-controlled mouse implementation method of claim 1, characterised in that the mapping between the eye gaze direction and the screen is established by:
S31: displaying M fixation points on the screen in turn, M being a positive integer;
S32: while the eye gazes at each fixation point, detecting with a camera the N corresponding points on the screen produced by the light emitted by an infrared light source and reflected from the pupil centre, N being a positive integer;
S33: analysing the N corresponding points detected for each fixation point to obtain an equivalent corresponding point;
S34: establishing the correspondence between the pupil centre and the fixation points.
6. An eye-controlled mouse system, characterised by comprising an infrared light source, a camera, a screen and a control unit, the control unit including an eye-image acquisition module and a core processing module;
the infrared light source and the camera establish the mapping between the eye gaze direction and the screen according to the method of claim 5;
the eye-image acquisition module collects eye images;
the core processing module is connected with the eye-image acquisition module, receives the eye images, determines the pupil centre, implements the operation of the eye-controlled mouse according to the method of any one of claims 1-4 and displays it on the screen.
7. The eye-controlled mouse system of claim 6, characterised in that the core processing module includes a pupil-centre determination module, a calibration module and a gaze-tracking module; the pupil-centre determination module is connected with the eye-image acquisition module and receives the eye images and determines the pupil centre; the calibration module establishes the mapping between the eye gaze direction and the screen; the gaze-tracking module is connected with the pupil-centre determination module and the calibration module respectively, and determines the fixation point of the eyes on the screen from the pupil centre determined by the pupil-centre determination module and the mapping between the eye gaze direction and the screen, simulating a mouse to implement the tracking operation.
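Steps S221-S223 of claim 1 restrict Otsu's maximum between-class variance search to gray levels that actually occur in the image. A pure-Python sketch follows; note that histogram bins with zero count contribute nothing to the between-class variance, so skipping them shortens the scan without changing the result.

```python
def otsu_threshold(hist):
    """Otsu's maximum between-class variance over a 256-bin histogram,
    skipping gray levels whose pixel count is zero (steps S221-S223)."""
    total = sum(hist)
    sum_all = sum(g * h for g, h in enumerate(hist))
    w0 = sum0 = 0            # weight and gray-value sum of the lower class
    best_t, best_var = 0, -1.0
    for g, h in enumerate(hist):
        if h == 0:           # gray value absent from the image: skip (S222)
            continue
        w0 += h
        sum0 += g * h
        w1 = total - w0
        if w1 == 0:          # everything is in the lower class: done
            break
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2   # between-class variance at t = g
        if var > best_var:
            best_var, best_t = var, g
    return best_t
```

The morphological clean-up of step S23 has direct modern OpenCV equivalents: `cv2.erode`, `cv2.dilate`, and `cv2.morphologyEx` with `cv2.MORPH_OPEN` / `cv2.MORPH_CLOSE` correspond to the legacy `cvErode` / `cvDilate` / `cvMorphologyEx` calls recited in the claim.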
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310130392.9A CN103176607B (en) | 2013-04-16 | 2013-04-16 | A kind of eye-controlled mouse realization method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310130392.9A CN103176607B (en) | 2013-04-16 | 2013-04-16 | A kind of eye-controlled mouse realization method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103176607A CN103176607A (en) | 2013-06-26 |
CN103176607B true CN103176607B (en) | 2016-12-28 |
Family
ID=48636542
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310130392.9A Expired - Fee Related CN103176607B (en) | 2013-04-16 | 2013-04-16 | A kind of eye-controlled mouse realization method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103176607B (en) |
Families Citing this family (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103412643B (en) * | 2013-07-22 | 2017-07-25 | 深圳Tcl新技术有限公司 | Terminal and its method for remote control |
CN103336581A (en) * | 2013-07-30 | 2013-10-02 | 黄通兵 | Human eye movement characteristic design-based human-computer interaction method and system |
CN103425970A (en) * | 2013-08-29 | 2013-12-04 | 大连理工大学 | Human-computer interaction method based on head postures |
CN104680552B (en) * | 2013-11-29 | 2017-11-21 | 展讯通信(天津)有限公司 | A kind of tracking and device based on Face Detection |
CN104680122B (en) * | 2013-11-29 | 2019-03-19 | 展讯通信(天津)有限公司 | A kind of tracking and device based on Face Detection |
CN104238120A (en) * | 2013-12-04 | 2014-12-24 | 全蕊 | Smart glasses and control method |
CN104898823B (en) * | 2014-03-04 | 2018-01-23 | 中国电信股份有限公司 | The method and apparatus for controlling sighting target motion |
CN104007820B (en) * | 2014-05-26 | 2017-09-29 | 联想(北京)有限公司 | A kind of information processing method and electronic equipment |
CN105094604A (en) * | 2015-06-30 | 2015-11-25 | 联想(北京)有限公司 | Information processing method and electronic equipment |
CN105138965B (en) * | 2015-07-31 | 2018-06-19 | 东南大学 | A kind of near-to-eye sight tracing and its system |
CN105137601B (en) * | 2015-10-16 | 2017-11-14 | 上海斐讯数据通信技术有限公司 | A kind of intelligent glasses |
CN105425967B (en) * | 2015-12-16 | 2018-08-28 | 中国科学院西安光学精密机械研究所 | Sight tracking and human eye region-of-interest positioning system |
CN105676458A (en) * | 2016-04-12 | 2016-06-15 | 王鹏 | Wearable calculation device and control method thereof, and wearable equipment with wearable calculation device |
CN106020461A (en) * | 2016-05-13 | 2016-10-12 | 陈盛胜 | Video interaction method based on eyeball tracking technology |
CN107436675A (en) * | 2016-05-25 | 2017-12-05 | 深圳纬目信息技术有限公司 | A kind of visual interactive method, system and equipment |
CN106527705A (en) * | 2016-10-28 | 2017-03-22 | 努比亚技术有限公司 | Operation realization method and apparatus |
CN106502423B (en) * | 2016-11-21 | 2019-04-30 | 武汉理工大学 | Automation microoperation method based on human eye vision positioning |
CN106774862B (en) * | 2016-12-03 | 2020-07-31 | 学能通(山东)智能设备有限公司 | VR display method based on sight and VR equipment |
CN106774863B (en) * | 2016-12-03 | 2020-07-07 | 西安中科创星科技孵化器有限公司 | Method for realizing sight tracking based on pupil characteristics |
CN107067441B (en) * | 2017-04-01 | 2020-02-11 | 海信集团有限公司 | Camera calibration method and device |
CN107609516B (en) * | 2017-09-13 | 2019-10-08 | 重庆爱威视科技有限公司 | Adaptive eye movement method for tracing |
CN107831900B (en) * | 2017-11-22 | 2019-12-10 | 中国地质大学(武汉) | human-computer interaction method and system of eye-controlled mouse |
CN108282699A (en) * | 2018-01-24 | 2018-07-13 | 北京搜狐新媒体信息技术有限公司 | A kind of display interface processing method and processing device |
CN108427503B (en) * | 2018-03-26 | 2021-03-16 | 京东方科技集团股份有限公司 | Human eye tracking method and human eye tracking device |
CN109656373B (en) * | 2019-01-02 | 2020-11-10 | 京东方科技集团股份有限公司 | Fixation point positioning method and positioning device, display equipment and storage medium |
CN109960405A (en) * | 2019-02-22 | 2019-07-02 | 百度在线网络技术(北京)有限公司 | Mouse operation method, device and storage medium |
CN110780739B (en) * | 2019-10-18 | 2023-11-03 | 天津理工大学 | Eye control auxiliary input method based on gaze point estimation |
CN112183200B (en) * | 2020-08-25 | 2023-10-17 | 中电海康集团有限公司 | Eye movement tracking method and system based on video image |
CN114020155A (en) * | 2021-11-05 | 2022-02-08 | 沈阳飞机设计研究所扬州协同创新研究院有限公司 | High-precision sight line positioning method based on eye tracker |
CN113891002B (en) * | 2021-11-12 | 2022-10-21 | 维沃移动通信有限公司 | Shooting method and device |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101359365A (en) * | 2008-08-07 | 2009-02-04 | 电子科技大学中山学院 | Iris positioning method based on Maximum between-Cluster Variance and gray scale information |
CN102012742A (en) * | 2010-11-24 | 2011-04-13 | 广东威创视讯科技股份有限公司 | Method and device for correcting eye mouse |
CN102509095A (en) * | 2011-11-02 | 2012-06-20 | 青岛海信网络科技股份有限公司 | Number plate image preprocessing method |
CN102830797A (en) * | 2012-07-26 | 2012-12-19 | 深圳先进技术研究院 | Man-machine interaction method and system based on sight judgment |
CN102938058A (en) * | 2012-11-14 | 2013-02-20 | 南京航空航天大学 | Method and system for video driving intelligent perception and facing safe city |
Also Published As
Publication number | Publication date |
---|---|
CN103176607A (en) | 2013-06-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103176607B (en) | A kind of eye-controlled mouse realization method and system | |
CN103136512A (en) | Pupil positioning method and system | |
CN102830797B (en) | A kind of man-machine interaction method based on sight line judgement and system | |
CN102749991B (en) | A kind of contactless free space sight tracing being applicable to man-machine interaction | |
CN102520796B (en) | Sight tracking method based on stepwise regression analysis mapping model | |
CN107193383A (en) | A kind of two grades of Eye-controlling focus methods constrained based on facial orientation | |
WO2019237942A1 (en) | Line-of-sight tracking method and apparatus based on structured light, device, and storage medium | |
CN101344919B (en) | Sight tracing method and disabled assisting system using the same | |
CN113678435A (en) | Lightweight low-power consumption cross reality device with high temporal resolution | |
US11889209B2 (en) | Lightweight cross reality device with passive depth extraction | |
CN104133548A (en) | Method and device for determining viewpoint area and controlling screen luminance | |
CN102547123A (en) | Self-adapting sightline tracking system and method based on face recognition technology | |
CN104615978A (en) | Sight direction tracking method and device | |
CN106325510A (en) | Information processing method and electronic equipment | |
CN104410851A (en) | Device for observing 3D display and method for controlling active shutter glasses | |
DE102014019637A1 (en) | Display switching method, data processing method and electronic device | |
CN102789326B (en) | Non-contact man-machine interaction method based on electrostatic detection | |
CN106326880A (en) | Pupil center point positioning method | |
CN204288121U (en) | A kind of 3D gesture recognition controller | |
CN113160260B (en) | Head-eye double-channel intelligent man-machine interaction system and operation method | |
CN106325480A (en) | Line-of-sight tracing-based mouse control device and method | |
CN116449947B (en) | Automobile cabin domain gesture recognition system and method based on TOF camera | |
Zhang | 2D Computer Vision | |
CN111857338A (en) | Method suitable for using mobile application on large screen | |
TW201234284A (en) | Power-saving based on the human recognition device and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20161228; Termination date: 20180416 |