CN103927014A - Character input method and device - Google Patents

Character input method and device

Info

Publication number
CN103927014A
Authority
CN
China
Prior art keywords
coordinate
eye
interface
eye pattern
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410160803.3A
Other languages
Chinese (zh)
Inventor
倪伟俊
林凡
黄建青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GCI Science and Technology Co Ltd
Original Assignee
GCI Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GCI Science and Technology Co Ltd filed Critical GCI Science and Technology Co Ltd
Priority to CN201410160803.3A priority Critical patent/CN103927014A/en
Publication of CN103927014A publication Critical patent/CN103927014A/en
Pending legal-status Critical Current

Landscapes

  • Position Input By Displaying (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a character input method. The method comprises: step one, creating an interface view and dividing the interface view into M divided regions with known boundary values; step two, capturing an eye image of the user's left eye under infrared LED illumination; step three, capturing the left-eye image in real time while the user gazes at the interface view, to obtain the gazed scene coordinate value; step four, establishing the correspondence between the interface coordinate system and the scene image coordinate system; step five, obtaining the gazed interface coordinate value; step six, detecting the divided region of the interface view in which the interface coordinate value lies and displaying the corresponding input key on the screen; and step seven, receiving the button command sent by the user, judging whether the command is "confirm" or "cancel", and inputting the character if it is "confirm". With the character input method, character input can be completed through the gaze of the user's eyes alone; character input based on sight-line capture achieves high gaze accuracy, and operation is simpler and more convenient.

Description

Character input method and device
Technical field
The present invention relates to the technical field of image processing, and in particular to a character input method and device.
Background art
Human-computer interaction (HCI) refers to the process of information exchange between a person and a computer, carried out in a certain dialogue language and with a certain interaction mode to accomplish a given task. HCI based on sight-line capture is a natural and harmonious interaction mode. Existing sight-line capture techniques draw a keyboard on the computer screen; the sight-line capture system analyzes and feeds back the character the user is gazing at, the user confirms whether to input it, and human-computer interaction is thereby realized.
The patented technology "A character input device based on eye tracking and the P300 brain potential" held by Beijing Institute of Technology (application No. 200910080852.5, granted publication CN101515199) discloses a character input device based on eye tracking and the P300 brain potential. The device determines a candidate character set from the user's gaze, makes all character keys in the candidate set flicker to evoke the P300 potential, and determines the key the user intends from the potential's onset time and the gaze position, thereby realizing character input.
The academic dissertation "Research on visual trace technology and its application to character input in human-computer interaction" (Jiang Chunyan, Shanghai Jiao Tong University, 1 February 2008) uses a single-camera sight-line detection method for character input: a camera captures the center of the user's iris, the system judges the content the user's sight line is staring at, and then gives the corresponding feedback and operation.
The patent application "Sight-line tracking method and disabled-assisting system applying the method" filed by South China University of Technology (application No. 200810030010.4, publication No. CN101344919A) discloses a sight-line tracking method and a disabled-assisting system applying it. The system takes the user's fixation on four regions of the screen as four control signals in four different directions, simulating the four direction keys of a keyboard, to perform simple operations such as controlling a wheelchair or a computer. Its deficiency is that, because gaze accuracy is low, only four control signals can be input by sight, which limits the human-computer interaction functionality.
The patent application "A password input control method based on eye tracking" filed by Shandong University (application No. 201110067148.3, publication No. CN102129554A) discloses a password input control method based on eye tracking. The method first processes the facial image to extract human-eye feature parameters, then uses a two-light-source eye tracking method based on similar triangles to estimate the current gaze position from those parameters, and finally controls the password input operation according to the gaze position using a time threshold and audio feedback.
However, the patented technology "A character input device based on eye tracking and the P300 brain potential" has two deficiencies: first, gaze accuracy is limited; second, the character input and confirmation processes are complex.
"Research on visual trace technology and its application to character input in human-computer interaction" has four weaknesses: first, gaze accuracy is limited; second, head movement strongly affects accuracy; third, operation is complicated and easily causes visual fatigue; fourth, the human-computer interaction functionality is limited.
The deficiency of the "Sight-line tracking method and disabled-assisting system" application is its limited human-computer interaction functionality.
"A password input control method based on eye tracking" has two deficiencies: first, the gaze accuracy of the password input it realizes is limited; second, its human-computer interaction functionality is limited.
Summary of the invention
The embodiments of the present invention provide a character input method and device with which character input can be completed through the user's gaze and confirmation alone; character input based on sight-line capture achieves high gaze accuracy, tolerates a large range of head movement, and is simpler to operate.
An embodiment of the present invention provides a character input method, comprising:
S1, creating an interface view and dividing the interface view into M divided regions with known boundary values, so that each divided region corresponds to one input key; lighting three infrared LED lamps on different boundaries of the interface view as calibration marker points; wherein M >= 2;
S2, capturing an eye image of the user's left eye under the illumination of the infrared LEDs, and constructing a scene image coordinate system from the eye image;
S3, capturing the left-eye image in real time while the user gazes at the interface view, and obtaining from the gaze image the coordinate value of the user's gaze point in the scene image coordinate system, i.e. the gazed scene coordinate value;
S4, constructing an interface coordinate system from the interface view, and establishing the correspondence between the interface coordinate system and the scene image coordinate system;
S5, obtaining, from the gazed scene coordinate value and the correspondence, the coordinate value of the gaze point in the interface coordinate system, i.e. the gazed interface coordinate value;
S6, detecting the divided region of the interface view in which the gazed interface coordinate value lies, and displaying on the screen the input key corresponding to that divided region; the input keys include an exit key Esc;
S7, receiving the button command the user sends in response to the displayed input key, and judging whether the command is "confirm" or "cancel"; if "confirm" and the current input key is not the exit key Esc, inputting the character corresponding to the input key and returning to step S2; if "cancel", returning to step S2. An outline of this loop is sketched below.
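For orientation, the S1 to S7 loop can be expressed in outline as follows. This is a minimal sketch only: the objects, method names and the "confirm"/"cancel" strings are hypothetical placeholders, not part of the disclosed implementation.

```python
# Hypothetical outline of the S1-S7 loop; every object and method here is an
# illustrative placeholder, not part of the patent disclosure.
def character_input_loop(interface, tracker):
    while True:
        eye_image = tracker.capture_left_eye()                 # S2, S3
        scene_xy = tracker.gazed_scene_coordinate(eye_image)   # S3
        iface_xy = tracker.scene_to_interface(scene_xy)        # S4, S5
        key = interface.region_key(iface_xy)                   # S6
        interface.show_key(key)
        command = interface.wait_for_button()                  # S7
        if command == "confirm":
            if key == "Esc":
                return                                         # exit key ends input
            interface.input_character(key)
        # on "cancel", fall through and return to S2
```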
Correspondingly, an embodiment of the present invention also provides a character input device comprising a wearable device module, a control module, an eye image acquisition module, an eye image processing module, a coordinate processing module, a confirmation recognition module and an interface module.
The eye image acquisition module is configured to capture eye images of the user's left eye.
The eye image processing module is configured to construct a scene image coordinate system from the eye image; to obtain the coordinate value of the user's gaze point in the scene image coordinate system, i.e. the gazed scene coordinate value; and to establish the correspondence between the scene image coordinate system and the interface coordinate system constructed by the interface module.
The coordinate processing module is configured to obtain, from the gazed scene coordinate value and the correspondence, the coordinate value of the gaze point in the interface coordinate system, i.e. the gazed interface coordinate value.
The wearable device module is configured to forward the button command the user sends in response to the input key displayed by the interface module.
The confirmation recognition module is configured to receive the button command, judge whether it is "confirm" or "cancel", and output a confirmation signal if it is "confirm".
The interface module is configured to create an interface view, divide the interface view into M divided regions with known boundary values so that each divided region corresponds to one input key, and construct an interface coordinate system from the interface view, wherein M >= 2; to detect the divided region of the interface view in which the gazed interface coordinate value lies and display the input key corresponding to that region; and to receive the confirmation signal output by the confirmation recognition module, judge whether the character of the input key is the exit key Esc, and input the corresponding character if it is not.
The control module is configured to control the wearable device module, the eye image acquisition module and the eye image processing module.
Implementing the embodiments of the present invention has the following beneficial effects:
The character input method and device provided by the embodiments of the present invention divide the interface view into regions that each carry an input key, so the key the user gazes at can be located in a single step, which gives good practicality. The threshold-adaptive secondary extension star-ray method is used to process the eye image and locate the pupil center accurately, improving gaze accuracy. The transformation between the scene image coordinate system and the interface coordinate system maps the user's gaze point from scene coordinates to interface coordinates, thereby determining the key the user gazes at and further improving gaze accuracy. The confirmation recognition module recognizes the user's confirmation, giving good operability while raising the character input rate. Finally, the calibration scene coordinate values and the pupil center coordinate values are obtained from the same eye image, which reduces the number of image capture devices and the complexity of the device.
Brief description of the drawings
Fig. 1 is a schematic flowchart of an embodiment of the character input method provided by the present invention;
Fig. 2 is a schematic structural diagram of the interface view in the embodiment shown in Fig. 1;
Fig. 3 is a schematic structural diagram of an embodiment of the character input device provided by the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only part of the embodiments of the present invention rather than all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative work fall within the protection scope of the present invention.
Referring to Fig. 1, a schematic flowchart of an embodiment of the character input method provided by the present invention, the method comprises:
S1, creating an interface view and dividing the interface view into M divided regions with known boundary values, so that each divided region corresponds to one input key; lighting three infrared LED lamps on different boundaries of the interface view as calibration marker points; wherein M >= 2. It should be noted that, because the three calibration marker points lie on different boundaries, they can be used to determine the length and width of the interface view.
S2, capturing an eye image of the user's left eye under infrared LED illumination, and constructing a scene image coordinate system from the eye image: the point at the top-left corner of the eye image is the origin, the pixel column number is the X-axis coordinate value, and the pixel row number is the Y-axis coordinate value. In a preferred embodiment, the frame rate of the eye camera that captures the eye image is 25 frames/s and the resolution of the eye image is 640x480.
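As an illustration of this coordinate convention, the following sketch grabs one frame from an eye camera with OpenCV under the stated parameters. The device index and the use of OpenCV are assumptions; OpenCV's image indexing happens to match the scene image coordinate system defined above (origin at the top-left pixel, X along columns, Y along rows).

```python
import cv2

# Minimal capture sketch under the stated parameters (25 frames/s, 640x480).
# Device index 0 is an assumption about the attached eye camera.
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
cap.set(cv2.CAP_PROP_FPS, 25)

ok, eye_image = cap.read()
if ok:
    # Grayscale eye image; eye_gray[y, x] is the pixel at column x, row y,
    # i.e. the same convention as the scene image coordinate system.
    eye_gray = cv2.cvtColor(eye_image, cv2.COLOR_BGR2GRAY)
cap.release()
```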
S3, capturing the left-eye image in real time while the user gazes at the interface view, and obtaining from the gaze image the coordinate value of the user's gaze point in the scene image coordinate system, i.e. the gazed scene coordinate value.
S4, constructing an interface coordinate system from the interface view, and establishing the correspondence between the interface coordinate system and the scene image coordinate system.
S5, obtaining, from the gazed scene coordinate value and the correspondence, the coordinate value of the gaze point in the interface coordinate system, i.e. the gazed interface coordinate value.
S6, detecting the divided region of the interface view in which the gazed interface coordinate value lies, and displaying on the screen the input key corresponding to that divided region; the input keys include an exit key Esc.
S7, receiving the button command the user sends in response to the displayed input key, and judging whether the command is "confirm" or "cancel"; if "confirm" and the current input key is not the exit key Esc, inputting the character corresponding to the input key and returning to step S2; if "cancel", returning to step S2.
As shown in Fig. 2, the input keys corresponding to the divided regions include at least two of: English letter keys, number keys, punctuation keys, operator keys and common function keys. When the user gazes at an English letter key, number key, punctuation key or operator key, the character corresponding to the gazed key is input on the screen; when the user gazes at a common function key, the system executes the function corresponding to that key.
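A minimal sketch of the S6 region lookup follows. The 2x2 layout, the 640x480 interface size and the key labels are illustrative assumptions (the patent only requires M >= 2 regions with known boundary values); the actual layout of Fig. 2 is not reproduced.

```python
# Hypothetical lookup from a gaze point in interface coordinates to the input
# key of the divided region that contains it. Boundaries and labels are
# illustrative only.
REGIONS = [
    # (x_min, y_min, x_max, y_max, key)
    (0,   0,   320, 240, "A"),
    (320, 0,   640, 240, "1"),
    (0,   240, 320, 480, "."),
    (320, 240, 640, 480, "Esc"),
]

def region_key(x, y):
    for x0, y0, x1, y1, key in REGIONS:
        if x0 <= x < x1 and y0 <= y < y1:
            return key
    return None  # gaze point falls outside the interface view
```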
Further, step S3 specifically comprises:
S31, capturing an eye image of the left eye while the user first looks straight ahead, and obtaining, by the threshold-adaptive secondary extension star-ray method, the pupil center coordinate value of this first straight-gaze eye image, i.e. the initial pupil center coordinate value; then performing step S32;
S32, capturing the user's left-eye image in real time, and judging whether to process the real-time eye image; if not, performing steps S33 and S34; if so, performing steps S35 to S39;
S33, capturing left-eye images while the user gazes at each calibration marker point in turn, and obtaining, by the threshold-adaptive secondary extension star-ray method, the pupil center coordinate values during these gazes, i.e. the calibration pupil center coordinate values; subtracting the initial pupil center coordinate value from each calibration pupil center coordinate value to obtain the "pupil center - initial pupil center" vector values;
S34, performing adaptive-threshold binarization segmentation on the eye images captured while gazing at the calibration marker points, and obtaining the coordinate values, in the scene image coordinate system, of the reflection spots of the calibration marker points in the eye image, i.e. the calibration scene coordinate values; then performing steps S37 to S39;
S35, obtaining, by the threshold-adaptive secondary extension star-ray method, the pupil center coordinate value of the real-time eye image, i.e. the real-time pupil center coordinate value; subtracting the initial pupil center coordinate value from the real-time pupil center coordinate value to obtain the "pupil center - initial pupil center" vector value;
S36, performing adaptive-threshold binarization segmentation on the real-time eye image, and obtaining the coordinate values, in the scene image coordinate system, of the reflection spots of the calibration marker points in the real-time eye image, i.e. the calibration scene coordinate values (a sketch of this glint extraction is given after step S39);
S37, substituting the calibration scene coordinate values and the corresponding "pupil center - initial pupil center" vector values into the two-dimensional calibration equations, and solving for the calibration coefficients of the two-dimensional calibration equations;
S38, capturing the left-eye image in real time while the user gazes at the interface view, and obtaining, by the threshold-adaptive secondary extension star-ray method, the pupil center coordinate value during the gaze, i.e. the gaze pupil center coordinate value; subtracting the initial pupil center coordinate value from the gaze pupil center coordinate value to obtain the "gaze pupil center - initial pupil center" vector value;
S39, substituting the "gaze pupil center - initial pupil center" vector value and the calibration coefficients into the two-dimensional calibration equations, to obtain the coordinate value of the gaze point in the scene image coordinate system while gazing at the interface view, i.e. the gazed scene coordinate value.
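Steps S33 to S36 pair each "pupil center - initial pupil center" vector with the glint (reflection spot) coordinates extracted from the same eye image. The sketch below is one plausible reading of that extraction: Otsu's method stands in for the unspecified adaptive threshold, and the largest bright blobs are taken as the reflections of the three infrared LEDs.

```python
import cv2

def glint_coordinates(eye_gray, num_leds=3):
    # Adaptive-threshold binarization (Otsu as a stand-in) keeps the bright
    # corneal reflections of the infrared LEDs (steps S34/S36).
    _, bright = cv2.threshold(eye_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(bright)
    # Label 0 is the background; keep the num_leds largest bright blobs and
    # return their centroids as the calibration scene coordinate values.
    blobs = sorted(range(1, n), key=lambda i: stats[i, cv2.CC_STAT_AREA],
                   reverse=True)[:num_leds]
    return [tuple(centroids[i]) for i in blobs]

def pupil_offset(pupil_xy, initial_pupil_xy):
    # "pupil center - initial pupil center" vector (steps S33/S35/S38).
    return (pupil_xy[0] - initial_pupil_xy[0],
            pupil_xy[1] - initial_pupil_xy[1])
```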
Further, step S4 specifically comprises:
S41, constructing the interface coordinate system with the point at the top-left corner of the interface view as the origin, the pixel column number in the interface view as the X-axis coordinate value and the pixel row number as the Y-axis coordinate value;
S42, obtaining the coordinate values of the calibration marker points in the interface coordinate system, i.e. the calibration interface coordinate values;
S43, substituting the calibration scene coordinate values and the calibration interface coordinate values into the tracking equation, and solving for the coordinate transformation matrix of the tracking equation; the coordinate transformation matrix is the correspondence between the interface coordinate system and the scene image coordinate system.
Specifically, the threshold-adaptive secondary extension star-ray method comprises the following steps (a condensed code sketch is given after step S57):
S51, performing Gaussian filtering preprocessing on the left-eye image to obtain a filtered eye image;
S52, obtaining the optimal gray threshold of the filtered eye image from its gray-level histogram, performing binarization segmentation on the filtered eye image with the optimal gray threshold, and obtaining the centroid of the segmented filtered eye image; the centroid is the first coarse pupil center;
S53, performing a second coarse pupil localization on the left-eye image with the first coarse pupil center as the starting point, to obtain the second coarse pupil center;
S54, obtaining pupil boundary feature points in the left-eye image by the secondary extension star-ray method, screening the feature points according to the coordinate value of the second coarse pupil center, dividing the screened feature points into 6 different regions, and choosing one feature point at random in each region;
S55, fitting an ellipse to the chosen feature points;
S56, computing the Euclidean distance from every pupil boundary feature point to the closest point on the ellipse, and counting the feature points whose Euclidean distance is less than n pixels, where 1 <= n <= 5;
S57, computing the ratio of the counted number to the total number of pupil boundary feature points; if the ratio is greater than mu, where 0.5 <= mu <= 0.9, the ellipse fit succeeds and the coordinate value of the center point of the ellipse is the pupil center coordinate value of the eye image; otherwise, changing the gradient threshold in the secondary extension star-ray method and returning to step S54.
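The text leaves parts of the star-ray casting unspecified, so the condensed Python sketch below fills them with common choices: rays are cast at uniform angles from the coarse pupil center, a boundary point is declared where the dark-to-bright gray-level jump along a ray exceeds the gradient threshold, and the per-region random sampling of S54 is simplified to fitting all boundary points at once. The acceptance test follows S56 and S57.

```python
import cv2
import numpy as np

def pupil_center(eye_gray, n_rays=36, grad_thresh=20.0, n_px=3,
                 mu=0.7, max_iters=5, max_radius=200):
    blur = cv2.GaussianBlur(eye_gray, (5, 5), 0)                       # S51
    _, dark = cv2.threshold(blur, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)   # S52
    m = cv2.moments(dark)
    if m["m00"] == 0:
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]   # coarse pupil center
    h, w = blur.shape
    for _ in range(max_iters):                                         # S54-S57
        pts = []
        for ang in np.linspace(0.0, 2 * np.pi, n_rays, endpoint=False):
            dx, dy = np.cos(ang), np.sin(ang)
            prev = float(blur[int(cy), int(cx)])
            for r in range(1, max_radius):             # march along the ray
                x, y = int(cx + r * dx), int(cy + r * dy)
                if not (0 <= x < w and 0 <= y < h):
                    break
                cur = float(blur[y, x])
                if cur - prev > grad_thresh:            # pupil boundary point
                    pts.append((x, y))
                    break
                prev = cur
        if len(pts) >= 6:
            (ex, ey), (ew, eh), angle = cv2.fitEllipse(
                np.array(pts, dtype=np.float32))                       # S55
            poly = cv2.ellipse2Poly((int(ex), int(ey)),
                                    (int(ew / 2), int(eh / 2)),
                                    int(angle), 0, 360, 5)
            near = sum(abs(cv2.pointPolygonTest(
                poly, (float(x), float(y)), True)) < n_px
                       for x, y in pts)                                # S56
            if near / len(pts) > mu:                                   # S57
                return ex, ey
        grad_thresh *= 0.8        # adapt the gradient threshold and retry
    return None
```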
Preferably, the two-dimensional calibration equations are
x_s = a_0 + a_1 x_e + a_2 y_e + a_3 x_e y_e + a_4 x_e^2 + a_5 y_e^2
y_s = b_0 + b_1 x_e + b_2 y_e + b_3 x_e y_e + b_4 x_e^2 + b_5 y_e^2
where x_s and y_s denote the X-axis and Y-axis coordinate values of the calibration marker point in the scene image coordinate system, a_0, a_1, a_2, a_3, a_4, a_5 and b_0, b_1, b_2, b_3, b_4, b_5 denote the calibration coefficients, and x_e and y_e denote the X-axis and Y-axis coordinate values of the "pupil center - initial pupil center" vector.
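A least-squares sketch of how the calibration coefficients could be fitted (step S37) and applied (step S39); the helper names are assumptions. Note that the six coefficients per axis require at least six independent samples, so several gaze samples per calibration marker would be collected in practice.

```python
import numpy as np

def fit_calibration(offsets, scene_points):
    # Each row pairs a "pupil center - initial pupil center" vector (x_e, y_e)
    # with the calibration scene coordinate (x_s, y_s) from the same image.
    xe, ye = np.asarray(offsets, float).T
    A = np.column_stack([np.ones_like(xe), xe, ye, xe * ye, xe**2, ye**2])
    xs, ys = np.asarray(scene_points, float).T
    a, *_ = np.linalg.lstsq(A, xs, rcond=None)   # a_0 .. a_5
    b, *_ = np.linalg.lstsq(A, ys, rcond=None)   # b_0 .. b_5
    return a, b

def gaze_scene_point(offset, a, b):
    # Step S39: map a gaze pupil-offset vector to the gazed scene coordinate.
    xe, ye = offset
    row = np.array([1.0, xe, ye, xe * ye, xe**2, ye**2])
    return row @ a, row @ b
```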
Preferably, the tracking equation is
X_c = (x_c, y_c, 1)^T
X_s = (x_s, y_s, z_s)^T
X_c = H X_s
where c denotes the interface coordinate system and s denotes the scene image coordinate system; x_c and y_c denote the X-axis and Y-axis coordinate values of the calibration marker point in the interface coordinate system; x_s, y_s and z_s denote the X-axis, Y-axis and Z-axis coordinate values of the calibration marker point in the scene image coordinate system, with z_s = x_s + y_s - 1; T denotes transposition; X_c and X_s denote the coordinate vectors of the calibration marker point in the interface coordinate system and the scene image coordinate system respectively; and H denotes the coordinate transformation matrix.
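Under these definitions, step S43 reduces to one linear solve: with z_s = x_s + y_s - 1, each of the three calibration markers contributes three linear equations, which determine the nine entries of H exactly when the markers are in general position (nonsingular system). The sketch below is an illustrative reading, not the patented implementation.

```python
import numpy as np

def solve_tracking_matrix(scene_pts, interface_pts):
    # Build nine equations H @ X_s = X_c, one triple per calibration marker,
    # with the unknowns h = vec(H) laid out row by row.
    A, rhs = [], []
    for (xs, ys), (xc, yc) in zip(scene_pts, interface_pts):
        Xs = np.array([xs, ys, xs + ys - 1.0])   # z_s = x_s + y_s - 1
        for row, target in enumerate([xc, yc, 1.0]):
            eq = np.zeros(9)
            eq[3 * row:3 * row + 3] = Xs
            A.append(eq)
            rhs.append(target)
    h = np.linalg.solve(np.array(A), np.array(rhs))
    return h.reshape(3, 3)

def scene_to_interface(H, xs, ys):
    # Step S5: map a gazed scene coordinate to the gazed interface coordinate.
    Xc = H @ np.array([xs, ys, xs + ys - 1.0])
    return Xc[0], Xc[1]
```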
Correspondingly, referring to Fig. 3, an embodiment of the present invention also provides a character input device comprising a control module 301, a wearable device module 302, an eye image acquisition module 303, an eye image processing module 304, a coordinate processing module 305, a confirmation recognition module 306 and an interface module 307.
The eye image acquisition module 303 is configured to capture eye images of the user's left eye. In a preferred embodiment, the eye camera captures eye images at 25 frames/s and the resolution of the eye image is 640x480.
The eye image processing module 304 is configured to construct a scene image coordinate system from the eye image; to obtain the coordinate value of the user's gaze point in the scene image coordinate system, i.e. the gazed scene coordinate value; and to establish the correspondence between the scene image coordinate system and the interface coordinate system constructed by the interface module.
The coordinate processing module 305 is configured to obtain, from the gazed scene coordinate value and the correspondence, the coordinate value of the gaze point in the interface coordinate system, i.e. the gazed interface coordinate value.
The wearable device module 302 is configured to forward the button command the user sends in response to the input key displayed by the interface module 307.
The confirmation recognition module 306 is configured to receive the button command, judge whether it is "confirm" or "cancel", and output a confirmation signal if it is "confirm".
The interface module 307 is configured to create an interface view, divide the interface view into M divided regions with known boundary values so that each divided region corresponds to one input key, and construct an interface coordinate system from the interface view, wherein M >= 2; to detect the divided region of the interface view in which the gazed interface coordinate value lies and display the input key corresponding to that region; and to receive the confirmation signal output by the confirmation recognition module 306, judge whether the character of the input key is the exit key Esc, and input the corresponding character if it is not. The interface module 307 is placed 50-70 cm directly in front of the user, so that the user can see it clearly.
The control module 301 is configured to control the wearable device module 302, the eye image acquisition module 303 and the eye image processing module 304.
Further, the eye image processing module 304 comprises a sight-line processing unit, a calibration unit and an interface image processing unit.
The sight-line processing unit is configured to construct the scene image coordinate system from the eye image and, by the threshold-adaptive secondary extension star-ray method, obtain the "gaze pupil center - initial pupil center" vector value.
The calibration unit is configured to obtain the calibration coefficients of the two-dimensional calibration equations and, from the "gaze pupil center - initial pupil center" vector value and the calibration coefficients, obtain the gazed scene coordinate value.
The interface image processing unit is configured to establish, from the scene image coordinate system and the interface coordinate system, the correspondence between the interface coordinate system and the scene image coordinate system.
Specifically, the wearable device module 302 comprises a helmet, an aluminium bracket and a confirmation button, and the eye image acquisition module 303 comprises an eye camera.
The helmet is worn on the user's head, the aluminium bracket is fixed directly in front of the helmet, the eye camera is mounted on one side of the aluminium bracket, and the confirmation button is placed in the user's hand.
The character input method and device provided by the embodiments of the present invention divide the interface view into regions that each carry an input key, so the key the user gazes at can be located in a single step, which gives good practicality. The threshold-adaptive secondary extension star-ray method is used to process the eye image and locate the pupil center accurately, improving gaze accuracy. The transformation between the scene image coordinate system and the interface coordinate system maps the user's gaze point from scene coordinates to interface coordinates, thereby determining the key the user gazes at and further improving gaze accuracy. The confirmation recognition module recognizes the user's confirmation, giving good operability while raising the character input rate. Finally, the calibration scene coordinate values and the pupil center coordinate values are obtained from the same eye image, which reduces the number of image capture devices and the complexity of the device.
The above is a preferred embodiment of the present invention. It should be pointed out that those skilled in the art can make several improvements and modifications without departing from the principles of the present invention, and such improvements and modifications are also regarded as falling within the protection scope of the present invention.

Claims (9)

1. A character input method, characterized by comprising:
S1, creating an interface view and dividing the interface view into M divided regions with known boundary values, so that each divided region corresponds to one input key; lighting three infrared LED lamps on different boundaries of the interface view as calibration marker points; wherein M >= 2;
S2, capturing an eye image of the user's left eye under the illumination of the infrared LEDs, and constructing a scene image coordinate system from the eye image;
S3, capturing the left-eye image in real time while the user gazes at the interface view, and obtaining from the gaze image the coordinate value of the user's gaze point in the scene image coordinate system, i.e. the gazed scene coordinate value;
S4, constructing an interface coordinate system from the interface view, and establishing the correspondence between the interface coordinate system and the scene image coordinate system;
S5, obtaining, from the gazed scene coordinate value and the correspondence, the coordinate value of the gaze point in the interface coordinate system, i.e. the gazed interface coordinate value;
S6, detecting the divided region of the interface view in which the gazed interface coordinate value lies, and displaying on the screen the input key corresponding to that divided region; the input keys include an exit key Esc;
S7, receiving the button command the user sends in response to the displayed input key, and judging whether the command is "confirm" or "cancel"; if "confirm" and the current input key is not the exit key Esc, inputting the character corresponding to the input key and returning to step S2; if "cancel", returning to step S2.
2. The character input method as claimed in claim 1, characterized in that step S3 specifically comprises:
S31, capturing an eye image of the left eye while the user first looks straight ahead, and obtaining, by the threshold-adaptive secondary extension star-ray method, the pupil center coordinate value of this first straight-gaze eye image, i.e. the initial pupil center coordinate value; then performing step S32;
S32, capturing the user's left-eye image in real time, and judging whether to process the real-time eye image; if not, performing steps S33 and S34; if so, performing steps S35 to S39;
S33, capturing left-eye images while the user gazes at each calibration marker point in turn, and obtaining, by the threshold-adaptive secondary extension star-ray method, the pupil center coordinate values during these gazes, i.e. the calibration pupil center coordinate values; subtracting the initial pupil center coordinate value from each calibration pupil center coordinate value to obtain the "pupil center - initial pupil center" vector values;
S34, performing adaptive-threshold binarization segmentation on the eye images captured while gazing at the calibration marker points, and obtaining the coordinate values, in the scene image coordinate system, of the reflection spots of the calibration marker points in the eye image, i.e. the calibration scene coordinate values; then performing steps S37 to S39;
S35, obtaining, by the threshold-adaptive secondary extension star-ray method, the pupil center coordinate value of the real-time eye image, i.e. the real-time pupil center coordinate value; subtracting the initial pupil center coordinate value from the real-time pupil center coordinate value to obtain the "pupil center - initial pupil center" vector value;
S36, performing adaptive-threshold binarization segmentation on the real-time eye image, and obtaining the coordinate values, in the scene image coordinate system, of the reflection spots of the calibration marker points in the real-time eye image, i.e. the calibration scene coordinate values;
S37, substituting the calibration scene coordinate values and the corresponding "pupil center - initial pupil center" vector values into the two-dimensional calibration equations, and solving for the calibration coefficients of the two-dimensional calibration equations;
S38, capturing the left-eye image in real time while the user gazes at the interface view, and obtaining, by the threshold-adaptive secondary extension star-ray method, the pupil center coordinate value during the gaze, i.e. the gaze pupil center coordinate value; subtracting the initial pupil center coordinate value from the gaze pupil center coordinate value to obtain the "gaze pupil center - initial pupil center" vector value;
S39, substituting the "gaze pupil center - initial pupil center" vector value and the calibration coefficients into the two-dimensional calibration equations, to obtain the coordinate value of the gaze point in the scene image coordinate system while gazing at the interface view, i.e. the gazed scene coordinate value.
3. The character input method as claimed in claim 1, characterized in that step S4 specifically comprises:
S41, constructing an interface coordinate system with the point at the top-left corner of the interface view as the origin;
S42, obtaining the coordinate values of the calibration marker points in the interface coordinate system, i.e. the calibration interface coordinate values;
S43, substituting the calibration scene coordinate values and the calibration interface coordinate values into the tracking equation, and solving for the coordinate transformation matrix of the tracking equation; the coordinate transformation matrix is the correspondence between the interface coordinate system and the scene image coordinate system.
4. The character input method as claimed in claim 2, characterized in that the threshold-adaptive secondary extension star-ray method specifically comprises:
S51, performing Gaussian filtering preprocessing on the left-eye image to obtain a filtered eye image;
S52, obtaining the optimal gray threshold of the filtered eye image from its gray-level histogram, performing binarization segmentation on the filtered eye image with the optimal gray threshold, and obtaining the centroid of the segmented filtered eye image; the centroid is the first coarse pupil center;
S53, performing a second coarse pupil localization on the left-eye image with the first coarse pupil center as the starting point, to obtain the second coarse pupil center;
S54, obtaining pupil boundary feature points in the left-eye image by the secondary extension star-ray method, screening the feature points according to the coordinate value of the second coarse pupil center, dividing the screened feature points into 6 different regions, and choosing one feature point at random in each region;
S55, fitting an ellipse to the chosen feature points;
S56, computing the Euclidean distance from every pupil boundary feature point to the closest point on the ellipse, and counting the feature points whose Euclidean distance is less than n pixels, where 1 <= n <= 5;
S57, computing the ratio of the counted number to the total number of pupil boundary feature points; if the ratio is greater than mu, where 0.5 <= mu <= 0.9, the ellipse fit succeeds and the coordinate value of the center point of the ellipse is the pupil center coordinate value of the eye image; otherwise, changing the gradient threshold in the secondary extension star-ray method and returning to step S54.
5. The character input method as claimed in claim 2, characterized in that the two-dimensional calibration equations are
x_s = a_0 + a_1 x_e + a_2 y_e + a_3 x_e y_e + a_4 x_e^2 + a_5 y_e^2
y_s = b_0 + b_1 x_e + b_2 y_e + b_3 x_e y_e + b_4 x_e^2 + b_5 y_e^2
where x_s and y_s denote the X-axis and Y-axis coordinate values of the calibration marker point in the scene image coordinate system, a_0, a_1, a_2, a_3, a_4, a_5 and b_0, b_1, b_2, b_3, b_4, b_5 denote the calibration coefficients, and x_e and y_e denote the X-axis and Y-axis coordinate values of the "pupil center - initial pupil center" vector.
6. The character input method as claimed in claim 3, characterized in that the tracking equation is
X_c = (x_c, y_c, 1)^T
X_s = (x_s, y_s, z_s)^T
X_c = H X_s
where c denotes the interface coordinate system and s denotes the scene image coordinate system; x_c and y_c denote the X-axis and Y-axis coordinate values of the calibration marker point in the interface coordinate system; x_s, y_s and z_s denote the X-axis, Y-axis and Z-axis coordinate values of the calibration marker point in the scene image coordinate system, with z_s = x_s + y_s - 1; T denotes transposition; X_c and X_s denote the coordinate vectors of the calibration marker point in the interface coordinate system and the scene image coordinate system respectively; and H denotes the coordinate transformation matrix.
7. A character input device, characterized by comprising a wearable device module, a control module, an eye image acquisition module, an eye image processing module, a coordinate processing module, a confirmation recognition module and an interface module;
the eye image acquisition module is configured to capture eye images of the user's left eye;
the eye image processing module is configured to construct a scene image coordinate system from the eye image; to obtain the coordinate value of the user's gaze point in the scene image coordinate system, i.e. the gazed scene coordinate value; and to establish the correspondence between the scene image coordinate system and the interface coordinate system constructed by the interface module;
the coordinate processing module is configured to obtain, from the gazed scene coordinate value and the correspondence, the coordinate value of the gaze point in the interface coordinate system, i.e. the gazed interface coordinate value;
the wearable device module is configured to forward the button command the user sends in response to the input key displayed by the interface module;
the confirmation recognition module is configured to receive the button command, judge whether it is "confirm" or "cancel", and output a confirmation signal if it is "confirm";
the interface module is configured to create an interface view, divide the interface view into M divided regions with known boundary values so that each divided region corresponds to one input key, and construct an interface coordinate system from the interface view, wherein M >= 2; to detect the divided region of the interface view in which the gazed interface coordinate value lies and display the input key corresponding to that region; and to receive the confirmation signal output by the confirmation recognition module, judge whether the character of the input key is the exit key Esc, and input the corresponding character if it is not;
the control module is configured to control the wearable device module, the eye image acquisition module and the eye image processing module.
8. The character input device as claimed in claim 7, characterized in that the eye image processing module comprises a sight-line processing unit, a calibration unit and an interface image processing unit;
the sight-line processing unit is configured to construct the scene image coordinate system from the eye image and, by the threshold-adaptive secondary extension star-ray method, obtain the "gaze pupil center - initial pupil center" vector value;
the calibration unit is configured to obtain the calibration coefficients of the two-dimensional calibration equations and, from the "gaze pupil center - initial pupil center" vector value and the calibration coefficients, obtain the gazed scene coordinate value;
the interface image processing unit is configured to establish, from the scene image coordinate system and the interface coordinate system, the correspondence between the interface coordinate system and the scene image coordinate system.
9. The character input device as claimed in claim 8, characterized in that the wearable device module comprises a helmet, an aluminium bracket and a confirmation button, and the eye image acquisition module comprises an eye camera;
the helmet is worn on the user's head, the aluminium bracket is fixed directly in front of the helmet, the eye camera is mounted on one side of the aluminium bracket, and the confirmation button is placed in the user's hand.
CN201410160803.3A 2014-04-21 2014-04-21 Character input method and device Pending CN103927014A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410160803.3A CN103927014A (en) 2014-04-21 2014-04-21 Character input method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410160803.3A CN103927014A (en) 2014-04-21 2014-04-21 Character input method and device

Publications (1)

Publication Number Publication Date
CN103927014A true CN103927014A (en) 2014-07-16

Family

ID=51145267

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410160803.3A Pending CN103927014A (en) 2014-04-21 2014-04-21 Character input method and device

Country Status (1)

Country Link
CN (1) CN103927014A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030123027A1 (en) * 2001-12-28 2003-07-03 International Business Machines Corporation System and method for eye gaze tracking using corneal image mapping
CN101344919A (en) * 2008-08-05 2009-01-14 华南理工大学 Sight tracing method and disabled assisting system using the same
CN102129554A (en) * 2011-03-18 2011-07-20 山东大学 Method for controlling password input based on eye-gaze tracking
CN102231093A (en) * 2011-06-14 2011-11-02 伍斌 Screen locating control method and device
CN103076876A (en) * 2012-11-22 2013-05-01 西安电子科技大学 Character input device and method based on eye-gaze tracking and speech recognition

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104156643B (en) * 2014-07-25 2017-02-22 中山大学 Eye sight-based password inputting method and hardware device thereof
CN106708251A (en) * 2015-08-12 2017-05-24 天津电眼科技有限公司 Eyeball tracking technology-based intelligent glasses control method
CN105892631A (en) * 2015-11-16 2016-08-24 乐视致新电子科技(天津)有限公司 Method and device for simplifying operation of virtual reality application
CN109074441A (en) * 2016-04-29 2018-12-21 微软技术许可有限责任公司 Based on the certification watched attentively
CN109074441B (en) * 2016-04-29 2021-06-04 微软技术许可有限责任公司 Gaze-based authentication
CN110412257A (en) * 2019-07-22 2019-11-05 深圳市预防宝科技有限公司 A kind of combination is manually demarcated and the indicator paper block localization method of astral ray algorithm
CN110412257B (en) * 2019-07-22 2022-05-03 深圳市预防宝科技有限公司 Test paper block positioning method combining manual calibration and star ray algorithm
CN111428634A (en) * 2020-03-23 2020-07-17 中国人民解放军海军特色医学中心 Human eye sight tracking and positioning method adopting six-point method block fuzzy weighting
CN111428634B (en) * 2020-03-23 2023-06-27 中国人民解放军海军特色医学中心 Human eye line-of-sight tracking and positioning method adopting six-point method for blocking fuzzy weighting

Similar Documents

Publication Publication Date Title
CN103076876B (en) Based on character entry apparatus and the method for eye tracking and speech recognition
CN103927014A (en) Character input method and device
Jain et al. Real-time upper-body human pose estimation using a depth camera
CN103106401B (en) Mobile terminal iris recognition device with human-computer interaction mechanism
CN104157107B (en) A kind of human posture's apparatus for correcting based on Kinect sensor
CN101661329B (en) Operating control method and device of intelligent terminal
WO2020125499A1 (en) Operation prompting method and glasses
US20150338651A1 (en) Multimodal interation with near-to-eye display
US20170024015A1 (en) Pointing interaction method, apparatus, and system
CN110084192B (en) Rapid dynamic gesture recognition system and method based on target detection
CN103761519A (en) Non-contact sight-line tracking method based on self-adaptive calibration
CN101577812A (en) Method and system for post monitoring
CN104423569A (en) Pointing position detecting device, method and computer readable recording medium
CN103207709A (en) Multi-touch system and method
CN104821010A (en) Binocular-vision-based real-time extraction method and system for three-dimensional hand information
CN103324284A (en) Mouse control method based on face and eye detection
CN106326860A (en) Gesture recognition method based on vision
CN103677274A (en) Interactive projection method and system based on active vision
CN109858457A (en) Cheating movement based on OpenPose assists in identifying method and system
CN106814853A (en) A kind of eye control tracking based on machine learning
CN104992192A (en) Visual motion tracking telekinetic handwriting system
CN106598356B (en) Method, device and system for detecting positioning point of input signal of infrared emission source
CN105741326B (en) A kind of method for tracking target of the video sequence based on Cluster-Fusion
CN106599873A (en) Figure identity identification method based on three-dimensional attitude information
CN104898971A (en) Mouse pointer control method and system based on gaze tracking technology

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20140716

RJ01 Rejection of invention patent application after publication