CN106055095A - Method for display screen positioning and interaction of eyeballs - Google Patents
- Publication number
- CN106055095A CN106055095A CN201610340805.XA CN201610340805A CN106055095A CN 106055095 A CN106055095 A CN 106055095A CN 201610340805 A CN201610340805 A CN 201610340805A CN 106055095 A CN106055095 A CN 106055095A
- Authority
- CN
- China
- Prior art keywords
- screen
- information
- user
- eyeball
- enter
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/015—Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/012—Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
Abstract
The invention discloses a method for eye-based display screen positioning and interaction, in the technical field of positioning. The method comprises the following steps: first, a high-resolution camera mounted on the screen captures face information and acquires motion information of the eyeball pupil center; second, artificial-intelligence and image-processing algorithm software and a chip circuit are adopted, and the chip circuit processes the input data and sends the result to a host computer according to a protocol, so that the host can extract the relevant position, motion, and operation information; third, during operation, the face directly faces the screen, the eyeballs move, and an auxiliary operation assists in determining the screen positioning operation; fourth, a virtual cursor on the screen tracks the eyeball pupil center, and the position is confirmed as the actually required operating position on the screen through an operation such as a key press or a blink; and fifth, the position and motion information is exchanged with relevant equipment. The method enables rapid positioning and information interaction, is convenient to use, and improves working efficiency.
Description
Technical field:
The present invention relates to a method for eye-based display screen positioning and interaction, and belongs to the field of positioning technology.
Background technology:
On existing computers or mobile phones, positioning on the display (screen) is mainly performed with a mouse or a touch screen, in order to locate an object (such as a file) and carry out the next operation on it (such as selecting, copying, or opening). However, mouse or manual operation occupies one hand and responds slowly, which reduces working efficiency.
Summary of the invention:
In view of the above problems, the technical problem to be solved by the present invention is to provide a method for eye-based display screen positioning and interaction.
A method for eye-based display screen positioning and interaction according to the present invention comprises the following steps:
Step 1: a high-resolution camera mounted on the screen is used to capture face information and to acquire motion information of the eyeball pupil center; a dual camera or a 3D scanning technique is needed to locate the position of the eyeball/face relative to the screen.
Step 2: artificial-intelligence and image-processing algorithm software and a chip circuit are adopted; an external plug-in interface connects the device to a computer or mobile phone, on the premise that the camera is fixed and its position relative to the screen is determined; the chip circuit processes the input information and sends the result to the host computer according to a protocol, so that the host can extract the relevant position, motion, and operation information.
Step 3: during operation, besides the face directly facing the screen and the eyeballs moving, an auxiliary operation assists in determining the screen positioning operation.
Step 4: a virtual cursor on the screen tracks the eyeball pupil center; the position is then confirmed as the actually required operating position on the screen through an operation such as a key press or a blink.
Step 5: when combined with virtual reality, augmented reality, or brain-wave (EEG) control equipment, the device becomes an important embedded component of that equipment, used to capture eyeball movement and to exchange position and motion information with the relevant equipment.
The invention has the benefits that rapid positioning and information interaction can be achieved, the method is convenient to use, and working efficiency is high.
Brief description of the drawings:
For ease of explanation, the embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Fig. 1 is a structural schematic diagram of the present invention;
Fig. 2 is a flow chart of the present invention.
Detailed description of the invention:
To make the object, technical solutions, and advantages of the present invention clearer, the present invention is described below through the specific embodiments shown in the accompanying drawings. It should be understood, however, that these descriptions are merely exemplary and are not intended to limit the scope of the present invention. In addition, descriptions of well-known structures and techniques are omitted below to avoid unnecessarily obscuring the concepts of the present invention.
As shown in Figs. 1-2, this embodiment adopts the following technical solution:
Step 1: a high-resolution camera mounted on the screen is used to capture face information and to acquire motion information of the eyeball pupil center; a dual camera (the two cameras may have different pixel resolutions) or a 3D scanning technique (such as an ultrasonic scanner or a 3D laser scanner) is needed to locate the position of the eyeball/face relative to the screen.
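The pupil-center acquisition in Step 1 can be illustrated with a minimal sketch. The approach below is an assumption of ours, not the patent's actual algorithm: in a grayscale eye image the pupil is typically the darkest region, so thresholding and taking the centroid of the dark pixels gives a rough pupil-center estimate.

```python
# Minimal pupil-center estimate (illustrative assumption): the pupil is
# taken to be the darkest region of a grayscale eye image, so we threshold
# and compute the centroid of the dark pixels. A real system would use a
# calibrated dual camera or 3D scanning, as the patent describes.

def pupil_center(image, threshold=50):
    """Return the (row, col) centroid of pixels darker than `threshold`.

    `image` is a 2D list of grayscale intensities (0 = black, 255 = white).
    Returns None if no pixel is dark enough.
    """
    row_sum = col_sum = count = 0
    for r, line in enumerate(image):
        for c, value in enumerate(line):
            if value < threshold:
                row_sum += r
                col_sum += c
                count += 1
    if count == 0:
        return None
    return (row_sum / count, col_sum / count)

# A 5x5 synthetic "eye" image with a dark 2x2 pupil toward the lower right.
eye = [
    [200, 200, 200, 200, 200],
    [200, 200, 200, 200, 200],
    [200, 200, 200,  10,  10],
    [200, 200, 200,  10,  10],
    [200, 200, 200, 200, 200],
]
print(pupil_center(eye))  # -> (2.5, 3.5)
```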
Step 2: artificial-intelligence and image-processing algorithm software and a chip circuit are adopted; an external plug-in interface connects the device to a computer or mobile phone, on the premise that the camera is fixed and its position relative to the screen is determined; the device can also be embedded in the corresponding equipment, depending on the actual fabrication scheme; the chip circuit processes the input information (such as image or three-dimensional object information and auxiliary operation information) and sends the result to the host computer according to a protocol, so that the host can extract the relevant position, motion, and operation information.
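The "result sent to the host according to a protocol" in Step 2 could look like a simple fixed-layout packet. The field layout, byte order, and magic bytes below are entirely our assumption; the patent does not specify the protocol.

```python
import struct

# Hypothetical packet layout (our assumption; the patent leaves the
# protocol unspecified): a 2-byte magic, gaze x and y as unsigned 16-bit
# screen coordinates, and a 1-byte operation code
# (0 = none, 1 = blink, 2 = key press), all big-endian.
PACKET_FORMAT = ">2sHHB"
MAGIC = b"EY"

def pack_gaze(x, y, op=0):
    """Encode a gaze position and auxiliary operation into one packet."""
    return struct.pack(PACKET_FORMAT, MAGIC, x, y, op)

def unpack_gaze(packet):
    """Decode a packet back into (x, y, op); raises on a bad magic."""
    magic, x, y, op = struct.unpack(PACKET_FORMAT, packet)
    if magic != MAGIC:
        raise ValueError("not a gaze packet")
    return (x, y, op)

pkt = pack_gaze(640, 360, op=1)
print(unpack_gaze(pkt))  # -> (640, 360, 1)
```

On the host side, reading a fixed-size packet and unpacking it is enough to recover the position, motion, and operation information the description mentions.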
Step 3: during operation, besides the face directly facing the screen and the eyeballs moving, an auxiliary operation (such as a key press, a blink, or another action) assists in determining the screen positioning operation.
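One way a blink could serve as the auxiliary operation in Step 3 is to distinguish a deliberate confirmation blink from an involuntary one by its duration, measured in camera frames. The frame thresholds below are illustrative assumptions, not values from the patent.

```python
# Treat a blink as deliberate only if the eye stays closed for a number
# of consecutive frames within an assumed window (too short = involuntary
# blink, too long = eyes resting). Threshold values are illustrative.

def detect_confirm_blink(eye_closed_frames, min_frames=3, max_frames=15):
    """Scan a per-frame eye state sequence (True = eye closed) and return
    the frame index at which a deliberate blink ends, or None if the
    sequence contains no qualifying blink."""
    run = 0
    for i, closed in enumerate(eye_closed_frames):
        if closed:
            run += 1
        else:
            if min_frames <= run <= max_frames:
                return i  # the confirmation blink ended on this frame
            run = 0
    return None

# A single-frame flicker is ignored; a four-frame closure confirms.
frames = [False, True, False, False, True, True, True, True, False]
print(detect_confirm_blink(frames))  # -> 8
```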
Step 4: a virtual cursor on the screen tracks the eyeball pupil center; the position is then confirmed as the actually required operating position on the screen through an operation such as a key press or a blink.
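For the virtual cursor of Step 4 to track the pupil, the pupil coordinate in camera space must be mapped to a screen coordinate. A minimal sketch follows, assuming a two-point calibration (the user looks at the top-left and bottom-right screen corners); the patent only states that the camera must be fixed relative to the screen, so this calibration procedure is our assumption.

```python
# Map a pupil-center coordinate (camera image space) to a screen pixel via
# a per-user linear calibration. The two-point calibration (top-left and
# bottom-right corners) is an assumed procedure, not the patent's.

def make_gaze_mapper(pupil_tl, pupil_br, screen_w, screen_h):
    """Build a mapper from pupil (x, y) to screen pixels, given the pupil
    positions recorded while the user looked at the top-left and
    bottom-right screen corners."""
    (x0, y0), (x1, y1) = pupil_tl, pupil_br

    def to_screen(px, py):
        sx = (px - x0) / (x1 - x0) * screen_w
        sy = (py - y0) / (y1 - y0) * screen_h
        # Clamp so the virtual cursor never leaves the screen.
        return (min(max(sx, 0), screen_w), min(max(sy, 0), screen_h))

    return to_screen

mapper = make_gaze_mapper((10, 8), (50, 38), 1920, 1080)
print(mapper(30, 23))  # pupil midway between the corners -> (960.0, 540.0)
```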
Step 5: when combined with virtual reality, augmented reality, or brain-wave (EEG) control equipment, the device becomes an important embedded component of that equipment, used to capture eyeball movement and to exchange position and motion information with the relevant equipment.
Further, the flow of the artificial-intelligence and image-processing algorithm software is as follows: the device is opened, and the system checks whether a user is present; if not, it returns; if so, it enters face recognition. Face recognition determines whether this is a new user: if so, user information initialization is entered; otherwise the user proceeds to use the system. If the system does not recognize the user as entering, it checks whether the user has left; if the user has left, user information is reinitialized, after which the system again checks whether to enter, proceeding to facial information initialization if so. Otherwise the system enters the main loop: analyze the face relative to the screen → analyze the change in eyeball movement → calculate the screen coordinate the eyeball is looking at → combine the auxiliary input to derive the user operation → send the user operation to the host → check whether the user has left. If the user chooses to leave, the system returns to checking whether a user is present; otherwise it continues analyzing the face relative to the screen. For a new user, after entering user information initialization, the user chooses whether to initialize; if not, the flow returns to the new-user state; if so, the flow is: facial information initialization → eyeball information initialization → save the user initialization information → check the initialization effect. If the check passes, the system proceeds to analyzing the face relative to the screen; if it fails, the user chooses whether to abandon initialization, returning to facial information initialization if not, or to the new-user state if so.
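The flow above is essentially a state machine. The sketch below traces the states visited in one simplified session; the state names are ours (the patent describes the flow only in prose), and branches such as abandoning initialization are omitted for brevity.

```python
# A compact state-machine reading of the recognition/initialization flow.
# State names are our own labels for the steps named in the description.

def run_flow(is_new_user, init_check_passes):
    """Trace the states visited for one simplified user session."""
    trace = ["open_device", "check_user_present", "face_recognition"]
    if is_new_user:
        trace += ["init_user_info", "init_face_info", "init_eye_info",
                  "save_init_info", "check_init_effect"]
        if not init_check_passes:
            # Failed initialization: the user may retry or abandon.
            trace.append("retry_or_abandon")
            return trace
    # Main loop body, as in the description's arrow chain.
    trace += ["analyze_face_vs_screen", "analyze_eye_motion",
              "compute_gaze_coordinate", "combine_aux_input",
              "send_operation_to_host"]
    return trace

print(run_flow(is_new_user=True, init_check_passes=True)[-1])
# -> send_operation_to_host
```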
The basic principles, principal features, and advantages of the present invention have been shown and described above. Those skilled in the art should appreciate that the present invention is not limited to the above embodiments, which, together with the description, merely illustrate its principles. Various changes and improvements can be made without departing from the spirit and scope of the present invention, and all such changes and improvements fall within the scope of the claimed invention, which is defined by the appended claims and their equivalents.
Claims (2)
1. A method for eye-based display screen positioning and interaction, characterized in that the method comprises:
Step 1: a high-resolution camera mounted on the screen is used to capture face information and to acquire motion information of the eyeball pupil center; a dual camera or a 3D scanning technique is needed to locate the position of the eyeball/face relative to the screen;
Step 2: artificial-intelligence and image-processing algorithm software and a chip circuit are adopted; an external plug-in interface connects the device to a computer or mobile phone, on the premise that the camera is fixed and its position relative to the screen is determined; the chip circuit processes the input information and sends the result to the host computer according to a protocol, so that the host can extract the relevant position, motion, and operation information;
Step 3: during operation, besides the face directly facing the screen and the eyeballs moving, an auxiliary operation assists in determining the screen positioning operation;
Step 4: a virtual cursor on the screen tracks the eyeball pupil center; the position is then confirmed as the actually required operating position on the screen through an operation such as a key press or a blink;
Step 5: when combined with virtual reality, augmented reality, or brain-wave (EEG) control equipment, the device becomes an important embedded component of that equipment, used to capture eyeball movement and to exchange position and motion information with the relevant equipment.
2. The method for eye-based display screen positioning and interaction according to claim 1, characterized in that the flow of the artificial-intelligence and image-processing algorithm software is: the device is opened, and the system checks whether a user is present; if not, it returns; if so, it enters face recognition; face recognition determines whether this is a new user: if so, user information initialization is entered; otherwise the user proceeds to use the system; if the system does not recognize the user as entering, it checks whether the user has left; if the user has left, user information is reinitialized, after which the system again checks whether to enter, proceeding to facial information initialization if so; otherwise the system enters the main loop: analyze the face relative to the screen → analyze the change in eyeball movement → calculate the screen coordinate the eyeball is looking at → combine the auxiliary input to derive the user operation → send the user operation to the host → check whether the user has left; if the user chooses to leave, the system returns to checking whether a user is present, and otherwise continues analyzing the face relative to the screen; for a new user, after entering user information initialization, the user chooses whether to initialize; if not, the flow returns to the new-user state; if so, the flow is: facial information initialization → eyeball information initialization → save the user initialization information → check the initialization effect; if the check passes, the system proceeds to analyzing the face relative to the screen; if it fails, the user chooses whether to abandon initialization, returning to facial information initialization if not, or to the new-user state if so.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610340805.XA CN106055095A (en) | 2016-05-23 | 2016-05-23 | Method for display screen positioning and interaction of eyeballs |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106055095A (en) | 2016-10-26 |
Family
ID=57177311
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610340805.XA Pending CN106055095A (en) | 2016-05-23 | 2016-05-23 | Method for display screen positioning and interaction of eyeballs |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106055095A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106681514A (en) * | 2017-01-11 | 2017-05-17 | 广东小天才科技有限公司 | Virtual reality device and implementation method thereof |
CN113126762A (en) * | 2021-04-21 | 2021-07-16 | 惠东县人民医院 | Medical data checking device and method for monitoring medical behaviors |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1889016A (en) * | 2006-07-25 | 2007-01-03 | 周辰 | Eye-to-computer cursor automatic positioning controlling method and system |
US20110018804A1 (en) * | 2009-07-22 | 2011-01-27 | Sony Corporation | Operation control device and operation control method |
CN102693022A (en) * | 2011-12-12 | 2012-09-26 | 苏州科雷芯电子科技有限公司 | Vision tracking and voice identification mouse system |
CN102707793A (en) * | 2011-03-28 | 2012-10-03 | 宗鹏 | Eye-control mouse |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021129064A1 (en) | Posture acquisition method and device, and key point coordinate positioning model training method and device | |
Zhou et al. | A novel finger and hand pose estimation technique for real-time hand gesture recognition | |
Zimmermann et al. | Learning to estimate 3d hand pose from single rgb images | |
CN108776773B (en) | Three-dimensional gesture recognition method and interaction system based on depth image | |
Wang et al. | Video analysis of human dynamics—a survey | |
Murthy et al. | A review of vision based hand gestures recognition | |
Wu et al. | Hand modeling, analysis and recognition | |
Kaur et al. | A review: Study of various techniques of Hand gesture recognition | |
Hasan et al. | Hand gesture modeling and recognition using geometric features: a review | |
CN110362210B (en) | Human-computer interaction method and device integrating eye movement tracking and gesture recognition in virtual assembly | |
Turk et al. | Perceptual interfaces | |
US20130335318A1 (en) | Method and apparatus for doing hand and face gesture recognition using 3d sensors and hardware non-linear classifiers | |
Yang et al. | Hand gesture recognition: An overview | |
Madhuri et al. | Vision-based sign language translation device | |
Badi et al. | Hand posture and gesture recognition technology | |
Caridakis et al. | Synthesizing gesture expressivity based on real sequences | |
Nooruddin et al. | HGR: Hand-gesture-recognition based text input method for AR/VR wearable devices | |
CN107329564B (en) | Man-machine finger guessing method based on gesture intelligent perception and man-machine cooperation mechanism | |
Aran et al. | Sign language tutoring tool | |
Dan et al. | Survey on hand gesture recognition approaches | |
CN106055095A (en) | Method for display screen positioning and interaction of eyeballs | |
Asaari et al. | Intelligent biometric group hand tracking (IBGHT) database for visual hand tracking research and development | |
Ueng et al. | Vision based multi-user human computer interaction | |
Tu et al. | Face and gesture based human computer interaction | |
Xu et al. | Bare hand gesture recognition with a single color camera |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20161026 |