CN109240510A - Augmented reality human-computer interaction device and control method based on gaze tracking - Google Patents

Augmented reality human-computer interaction device and control method based on gaze tracking

Info

Publication number
CN109240510A
Authority
CN
China
Prior art keywords
eye
layer
module
interactive system
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811278631.4A
Other languages
Chinese (zh)
Other versions
CN109240510B (en)
Inventor
崔笑宇
纪欣伯
陈卫兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeastern University China
Original Assignee
Northeastern University China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeastern University China filed Critical Northeastern University China
Priority to CN201811278631.4A
Publication of CN109240510A
Priority to PCT/CN2019/088729
Application granted
Publication of CN109240510B
Active legal status
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 - Eye tracking input arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Eye Examination Apparatus (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to an augmented reality human-computer interaction device and control method based on gaze tracking, belonging to the fields of gaze tracking and embedded systems. The device comprises: a spectacle frame, a left interactive system, and a right interactive system; each system includes a miniature eye-tracking camera, an optical-waveguide AR lens, an embedded processor, a drive control board, and a hub slot. The method comprises: 1) establishing an eye-movement interactive system; 2) training a convolutional neural network; 3) processing the acquired images; 4) recognizing eye actions. On the one hand, the invention improves the means and efficiency by which people obtain useful information; on the other hand, interaction through gaze complements voice and gesture operation, so that interaction remains possible when those two channels are occupied.

Description

Augmented reality human-computer interaction device and control method based on gaze tracking
Technical field
The invention belongs to the fields of gaze tracking and embedded systems, and in particular relates to an augmented reality human-computer interaction device and control method based on gaze tracking.
Background technique
As a technology that combines the virtual with the real, augmented reality will be widely applied in industries such as medicine, industrial design, the military, and entertainment; it is expected to become the universal computing platform of the future and to change the way people work and live. The development of machine intelligence has made computers increasingly reliable at understanding natural human intent, giving intelligent interaction the opportunity to move from the laboratory into practice. The development of GPUs and other hardware has greatly improved computing power, not only broadening the applications of deep learning and artificial intelligence but also further promoting the development of augmented reality.
With the emergence of new interactive devices, the ways in which people interact with computers continue to multiply, and how to communicate with a computing platform efficiently, quickly, and conveniently has become a hot research topic. In existing devices such as HoloLens and Magic Leap, human-computer interaction remains limited to voice and gestures; no mature gaze-based interaction has yet appeared, which to some extent limits the advantages of augmented reality. The gaze-tracking glasses developed by companies such as Tobii and SMI serve only for gaze analysis and have not risen to the level of interaction and control. Comparing the technical ecosystems of AR and eye movement, gaze as an interaction modality is highly compatible with augmented reality glasses, and it offers a new opportunity to improve the way people obtain useful information.
Deep learning is a branch of machine learning: a class of algorithms that attempt to abstract data at a high level using multiple processing layers composed of complex structures or multiple non-linear transformations. A typical deep-learning architecture may contain many layers of neurons and millions of parameters. Among existing deep-learning frameworks, the convolutional neural network (CNN) is one of the most popular architectures: its artificial neurons respond to surrounding units within a local receptive field, and it produces better results on images than other deep feed-forward networks, making it a very attractive deep-learning structure for image processing.
Summary of the invention
In view of the above technical problems, the present invention provides an augmented reality human-computer interaction device based on gaze tracking, characterized by comprising: a spectacle frame, a left interactive system, and a right interactive system;
The left interactive system and the right interactive system are identical in structure and symmetrically arranged; each system includes a miniature eye-tracking camera, an optical-waveguide AR lens, an embedded processor, a drive control board, and a hub slot;
The hub slot is arranged on the spectacle frame;
The drive control board is mounted on the spectacle frame and connected to the optical-waveguide AR lens, with the connecting wires housed in the hub slot;
The embedded processor is mounted on the drive control board;
The optical-waveguide AR lens is used to display the output information of the drive control board;
The optical-waveguide AR lens and the miniature eye-tracking camera are mounted on the spectacle frame, within the range of human vision.
The embedded processor has a Pascal GPU architecture and an independent operating system.
The miniature eye-tracking camera is a camera capable of recording raw RGB three-channel images.
A control method for the augmented reality human-computer interaction device based on gaze tracking, using the above augmented reality human-computer interaction device based on gaze tracking, comprises the following steps:
Step 1: establish an eye-movement interactive system on the interaction device; the eye-movement interactive system uses a convolutional neural network based on the CNN architecture;
Step 2: train the convolutional neural network:
The training-set images used by the model of the convolutional neural network comprise simulated eye images captured from three-dimensional eye models with different skin colors, ethnicities, iris colors, and eyeball sizes, at different angles, under different simulated illumination, and with different gaze directions;
The training-set images are sharpened to emphasize edges for learning, and resized to 256 x 256 pixels;
The model is constructed following the ResNet network; the training process is as follows:
The input image passes successively through one BatchNorm (BN) layer, one convolution (CONV) layer with a 7x7 kernel, and one rectified linear unit (ReLU) layer, and then enters the convolutional network;
The convolutional network comprises a first module, a second module, a third module, and a fourth module; the input image passes through the 4 modules of the convolutional network in sequence;
Each module is composed of several sub-networks, and the sub-networks within the same module are identical;
Each sub-network in a module is formed by connecting, in sequence, one BN layer, one CONV layer with a 3x3 kernel, and one ReLU layer;
The first sub-network of the first module takes the received input image as its input; the input of every other sub-network in the first module is the sum of the output and the input of the preceding sub-network;
The input of the first sub-network of each subsequent module is the sum of the output and the input of the last sub-network of the preceding module; the input of every other sub-network in these modules is the sum of the output and the input of the preceding sub-network;
On the one hand, the output of the fourth module is reduced in dimension and passed through a fully connected (FC) layer to obtain 32 iris feature points; on the other hand, it passes successively through one BN layer, one 3x3 CONV layer, and one ReLU layer, is then reduced in dimension, and passes through an FC layer to obtain 33 other feature points;
The pupil center is obtained from the 32 iris feature points; eye actions are recognized from the 33 other feature points; taking all 55 feature points as input, 2 gaze vectors are obtained through 3 FC layers; the intersection of the two gaze vectors determines the position of the spatial gaze focus of the human eyes;
The obtained pupil center, gaze vectors, and gaze focus serve as the training result, so that the eye-movement interactive system meets its usage requirements;
Step 3: the eye-movement interactive system performs the following operations, in order, on the raw RGB three-channel images of the left and right eyes acquired respectively by the 2 miniature eye-tracking cameras:
(1) apply histogram equalization to the red channel of the image to enhance image detail in most scenes;
(2) increase the contrast to accentuate the color difference between skin and eyeball and between the white of the eye and the iris;
(3) apply sharpening to highlight edge features;
(4) resize the image to 256 x 256 pixels;
Step 4: the eye-movement interactive system recognizes the gaze trajectory in the images processed in step 3, then identifies the patterns drawn by the gaze trajectory to trigger the corresponding interactive actions; eye actions are recognized at the same time.
Beneficial effects of the present invention:
The present invention proposes an augmented reality human-computer interaction device and control method based on gaze tracking. On the one hand it improves the means and efficiency by which people obtain useful information; on the other hand, interaction through gaze complements voice and gesture operation, so that interaction remains possible when those two channels are occupied.
The present invention uses an eye-movement interactive system built on a CNN-based convolutional neural network, which allows ordinary cameras, inferior to infrared cameras, to be applied, improving the accuracy of gaze tracking while saving cost.
The present invention is rationally designed, easy to implement, and of good practical value.
Detailed description of the invention
Fig. 1 is a structural schematic diagram of the augmented reality human-computer interaction device based on gaze tracking described in the specific embodiments of the invention.
In the figure: 1, miniature eye-tracking camera; 2, optical-waveguide AR lens; 3, drive control board; 4, spectacle frame; 5, hub slot.
Specific embodiment
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described below in conjunction with the accompanying drawing and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention and are not intended to limit it.
The present invention proposes an augmented reality human-computer interaction device based on gaze tracking, as shown in Fig. 1, comprising: a spectacle frame 4, a left interactive system, and a right interactive system;
The left interactive system and the right interactive system are mounted on the left side and the right side of the spectacle frame 4, respectively;
The left interactive system and the right interactive system are identical in structure and symmetrically arranged; each system includes a miniature eye-tracking camera 1, an optical-waveguide AR lens 2, an embedded processor, a drive control board 3, and a hub slot 5;
The hub slot 5 is arranged on the spectacle frame 4;
The drive control board 3 is mounted on the spectacle frame 4 and connected to the optical-waveguide AR lens 2, with the connecting wires housed in the hub slot 5;
The embedded processor is mounted on the drive control board 3;
The embedded processor is the control center and image-processing center of the device: it processes the signals returned by the miniature eye-tracking camera 1 and sends the result to the optical-waveguide lens for display; it has a Pascal GPU architecture, and therefore strong image-processing capability, as well as an independent operating system;
The miniature eye-tracking camera 1 is used to record raw RGB three-channel images of the eye, realizing human-computer interaction through eye tracking;
The optical-waveguide AR lens 2 is used to display the output information of the drive control board 3;
The optical-waveguide AR lens 2 and the miniature eye-tracking camera 1 are mounted on the spectacle frame 4, within the range of human vision;
The present invention further proposes a control method for the augmented reality human-computer interaction device based on gaze tracking, using the above augmented reality human-computer interaction device based on gaze tracking and comprising the following steps:
Step 1: establish an eye-movement interactive system on the interaction device; the eye-movement interactive system uses a convolutional neural network based on the CNN architecture;
Step 2: train the convolutional neural network:
The training-set images used by the model of the convolutional neural network comprise simulated eye images captured from three-dimensional eye models with different skin colors, ethnicities, iris colors, and eyeball sizes, at different angles, under different simulated illumination, and with different gaze directions;
The training-set images are sharpened to emphasize edges for learning, and resized to 256 x 256 pixels;
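As a concrete illustration, the sharpening and resizing of the training images could look as follows. This is a minimal sketch assuming OpenCV; the 3x3 sharpening kernel is an illustrative choice, since the patent specifies sharpening but not the operator:

```python
import cv2
import numpy as np

def prepare_training_image(path: str) -> np.ndarray:
    """Sharpen a simulated eye image to emphasize edges, then resize to 256 x 256."""
    img = cv2.imread(path)  # 3-channel BGR image
    # Illustrative sharpening kernel (the patent does not name the operator).
    kernel = np.array([[0, -1, 0],
                       [-1, 5, -1],
                       [0, -1, 0]], dtype=np.float32)
    sharpened = cv2.filter2D(img, -1, kernel)
    return cv2.resize(sharpened, (256, 256))
```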
The model is constructed following the ResNet network; the training process is as follows:
The input image passes successively through one BatchNorm (BN) layer, one convolution (CONV) layer with a 7x7 kernel, and one rectified linear unit (ReLU) layer, and then enters the convolutional network;
The convolutional network comprises a first module, a second module, a third module, and a fourth module; the input image passes through the 4 modules of the convolutional network in sequence;
Each module is composed of several sub-networks, and the sub-networks within the same module are identical;
Each sub-network in a module is formed by connecting, in sequence, one BN layer, one CONV layer with a 3x3 kernel, and one ReLU layer;
The first sub-network of the first module takes the received input image as its input; the input of every other sub-network in the first module is the sum of the output and the input of the preceding sub-network;
The input of the first sub-network of each subsequent module is the sum of the output and the input of the last sub-network of the preceding module; the input of every other sub-network in these modules is the sum of the output and the input of the preceding sub-network;
On the one hand, the output of the fourth module is reduced in dimension and passed through a fully connected (FC) layer to obtain 32 iris feature points; on the other hand, it passes successively through one BN layer, one 3x3 CONV layer, and one ReLU layer, is then reduced in dimension, and passes through an FC layer to obtain 33 other feature points;
The pupil center is obtained from the 32 iris feature points; eye actions are recognized from the 33 other feature points; taking all 55 feature points as input, 2 gaze vectors are obtained through 3 FC layers; the intersection of the two gaze vectors determines the position of the spatial gaze focus of the human eyes;
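For concreteness, a minimal PyTorch sketch of the network just described follows. The patent fixes the layer types, the 4-module residual wiring, and the heads (32 iris points, 33 other points, 3 FC layers producing 2 gaze vectors), but not the channel width, the number of sub-networks per module, the form of the dimensionality reduction, or the dimensionality of each point; 64 channels, 2 sub-networks per module, global average pooling, 2-D feature points, and 3-D gaze vectors are assumptions here:

```python
import torch
import torch.nn as nn

class SubNet(nn.Module):
    """One sub-network: BN -> 3x3 CONV -> ReLU, with an additive skip connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(channels),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        # The next sub-network receives this sub-network's output plus its input.
        return self.body(x) + x

class GazeNet(nn.Module):
    """Sketch of the described ResNet-style network (widths and depths assumed)."""
    def __init__(self, channels: int = 64, subnets_per_module: int = 2):
        super().__init__()
        # One BN layer, one 7x7 CONV layer, one ReLU layer.
        self.stem = nn.Sequential(
            nn.BatchNorm2d(3),
            nn.Conv2d(3, channels, kernel_size=7, padding=3),
            nn.ReLU(inplace=True),
        )
        # Four modules, each made of identical sub-networks chained in sequence.
        self.backbone = nn.Sequential(*[
            nn.Sequential(*[SubNet(channels) for _ in range(subnets_per_module)])
            for _ in range(4)
        ])
        self.pool = nn.AdaptiveAvgPool2d(1)          # "dimensionality reduction"
        self.iris_fc = nn.Linear(channels, 32 * 2)   # 32 iris feature points (2-D assumed)
        self.other_branch = nn.Sequential(           # BN -> 3x3 CONV -> ReLU
            nn.BatchNorm2d(channels),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.other_fc = nn.Linear(channels, 33 * 2)  # 33 other feature points
        # 3 FC layers producing 2 gaze vectors. The patent counts 55 feature
        # points in total although 32 + 33 = 65 are predicted; all predicted
        # points are fed in here (assumption).
        self.gaze_fc = nn.Sequential(
            nn.Linear((32 + 33) * 2, 128), nn.ReLU(inplace=True),
            nn.Linear(128, 64), nn.ReLU(inplace=True),
            nn.Linear(64, 2 * 3),
        )

    def forward(self, x):                            # x: (N, 3, 256, 256)
        feat = self.backbone(self.stem(x))
        iris = self.iris_fc(self.pool(feat).flatten(1))
        other = self.other_fc(self.pool(self.other_branch(feat)).flatten(1))
        points = torch.cat([iris, other], dim=1)
        gaze = self.gaze_fc(points).view(-1, 2, 3)   # two 3-D gaze vectors
        return iris.view(-1, 32, 2), other.view(-1, 33, 2), gaze
```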
The obtained pupil center, gaze vectors, and gaze focus serve as the training result, so that the eye-movement interactive system meets its usage requirements;
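Two gaze rays in 3-D space rarely intersect exactly, so a common practical reading of "the intersection of the two gaze vectors" is the midpoint of the shortest segment between the two rays. A sketch of that computation, assuming each eye contributes a ray given by an origin (eye center) and a gaze direction:

```python
import numpy as np

def gaze_focus(o_l, d_l, o_r, d_r):
    """Midpoint of the shortest segment between two gaze rays.

    o_l, o_r: 3-D ray origins (eye centers); d_l, d_r: 3-D gaze directions.
    """
    d_l = d_l / np.linalg.norm(d_l)
    d_r = d_r / np.linalg.norm(d_r)
    # Solve for t, s minimizing |(o_l + t*d_l) - (o_r + s*d_r)|.
    b = d_l @ d_r
    w = o_l - o_r
    denom = 1.0 - b * b                      # 0 when the rays are parallel
    if abs(denom) < 1e-9:
        return None                          # parallel gaze: no well-defined focus
    t = (b * (w @ d_r) - (w @ d_l)) / denom
    s = ((w @ d_r) - b * (w @ d_l)) / denom
    p_l = o_l + t * d_l                      # closest point on the left ray
    p_r = o_r + s * d_r                      # closest point on the right ray
    return (p_l + p_r) / 2.0
```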
Step 3: the eye-movement interactive system performs the following operations, in order, on the raw RGB three-channel images of the left and right eyes acquired respectively by the 2 miniature eye-tracking cameras 1 (a code sketch of these operations follows the list):
(1) apply histogram equalization to the red channel of the image to enhance image detail in most scenes;
(2) increase the contrast to accentuate the color difference between skin and eyeball and between the white of the eye and the iris;
(3) apply sharpening to highlight edge features;
(4) resize the image to 256 x 256 pixels;
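A minimal sketch of the four operations above, assuming OpenCV; the contrast gain and the sharpening kernel are illustrative values, since the patent specifies the operations but not their parameters:

```python
import cv2
import numpy as np

def preprocess_eye_image(img_bgr: np.ndarray) -> np.ndarray:
    """Apply the four operations of step 3 to one raw three-channel eye image."""
    # (1) Histogram equalization on the red channel only.
    b, g, r = cv2.split(img_bgr)
    r = cv2.equalizeHist(r)
    img = cv2.merge([b, g, r])
    # (2) Raise contrast to accentuate skin/eyeball and white-of-eye/iris differences.
    img = cv2.convertScaleAbs(img, alpha=1.5, beta=0)
    # (3) Sharpen to highlight edge features.
    kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=np.float32)
    img = cv2.filter2D(img, -1, kernel)
    # (4) Resize to 256 x 256 pixels.
    return cv2.resize(img, (256, 256))
```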
Step 4: the eye-movement interactive system recognizes the gaze trajectory in the images processed in step 3, then identifies the patterns drawn by the gaze trajectory to trigger the corresponding interactive actions; eye actions are recognized at the same time;
Here, the blink action among the eye actions serves as the switch for the interactive actions of the eye-movement interactive system.
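As an illustration of the blink switch, a minimal sketch assuming an upstream classifier reports, per frame, whether a deliberate blink was recognized among the eye actions:

```python
class BlinkSwitch:
    """Toggle gaze interaction on and off whenever a deliberate blink is recognized."""

    def __init__(self) -> None:
        self.active = False  # gaze interaction starts disabled

    def update(self, blink_detected: bool) -> bool:
        """Call once per frame; returns whether gaze interaction is currently enabled."""
        if blink_detected:
            self.active = not self.active
        return self.active
```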

Claims (4)

1. An augmented reality human-computer interaction device based on gaze tracking, characterized by comprising: a spectacle frame, a left interactive system, and a right interactive system;
The left interactive system and the right interactive system are identical in structure and symmetrically arranged; each system includes a miniature eye-tracking camera, an optical-waveguide AR lens, an embedded processor, a drive control board, and a hub slot;
The hub slot is arranged on the spectacle frame;
The drive control board is mounted on the spectacle frame and connected to the optical-waveguide AR lens, with the connecting wires housed in the hub slot;
The embedded processor is mounted on the drive control board;
The optical-waveguide AR lens and the miniature eye-tracking camera are mounted on the spectacle frame, within the range of human vision.
2. The augmented reality human-computer interaction device based on gaze tracking according to claim 1, characterized in that the embedded processor has a Pascal GPU architecture and an independent operating system.
3. The augmented reality human-computer interaction device based on gaze tracking according to claim 1, characterized in that the miniature eye-tracking camera is a camera capable of recording raw RGB three-channel images.
4. A control method for an augmented reality human-computer interaction device based on gaze tracking, characterized by using the augmented reality human-computer interaction device based on gaze tracking according to claim 3 and comprising the following steps:
Step 1: establish an eye-movement interactive system on the interaction device; the eye-movement interactive system uses a convolutional neural network based on the CNN architecture;
Step 2: train the convolutional neural network:
the training-set images used by the model of the convolutional neural network comprise simulated eye images captured from three-dimensional eye models with different skin colors, ethnicities, iris colors, and eyeball sizes, at different angles, under different simulated illumination, and with different gaze directions;
the training-set images are sharpened to emphasize edges for learning, and resized to 256 x 256 pixels;
the model is constructed following the ResNet network; the training process is as follows:
the input image passes successively through one BatchNorm (BN) layer, one convolution (CONV) layer with a 7x7 kernel, and one rectified linear unit (ReLU) layer, and then enters the convolutional network;
the convolutional network comprises a first module, a second module, a third module, and a fourth module; the input image passes through the 4 modules of the convolutional network in sequence;
each module is composed of several sub-networks, and the sub-networks within the same module are identical;
each sub-network in a module is formed by connecting, in sequence, one BN layer, one CONV layer with a 3x3 kernel, and one ReLU layer;
the first sub-network of the first module takes the received input image as its input; the input of every other sub-network in the first module is the sum of the output and the input of the preceding sub-network;
the input of the first sub-network of each subsequent module is the sum of the output and the input of the last sub-network of the preceding module; the input of every other sub-network in these modules is the sum of the output and the input of the preceding sub-network;
on the one hand, the output of the fourth module is reduced in dimension and passed through a fully connected (FC) layer to obtain 32 iris feature points; on the other hand, it passes successively through one BN layer, one 3x3 CONV layer, and one ReLU layer, is then reduced in dimension, and passes through an FC layer to obtain 33 other feature points;
the pupil center is obtained from the 32 iris feature points; eye actions are recognized from the 33 other feature points; taking all 55 feature points as input, 2 gaze vectors are obtained through 3 FC layers; the intersection of the two gaze vectors determines the position of the spatial gaze focus of the human eyes;
the obtained pupil center, gaze vectors, and gaze focus serve as the training result, so that the eye-movement interactive system meets its usage requirements;
Step 3: the eye-movement interactive system performs the following operations, in order, on the raw RGB three-channel images of the left and right eyes acquired respectively by the 2 miniature eye-tracking cameras:
(1) apply histogram equalization to the red channel of the image to enhance image detail in most scenes;
(2) increase the contrast to accentuate the color difference between skin and eyeball and between the white of the eye and the iris;
(3) apply sharpening to highlight edge features;
(4) resize the image to 256 x 256 pixels;
Step 4: the eye-movement interactive system recognizes the gaze trajectory in the images processed in step 3, then identifies the patterns drawn by the gaze trajectory to trigger the corresponding interactive actions; eye actions are recognized at the same time.
CN201811278631.4A 2018-10-30 2018-10-30 Augmented reality man-machine interaction equipment based on sight tracking and control method Active CN109240510B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811278631.4A CN109240510B (en) 2018-10-30 2018-10-30 Augmented reality man-machine interaction equipment based on sight tracking and control method
PCT/CN2019/088729 WO2020087919A1 (en) 2018-10-30 2019-05-28 Augmented reality human-computer interaction device and a control method based on gaze tracking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811278631.4A CN109240510B (en) 2018-10-30 2018-10-30 Augmented reality man-machine interaction equipment based on sight tracking and control method

Publications (2)

Publication Number Publication Date
CN109240510A 2019-01-18
CN109240510B CN109240510B (en) 2023-12-26

Family

ID=65079352

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811278631.4A Active CN109240510B (en) 2018-10-30 2018-10-30 Augmented reality man-machine interaction equipment based on sight tracking and control method

Country Status (2)

Country Link
CN (1) CN109240510B (en)
WO (1) WO2020087919A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020087919A1 (en) * 2018-10-30 2020-05-07 东北大学 Augmented reality human-computer interaction device and a control method based on gaze tracking
CN116185192A (en) * 2023-02-09 2023-05-30 北京航空航天大学 Eye movement identification VR interaction method based on denoising variation encoder
CN117289788A (en) * 2022-11-28 2023-12-26 清华大学 Interaction method, interaction device, electronic equipment and computer storage medium

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070011609A1 (en) * 2005-07-07 2007-01-11 Florida International University Board Of Trustees Configurable, multimodal human-computer interface system and method
CN102749991A (en) * 2012-04-12 2012-10-24 广东百泰科技有限公司 Non-contact free space eye-gaze tracking method suitable for man-machine interaction
CN105589551A (en) * 2014-10-22 2016-05-18 褚秀清 Eye tracking method for human-computer interaction of mobile device
CN106354264A (en) * 2016-09-09 2017-01-25 电子科技大学 Real-time man-machine interaction system based on eye tracking and a working method of the real-time man-machine interaction system
CN106407772A (en) * 2016-08-25 2017-02-15 北京中科虹霸科技有限公司 Human-computer interaction and identity authentication device and method suitable for virtual reality equipment
CN106447184A (en) * 2016-09-21 2017-02-22 中国人民解放军国防科学技术大学 Unmanned aerial vehicle operator state evaluation method based on multi-sensor measurement and neural network learning
CN107105333A (en) * 2017-04-26 2017-08-29 电子科技大学 A kind of VR net casts exchange method and device based on Eye Tracking Technique
CN107545302A (en) * 2017-08-02 2018-01-05 北京航空航天大学 A kind of united direction of visual lines computational methods of human eye right and left eyes image
US20180018451A1 (en) * 2016-07-14 2018-01-18 Magic Leap, Inc. Deep neural network for iris identification
CN107656613A (en) * 2017-09-08 2018-02-02 国网山东省电力公司电力科学研究院 A kind of man-machine interactive system and its method of work based on the dynamic tracking of eye
US20180053056A1 (en) * 2016-08-22 2018-02-22 Magic Leap, Inc. Augmented reality display device with deep learning sensors
DE102016118647A1 (en) * 2016-09-30 2018-04-05 Deutsche Telekom Ag Augmented reality communication system and augmented reality interaction device
US20180181592A1 (en) * 2016-12-27 2018-06-28 Adobe Systems Incorporated Multi-modal image ranking using neural networks

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109240510B (en) * 2018-10-30 2023-12-26 东北大学 Augmented reality man-machine interaction equipment based on sight tracking and control method

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070011609A1 (en) * 2005-07-07 2007-01-11 Florida International University Board Of Trustees Configurable, multimodal human-computer interface system and method
CN102749991A (en) * 2012-04-12 2012-10-24 广东百泰科技有限公司 Non-contact free space eye-gaze tracking method suitable for man-machine interaction
CN105589551A (en) * 2014-10-22 2016-05-18 褚秀清 Eye tracking method for human-computer interaction of mobile device
US20180018451A1 (en) * 2016-07-14 2018-01-18 Magic Leap, Inc. Deep neural network for iris identification
US20180053056A1 (en) * 2016-08-22 2018-02-22 Magic Leap, Inc. Augmented reality display device with deep learning sensors
CN106407772A (en) * 2016-08-25 2017-02-15 北京中科虹霸科技有限公司 Human-computer interaction and identity authentication device and method suitable for virtual reality equipment
CN106354264A (en) * 2016-09-09 2017-01-25 电子科技大学 Real-time man-machine interaction system based on eye tracking and a working method of the real-time man-machine interaction system
CN106447184A (en) * 2016-09-21 2017-02-22 中国人民解放军国防科学技术大学 Unmanned aerial vehicle operator state evaluation method based on multi-sensor measurement and neural network learning
DE102016118647A1 (en) * 2016-09-30 2018-04-05 Deutsche Telekom Ag Augmented reality communication system and augmented reality interaction device
US20180181592A1 (en) * 2016-12-27 2018-06-28 Adobe Systems Incorporated Multi-modal image ranking using neural networks
CN107105333A (en) * 2017-04-26 2017-08-29 电子科技大学 A kind of VR net casts exchange method and device based on Eye Tracking Technique
CN107545302A (en) * 2017-08-02 2018-01-05 北京航空航天大学 A kind of united direction of visual lines computational methods of human eye right and left eyes image
CN107656613A (en) * 2017-09-08 2018-02-02 国网山东省电力公司电力科学研究院 A kind of man-machine interactive system and its method of work based on the dynamic tracking of eye

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JENS GRUBERT et al.: "A survey of calibration methods for optical see-through head-mounted displays", IEEE Transactions on Visualization and Computer Graphics, pages 2649-2662 *
雷卓石: "Virtual and Augmented Reality Technology" (虚拟与增强现实技术), Science and Technology Innovation Herald (科技创新导报), pages 150-152 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020087919A1 (en) * 2018-10-30 2020-05-07 东北大学 Augmented reality human-computer interaction device and a control method based on gaze tracking
CN117289788A (en) * 2022-11-28 2023-12-26 清华大学 Interaction method, interaction device, electronic equipment and computer storage medium
CN116185192A (en) * 2023-02-09 2023-05-30 北京航空航天大学 Eye movement identification VR interaction method based on denoising variation encoder
CN116185192B (en) * 2023-02-09 2023-10-20 北京航空航天大学 Eye movement identification VR interaction method based on denoising variation encoder

Also Published As

Publication number Publication date
CN109240510B (en) 2023-12-26
WO2020087919A1 (en) 2020-05-07

Similar Documents

Publication Publication Date Title
CN109240510A (en) Augmented reality human-computer interaction device and control method based on Eye-controlling focus
CN109583338A (en) Driver Vision decentralized detection method based on depth integration neural network
CN110110662A (en) Driver eye movement behavioral value method, system, medium and equipment under Driving Scene
WO2020253949A1 (en) Systems and methods for determining one or more parameters of a user's eye
WO2023155533A1 (en) Image driving method and apparatus, device and medium
Wan et al. Robust and accurate pupil detection for head-mounted eye tracking
CN116431036A (en) Virtual online teaching system based on meta universe
CN113419624B (en) Eye movement interaction method and device based on head time sequence signal correction
Jaiswal et al. Smart AI based Eye Gesture Control System
KR20220067964A (en) Method for controlling an electronic device by recognizing movement in the peripheral zone of camera field-of-view (fov), and the electronic device thereof
US20220159174A1 (en) Method of controlling electronic device by recognizing movement in peripheral zone of field of view of camera, and electronic device therefor
CN114115535A (en) Eye movement tracking and identifying method and system based on Yinhua mobile operation system of Galaxy
Niu et al. Real-time localization and matching of corneal reflections for eye gaze estimation via a lightweight network
Lin et al. The method of diagonal-box checker search for measuring one's blink in eyeball tracking device
CN115698989A (en) System and method for authenticating a user of a head mounted display
Park et al. Implementation of visual attention system using bottom-up saliency map model
US20240062400A1 (en) Eye movement analysis method and system
Wang et al. An integrated neural network model for eye-tracking during human-computer interaction
Tinn Cross-domain adaptation and geometric data synthesis for near-eye to remote gaze tracking
Matusz et al. Head-mounted, wireless eyetracker for real-time gaze prediction utilizing machine-learning
Li et al. Improvement of Unconstrained Appearance-Based Gaze Tracking with LSTM
Singh et al. Application control using eye motion
CN113591562B (en) Image processing method, device, electronic equipment and computer readable storage medium
Mitchell Applications of convolutional neural networks to facial detection and recognition for augmented reality and wearable computing
US20240303889A1 (en) Passive and continuous deep learning methods and systems for removal of objects relative to a face

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant