CN107436675A - Visual interaction method, system and device - Google Patents
Visual interaction method, system and device
- Publication number
- CN107436675A (application CN201610349652.5A)
- Authority
- CN
- China
- Prior art keywords
- pupil
- picture
- screen
- visual
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04817—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention discloses a visual interaction method, system and device. The method comprises the following steps. Step 1: capture a picture of the user's eyeball, pre-process the picture, and locate the position of the pupil centre within it. Step 2: calibrate — several bright spots appear on the screen in succession; as each bright spot appears, the user gazes at it and the system photographs the eyeball at that moment. A correspondence is established between the position coordinates of each bright spot and the pupil-position picture taken while it is observed, so that the position the pupil is gazing at can be displayed on the screen in real time, realizing visual tracking. Step 3: determine whether the pupil's mapped position on the screen overlaps the position of an interactive icon on the screen, and thereby decide whether to activate that icon. The technical effects the invention can achieve are: improving the user's experience in virtual-reality interaction, raising the computer's response speed, selecting UI icons with the user's line of sight, and improving the visual quality of the interactive interface.
Description
Technical field
The invention belongs to the technical field of virtual reality, and in particular relates to a visual interaction method, system and device.
Background technology
Some virtual reality (VR) experiences in the prior art employ visual tracking technology: when the user's gaze rests on some part of the interactive interface, a marker similar to a mouse cursor is produced so that the user can interact with the device and select icons. However, because changes in the VR image generate a very large computational load, the computer's processing time becomes too long, and during visual interaction the resulting mouse-cursor image is unstable and drifts on the UI (interactive interface) screen, degrading the user experience.
Summary of the invention
The technical problem to be solved by the present invention is to provide a virtual-reality visual interaction method, system and device that reduce the computational load on the computer during visual interaction and improve response speed, and that eliminate the prior-art mouse cursor on the screen so that selection is performed directly through the user's line of sight.
To solve the above problems, the present invention adopts the following technical scheme that:
A visual interaction method, characterised by comprising the following steps:
Step 1: capture a picture of the user's eyeball, pre-process the picture, and locate the position of the pupil centre within it;
Step 2: calibrate — several bright spots appear on the screen in succession; as each spot appears, the user gazes at it and the system photographs the eyeball at that moment. A correspondence is established between the position coordinates of each spot and the pupil-position picture taken while it is observed, so that the position the pupil is gazing at can be displayed on the screen in real time, realizing visual tracking;
Step 3: determine whether the pupil's mapped position on the screen overlaps the position of an interactive icon on the screen, and thereby decide whether to activate that icon.
The icon may be activated by lighting it up or by any other cue the user can perceive, such as enlarging the icon, a text prompt, or a voice prompt.
Preferably, in the calibration stage of step 2, visual tracking is realized using a frame-difference method.
Preferably, tracking the user's vision and identifying the position the pupil is gazing at comprises the following steps:
1) filter the effective part to remove noise that interferes with pupil identification; the effective part is obtained during pre-processing by applying a frame-difference method to 9 pictures, separating the background from the part the user's line of sight is focused on, so that the background data need not be processed during pupil identification — this improves efficiency and recognition accuracy and avoids unnecessary computation;
2) apply threshold processing to the filtered frame data;
3) obtain the X- and Y-axis histogram distributions of the frame data;
4) take the peak of each histogram to obtain the user's pupil position;
5) convert the obtained position with a spherical model to obtain the region of the screen the pupil is gazing at.
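Steps 2) to 4) can be sketched as follows. The toy frame, the threshold value, and the function names are illustrative assumptions rather than parameters from the patent; a real implementation would operate on camera frames.

```python
# Sketch of threshold + X/Y histogram peak pupil localization: the pupil is
# darker than its surroundings, so mark pixels below a threshold, project the
# binary image onto each axis, and take each projection's peak.

def locate_pupil(frame, threshold=60):
    """Return (x, y) of the pupil centre estimated from histogram peaks."""
    h, w = len(frame), len(frame[0])
    binary = [[1 if frame[r][c] < threshold else 0 for c in range(w)]
              for r in range(h)]
    x_hist = [sum(binary[r][c] for r in range(h)) for c in range(w)]
    y_hist = [sum(binary[r][c] for c in range(w)) for r in range(h)]
    # The peak of each histogram gives the pupil coordinate on that axis.
    return x_hist.index(max(x_hist)), y_hist.index(max(y_hist))

# A 5x5 toy frame: a dark plus-shaped 'pupil' centred at (2, 2).
frame = [[200] * 5 for _ in range(5)]
for r, c in [(1, 2), (2, 1), (2, 2), (2, 3), (3, 2)]:
    frame[r][c] = 30
print(locate_pupil(frame))  # (2, 2)
```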
Preferably, when the region the pupil is gazing at overlaps the coordinate position of a UI interactive-interface icon on the screen, the UI interactive-interface icon in that region is lit up.
Preferably, the filtering of the effective part uses a regional minimum-value or maximum-value filter.
A visual interaction system employing the above visual interaction method.
A visual interaction device employing the above visual interaction method.
Its operating principle is as follows: an infrared camera captures pictures of the eye region at high speed and transmits each picture as it is shot; the position of the pupil in the picture is identified and, through an algorithm, mapped in real time to the gaze region on the screen, realizing visual tracking. Because high-speed capture generates a large number of pictures (up to 120 per second), a large amount of invalid data would otherwise enter the computation. The present invention pre-processes the captured pictures before visual tracking, deleting the invalid background image outside the pupil's gaze region and retaining the effective region, and only then maps the gaze region onto the screen in real time, which greatly reduces the amount of data the computer must process and improves response speed. At the same time, the invention eliminates the mouse cursor from the interactive interface: gaze control is used directly, judging whether the region the pupil is gazing at overlaps the coordinate position of a UI interactive-interface icon; if the coordinates overlap, the UI interactive-interface icon in that region is activated, completing the icon selection operation.
The technical effects the invention can achieve are: improving the user's experience in virtual-reality interaction, raising the computer's response speed, eliminating the unstable cursor, and improving the visual quality of the interactive interface.
Brief description of the drawings
In order to explain the embodiments of the present invention or the prior-art technical schemes more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the invention, and those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flow diagram of a virtual-reality visual interaction method of the present invention.
Detailed description of the embodiments
Referring to the flow diagram of the virtual-reality visual interaction method shown in Fig. 1, the present invention adopts the following technical scheme:
A visual interaction method, characterised by comprising the following steps:
Step 1: capture a picture of the user's eyeball, pre-process the picture, and locate the position of the pupil centre within it;
Step 2: calibrate — several bright spots (1 to 9 may be set) appear on the screen in succession; as each spot appears, the user gazes at it and the system photographs the eyeball at that moment. A correspondence is established between the position coordinates of each spot and the pupil-position picture taken while it is observed, so that the position the pupil is gazing at can be displayed on the screen in real time, realizing visual tracking;
Step 3: determine whether the pupil's mapped position on the screen overlaps the position of an interactive icon on the screen, and thereby decide whether to activate that icon.
The icon may be activated by lighting it up or by any other cue the user can perceive, such as enlarging the icon, a text prompt, or a voice prompt.
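The calibration of step 2 can be sketched as below. The per-axis least-squares (affine) fit is purely an assumption made for illustration — the patent only requires that a correspondence between bright-spot coordinates and pupil-position pictures be established, without specifying the mapping model.

```python
# Hedged calibration sketch: pair each bright spot's screen coordinates with
# the pupil position photographed while the user gazes at it, then fit a
# per-axis least-squares line so later pupil positions map to the screen.

def fit_axis(pupil_vals, screen_vals):
    """Least-squares fit screen = a * pupil + b for one axis."""
    n = len(pupil_vals)
    mp = sum(pupil_vals) / n
    ms = sum(screen_vals) / n
    var = sum((p - mp) ** 2 for p in pupil_vals)
    cov = sum((p - mp) * (s - ms) for p, s in zip(pupil_vals, screen_vals))
    a = cov / var
    return a, ms - a * mp

def calibrate(pupil_pts, screen_pts):
    """Return a function mapping a pupil (x, y) to a screen (x, y)."""
    ax, bx = fit_axis([p[0] for p in pupil_pts], [s[0] for s in screen_pts])
    ay, by = fit_axis([p[1] for p in pupil_pts], [s[1] for s in screen_pts])
    return lambda p: (ax * p[0] + bx, ay * p[1] + by)

# Three calibration bright spots and the pupil positions recorded for them.
gaze = calibrate([(10, 10), (20, 10), (10, 20)],
                 [(0, 0), (960, 0), (0, 540)])
print(gaze((15, 15)))  # a midway pupil position maps near (480, 270)
```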
Preferably, in the calibration stage of step 2, visual tracking is realized using a frame-difference method, which determines the unchanging background portion of the eyeball picture so that the background's influence on the algorithm can be rejected.
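A minimal illustration of the frame-difference idea follows: pixels that stay (almost) unchanged between frames are treated as static background and masked out, so pupil identification only touches the changing region. A single pairwise difference stands in for the patent's 9-picture scheme, and the threshold and names are assumptions.

```python
# Frame-difference sketch: 1 marks pixels that changed between frames
# (candidate pupil region), 0 marks static background to be discarded.

def moving_mask(prev, curr, diff_threshold=15):
    """Return a binary mask of pixels that changed between two frames."""
    h, w = len(curr), len(curr[0])
    return [[1 if abs(curr[r][c] - prev[r][c]) > diff_threshold else 0
             for c in range(w)] for r in range(h)]

prev = [[50, 50, 50], [50, 200, 50], [50, 50, 50]]
curr = [[50, 50, 50], [50, 30, 50], [50, 50, 50]]   # only the centre changed
print(moving_mask(prev, curr))  # [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
```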
Preferably, tracking the user's vision and identifying the position the pupil is gazing at comprises the following steps:
1) filter the effective part to remove noise that interferes with pupil identification; the effective part is obtained during pre-processing by applying a frame-difference method to 9 pictures, separating the background from the part the user's line of sight is focused on, so that the background data need not be processed during pupil identification — this improves efficiency and recognition accuracy and avoids unnecessary computation;
2) apply threshold processing to the filtered frame data;
3) obtain the X- and Y-axis histogram distributions of the frame data;
4) take the peak of each histogram to obtain the user's pupil position;
5) convert the obtained position with a spherical model to obtain the region of the screen the pupil is gazing at.
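The spherical-model conversion of step 5) might be sketched as follows. The eye radius, screen distance, and small-angle ray-plane intersection are all illustrative assumptions; the patent gives no explicit formula for this conversion.

```python
# Hedged sketch: the eyeball is simplified to a sphere, so the pupil's
# displacement from the calibrated centre maps to a gaze angle, which is
# then intersected with the screen plane.
import math

def pupil_to_screen(pupil_xy, centre_xy, eye_radius_px, screen_dist_px,
                    screen_centre_xy):
    """Convert a pupil position in the camera image to a screen coordinate."""
    dx = pupil_xy[0] - centre_xy[0]
    dy = pupil_xy[1] - centre_xy[1]
    # Gaze angles from the pupil offset on a sphere of the given radius.
    ax = math.asin(max(-1.0, min(1.0, dx / eye_radius_px)))
    ay = math.asin(max(-1.0, min(1.0, dy / eye_radius_px)))
    # Intersect the gaze ray with the screen plane.
    sx = screen_centre_xy[0] + screen_dist_px * math.tan(ax)
    sy = screen_centre_xy[1] + screen_dist_px * math.tan(ay)
    return sx, sy

# A pupil at the calibrated centre maps to the screen centre.
print(pupil_to_screen((0, 0), (0, 0), 100, 1000, (960, 540)))  # (960.0, 540.0)
```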
Preferably, when the region the pupil is gazing at overlaps the coordinate position of a UI interactive-interface icon on the screen, the UI interactive-interface icon in that region is lit up.
Preferably, the filtering of the effective part uses a regional minimum-value or maximum-value filter.
A visual interaction system employing the above visual interaction method.
A visual interaction device employing the above visual interaction method.
In the present embodiment, a regional minimum-value or maximum-value filtering method may be used, proceeding as follows. Assume the regional window Q has size k*k and the image I has size m*n, in pixels. The steps are as follows (taking the minimum value as the example):
Step one: single-channel conversion. If the image I is a multi-channel colour image, take the minimum value across channels to obtain the single-channel image S.
Step two: image expansion. Adjust m and n as follows:
M = m - m % k + k (2)
N = n - n % k + k (3)
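Formulas (2) and (3) pad each image dimension up to a multiple of the window size k; note that, as written, they always add one full extra block even when the dimension already divides evenly. A direct transcription:

```python
# Direct transcription of expansion formulas (2) and (3): pad the image
# dimensions (m, n) so each is a multiple of the k*k window size.

def expanded_size(m, n, k):
    return m - m % k + k, n - n % k + k

print(expanded_size(100, 64, 9))  # (108, 72): both now multiples of 9
```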
Step three: regional minimum extraction. Establish four single-channel images A, B, C and D of size m*n, all of whose pixels are initialised to 255, and operate on the pixels of A, B, C and D according to the following equations:
Step four: regional minimum comparison. For a region Q with top-left coordinate (x, y), its minimum value is found as follows:
To find the maximum instead, replace the minimum value min with the maximum max in the step titles above, initialise the pixel values to 0 in step three, and replace the take-minimum operations in formulas (1) and (4)-(8) with take-maximum operations.
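Since formulas (1) and (4)-(8) are not reproduced above, the following is only a simplified stand-in for the regional-minimum step: it takes the minimum of each non-overlapping k*k block of the single-channel image. Replacing `min` with `max` gives the maximum-filter variant described in the text.

```python
# Simplified regional-minimum sketch (the patent's four-image A/B/C/D
# construction is not reproduced): minimum of each non-overlapping k*k block.

def block_min_filter(image, k):
    """Return the minimum value of each non-overlapping k*k block."""
    h, w = len(image), len(image[0])
    return [[min(image[r + dr][c + dc]
                 for dr in range(k) for dc in range(k))
             for c in range(0, w - k + 1, k)]
            for r in range(0, h - k + 1, k)]

img = [[9, 8, 7, 6],
       [5, 4, 3, 2],
       [1, 2, 3, 4],
       [5, 6, 7, 8]]
print(block_min_filter(img, 2))  # [[4, 2], [1, 3]]
```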
The filtered data are then threshold-processed, and X- and Y-axis histogram distributions are established; the peak of each histogram gives the user's pupil position.
Finally, the eyeball is simplified to a spherical model and a coordinate conversion is performed, so that the pupil position can be converted into the region of the screen being gazed at; according to that region, the UI elements within it are lit up, realizing cursor-free gaze control.
Its operating principle is as follows: an infrared camera captures pictures of the eye region at high speed and transmits each picture as it is shot; the position of the pupil in the picture is identified and, through an algorithm, mapped in real time to the gaze region on the screen, realizing visual tracking. Because high-speed capture generates a large number of pictures (up to 120 per second), a large amount of invalid data would otherwise enter the computation. The present invention pre-processes the captured pictures before visual tracking, deleting the invalid background image outside the pupil's gaze region and retaining the effective region, and only then maps the gaze region onto the screen in real time, which greatly reduces the amount of data the computer must process and improves response speed. At the same time, the invention eliminates the mouse cursor from the interactive interface: gaze control is used directly, judging whether the region the pupil is gazing at overlaps the coordinate position of a UI interactive-interface icon; if the coordinates overlap, the UI interactive-interface icon in that region is activated, completing the icon selection operation.
The technical effects the invention can achieve are: improving the user's experience in virtual-reality interaction, raising the computer's response speed, eliminating the unstable cursor, and improving the visual quality of the interactive interface.
The foregoing is only a specific embodiment of the invention, but the scope of protection of the invention is not limited thereto; any change or substitution that can be conceived without creative effort shall be covered by the scope of protection of the invention. The scope of protection of the invention shall therefore be determined by the appended claims.
Claims (7)
- 1. A visual interaction method, characterised by comprising the following steps: step 1, capturing a picture of the user's eyeball, pre-processing the picture, and locating the position of the pupil centre within it; step 2, calibrating: several bright spots appear on the screen in succession, and as each bright spot appears the user gazes at it while the system photographs the eyeball at that moment; a correspondence is established between the position coordinates of each bright spot and the pupil-position picture taken while it is observed, so that the position the pupil is gazing at can be displayed on the screen in real time, realizing visual tracking; step 3, determining whether the pupil's mapped position on the screen overlaps the position of an interactive icon on the screen, and thereby deciding whether to activate that icon.
- 2. The visual interaction method according to claim 1, characterised in that, in the calibration stage of step 2, visual tracking is realized using a frame-difference method.
- 3. The visual interaction method according to claim 1, characterised in that tracking the user's vision and identifying the position the pupil is gazing at comprises the following steps: 1) filtering the effective part to remove noise that interferes with pupil identification, the effective part being the region that remains after the background image outside the pupil's gaze position is rejected during pre-processing; 2) applying threshold processing to the filtered frame data; 3) obtaining the X- and Y-axis histogram distributions of the frame data; 4) taking the peak of each histogram to obtain the user's pupil position; 5) converting the obtained position with a spherical model to obtain the region of the screen the pupil is gazing at.
- 4. The visual interaction method according to claim 3, characterised in that, when the region the pupil is gazing at overlaps the coordinate position of an interactive-interface icon on the screen, the interactive-interface icon in that region is lit up.
- 5. The visual interaction method according to claim 3, characterised in that the filtering of the effective part uses a regional minimum-value or maximum-value filter.
- 6. A visual interaction system employing the visual interaction method according to any one of claims 1-5.
- 7. A visual interaction device employing the visual interaction method according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610349652.5A CN107436675A (en) | 2016-05-25 | 2016-05-25 | Visual interaction method, system and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610349652.5A CN107436675A (en) | 2016-05-25 | 2016-05-25 | Visual interaction method, system and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107436675A true CN107436675A (en) | 2017-12-05 |
Family
ID=60452897
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610349652.5A Pending CN107436675A (en) | 2016-05-25 | 2016-05-25 | A kind of visual interactive method, system and equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107436675A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030156257A1 (en) * | 1999-12-30 | 2003-08-21 | Tapani Levola | Eye-gaze tracking |
JP2006204855A (en) * | 2005-01-28 | 2006-08-10 | Tokyo Institute Of Technology | Device for detecting gaze motion |
CN102662473A (en) * | 2012-04-16 | 2012-09-12 | 广东步步高电子工业有限公司 | Device and method for implementation of man-machine information interaction based on eye motion recognition |
CN103176607A (en) * | 2013-04-16 | 2013-06-26 | 重庆市科学技术研究院 | Eye-controlled mouse realization method and system |
CN104123161A (en) * | 2014-07-25 | 2014-10-29 | 西安交通大学 | Screen unlocking and application starting method through human eye watching point |
CN105425967A (en) * | 2015-12-16 | 2016-03-23 | 中国科学院西安光学精密机械研究所 | Sight tracking and human eye area-of-interest positioning system |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109343700A (en) * | 2018-08-31 | 2019-02-15 | 深圳市沃特沃德股份有限公司 | Eye-movement control calibration data acquisition method and device |
CN109634431A (en) * | 2019-01-22 | 2019-04-16 | 像航(上海)科技有限公司 | Medium-free floating projection visual tracking interaction system |
CN109634431B (en) * | 2019-01-22 | 2024-04-26 | 像航(上海)科技有限公司 | Medium-free floating projection visual tracking interaction system |
CN109917914A (en) * | 2019-03-05 | 2019-06-21 | 河海大学常州校区 | Interactive interface analysis and optimization method based on visual field position |
CN109917914B (en) * | 2019-03-05 | 2022-06-17 | 河海大学常州校区 | Interactive interface analysis and optimization method based on visual field position |
CN114356482A (en) * | 2021-12-30 | 2022-04-15 | 业成科技(成都)有限公司 | Method for interacting with human-computer interface by using sight line drop point |
CN114356482B (en) * | 2021-12-30 | 2023-12-12 | 业成科技(成都)有限公司 | Method for interaction with human-computer interface by using line-of-sight drop point |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104850228B (en) | Method for locking the eyeball gaze area based on a mobile terminal | |
CN107436675A (en) | Visual interaction method, system and device | |
CN107230187A (en) | The method and apparatus of multimedia signal processing | |
CN101576771B (en) | Scaling method for eye tracker based on nonuniform sample interpolation | |
WO2017020489A1 (en) | Virtual reality display method and system | |
DE102018103572A1 (en) | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD AND RECORDING MEDIUM | |
WO2018133593A1 (en) | Control method and device for intelligent terminal | |
CN107589850A (en) | Gesture movement direction recognition method and system | |
CN104813258A (en) | Data input device | |
US20160252966A1 (en) | Method by which eyeglass-type display device recognizes and inputs movement | |
CN108153502B (en) | Handheld augmented reality display method and device based on transparent screen | |
CN106708270A (en) | Display method and apparatus for virtual reality device, and virtual reality device | |
CN107784885A (en) | Operation training method and AR equipment based on AR equipment | |
CN104809347A (en) | Realization method for controlling highlighted region display of display | |
CN106327583A (en) | Virtual reality equipment for realizing panoramic image photographing and realization method thereof | |
CN102905136B (en) | Video encoding and decoding method and system | |
CN106598356B (en) | Method, device and system for detecting positioning point of input signal of infrared emission source | |
CN107589628A (en) | Gesture-recognition-based holographic projection device and its working method | |
CN109613982A (en) | Wear-type AR shows the display exchange method of equipment | |
CN103927014A (en) | Character input method and device | |
CN204462541U (en) | Smart glasses realizing augmented reality | |
KR101395388B1 (en) | Apparatus and method for providing augmented reality | |
CN112330753A (en) | Target detection method of augmented reality system | |
CN104469252A (en) | Facial image extraction achieving method, device and system for VTM | |
CN112445342A (en) | Display screen control method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20171205 |