CN117632330A - Interactive target layout method and system of eye control interface in virtual environment - Google Patents
- Publication number
- CN117632330A (application number CN202311320050.3A)
- Authority
- CN
- China
- Legal status: Granted (assumed; not a legal conclusion)
Classifications
- G06F9/451—Execution arrangements for user interfaces
- G06F3/013—Eye tracking input arrangements
- G06F3/04817—Interaction techniques based on graphical user interfaces [GUI] using icons
- G06F3/0482—Interaction with lists of selectable items, e.g. menus
Abstract
The invention relates to the field of man-machine interaction interface design and discloses a method and system for laying out interaction targets in an eye-controlled interface in a virtual environment. The method comprises the following steps: constructing a virtual reality eye-controlled interaction experiment platform in communication connection with a virtual reality helmet, wherein the experiment platform is provided with a virtual reality screen and the helmet is equipped with an eye-tracking module; setting a target search-trigger task on the experiment platform, designing semantic-free icons as interaction targets, and determining the experimental paradigm; recruiting a preset number of subjects, performing eye-controlled search-trigger experiments, and collecting search experiment data and trigger experiment data on the platform; and deriving a recommended layout for interaction targets in the virtual reality eye-controlled interface from the search and trigger experiment data. The invention provides interface-element layout references for designers and developers of eye-controlled interfaces.
Description
Technical Field
The invention relates to the field of man-machine interaction interface design, and in particular to a method and system for laying out interaction targets in an eye-controlled interface in a virtual environment.
Background
Virtual Reality (VR) is a technology that allows users to have immersive interactive experiences in an artificially created virtual space, typically selecting and confirming interaction targets with a handheld controller. However, this interaction approach is inefficient and inaccurate because it requires precise positioning, and it may even cause physical fatigue. Various alternative interaction methods with higher pointing speed or accuracy have therefore emerged, such as gaze interaction, gesture interaction, and eye-hand collaborative interaction. Moreover, improvements in eye-tracking precision and the appearance of head-mounted VR displays equipped with eye-tracking technology (such as the HTC VIVE Pro Eye) have promoted the development of eye-controlled interaction in virtual reality. Eye-controlled interaction offers a natural, rapid interaction mode that reduces physical effort.
Because virtual reality has an essentially unlimited spatial range, an eye-controlled interaction interface can contain more layout elements, and the layout of interaction targets can vary. Numerous studies have demonstrated that the visual attributes of an eye-controlled interface, such as the size, position, distance, and shape of an interaction target, affect eye-controlled interaction performance. However, most interface coding optimization has focused on two-dimensional interfaces; studies of how specific coding factors influence visual search and trigger interaction in three-dimensional space remain limited, especially layout studies of interaction elements in virtual reality eye-controlled interfaces.
Disclosure of Invention
The technical purpose: to overcome the defects of the prior art, the invention provides a method and system for laying out interaction targets in an eye-controlled interface in a virtual environment. By constructing a virtual reality eye-controlled interaction experiment platform, designing search-trigger experiments based on gaze and smooth tracking, and exploring layout indexes for the number of interaction targets and their presentation positions, the invention obtains a layout optimization method for the interaction targets of the eye-controlled interface, improves the triggering efficiency of the virtual reality eye-controlled interface, and provides interface-element layout references for eye-controlled interface designers and developers.
The technical scheme is as follows: in order to achieve the technical purpose, the invention adopts the following technical scheme:
a layout method of interactive targets in a virtual environment eye control interface comprises the following steps:
(1) Constructing a virtual reality eye-control interaction experimental platform which is in communication connection with a virtual reality helmet, wherein the virtual reality eye-control interaction experimental platform is provided with an eye-control interaction interface with a virtual reality screen, and the virtual reality helmet is provided with an eye-movement tracking module;
(2) Setting a target search trigger task on the virtual reality eye control interaction experiment platform;
(3) Based on the virtual reality eye-controlled interaction experiment platform and the target search-trigger task, recruiting a preset number of subjects, performing eye-controlled search-trigger experiments, and collecting search experiment data and trigger experiment data;
(4) Obtaining a layout recommendation for the interaction targets in the virtual reality eye-controlled interface according to the search experiment data and trigger experiment data obtained in step (3), the recommendation covering both the presentation area and the number of targets, and determining the layout of the interaction targets accordingly.
Preferably, the step (1) specifically includes the following steps:
setting the background color of a virtual reality screen on the eye-control interaction interface;
setting a plurality of interaction targets positioned within the virtual reality screen by coordinate mapping;
setting the eye-controlled interaction operation types, including a gaze interaction mode and a smooth tracking interaction mode, wherein the gaze interaction mode triggers an interaction behavior by monitoring the coincidence between the subject's gaze-point position and the interaction target's position area within 1000 ms, and the smooth tracking interaction mode triggers an interaction behavior by judging, via the Pearson product-moment correlation coefficient, the coincidence between the gaze-point trajectory and the interaction target's position within 900 ms;
optimizing the triggering precision of the eye-controlled interaction operation and setting an adjustable accuracy threshold for continuous position detection.
Preferably, the gaze interaction mode comprises the following steps:
A1, acquiring the gaze-point coordinates and detecting the coincidence between the gaze-point region and the icon region;
A2, judging whether the coincidence exceeds 90%; if not, returning to step A1, and if so, proceeding to step A3;
A3, judging whether the triggered area is the target icon; if so, the semantic-free icon in that area turns red and the trigger time is recorded; if not, it turns green and the trigger time is recorded.
The smooth tracking interaction mode comprises the following steps:
B1, acquiring the gaze-point coordinates and designing a sliding window of 25 samples;
B2, acquiring a new gaze point every 0.04 s and updating the sliding window with it;
B3, judging whether the Pearson product-moment correlation coefficient between the 25 gaze points in the sliding window and the icon coordinates exceeds 0.9; if not, returning to step B2, and if so, proceeding to step B4;
B4, judging whether the triggered area is the target icon; if so, the semantic-free icon in that area turns red and the trigger time is recorded; if not, it turns green and the trigger time is recorded.
Preferably, the step (2) specifically includes the following steps:
collecting linear icons, then rotating them and adding and redrawing lines to produce semantic-free icons with different central patterns, the semantic-free icons serving as the interaction targets and also as the target buttons triggered during eye-controlled interaction;
setting a search-trigger interface with a ring menu on the virtual reality screen, and setting the number of interaction targets and their positions on the ring menu;
setting the experimental paradigm, namely: in each trial, after finding the target button, the subject triggers it within a preset time period using one of the gaze and smooth tracking eye-controlled interaction modes, chosen at random; on successful and failed triggers the corresponding target button is shown in different colors, and the experiment platform records the subject's trigger time, trigger accuracy, and pupil diameter data.
Preferably, before the formal experiment in step (3), each subject goes through subject recruitment, a pre-experiment, guided learning, and practice trials; the formal experiment comprises the following steps:
before the experiment, the subject wears the virtual reality helmet, and each subject's line of sight is calibrated with the nine-point eye-control calibration program of the HTC VIVE Pro Eye; after calibration, the subject enters the experiment prompt interface;
each subject performs a preset number of eye-controlled search-trigger trials;
after the preset number of trials is completed, a completion prompt is shown on the virtual reality screen;
each eye control search triggering experiment comprises the following procedures:
displaying a black fixation cross on the virtual reality screen for a first duration;
presenting a target graph to be identified on the virtual reality screen for a second duration;
a blank screen is presented on the virtual reality screen for a third duration;
the virtual reality screen is provided with a preset search trigger interface with a ring menu, and a timer is started;
the subject finds the target button and triggers it through the eye-controlled interaction operation; if the target graphic is triggered, the trigger succeeds, the corresponding target button turns red, the timer stops, and the experiment platform records the trigger time and proceeds to the next trial; if a non-target graphic is triggered, the corresponding target button turns green.
Preferably, the subjects have normal or corrected-to-normal vision, with a minimum visual acuity of 20/40 on a Snellen chart, and each subject performs more than 40 eye-controlled trigger trials.
Preferably, the step (4) specifically includes the following steps:
search triggering experimental data processing: the experiment platform collects the triggering time, the triggering accuracy and pupil diameter data, wherein the triggering time in the experiment is acquired by a timer, the time interval between the moment when a searching interface appears and the moment when a tested person triggers a target button is indicated, and the average triggering time of each tested person is calculated; removing extreme values, namely a value deviating from the average value by +/-0.5 standard deviation, then carrying out mixed method analysis on the triggering time, the triggering accuracy and pupil diameter data, and carrying out main effect analysis and interaction analysis under each index;
obtaining layout recommendation of interaction targets: and obtaining the dominant values of the interaction targets on the number and the presentation positions by setting the triggering success rate and the screening conditions of the triggering time.
Preferably, a user-experience expert group is established, comprising 4 interaction designers with more than 5 years of experience and 2 users with more than 3 years of experience using VR equipment; a minimum experience threshold is set for the virtual reality eye-controlled trigger success rate, namely a success rate above 90%, with shorter trigger times preferred; using these as screening conditions, the dominant values of the interaction targets in number and presentation position are obtained.
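The screening rule above — keep only conditions whose trigger success rate exceeds 90%, then prefer shorter trigger times — can be sketched as follows. This is an illustrative Python sketch with invented condition data; the patent's platform is implemented in C#/Unity, and all names here are hypothetical.

```python
# Hypothetical screening of candidate layout conditions, following the
# patent's rule: success rate above 90%, then shortest mean trigger time.
# The condition data below are invented for illustration only.

def recommend_layout(conditions, min_success=0.90):
    """Return the conditions passing the success threshold, ordered
    from shortest to longest mean trigger time (best first)."""
    passed = [c for c in conditions if c["success_rate"] > min_success]
    return sorted(passed, key=lambda c: c["trigger_time_ms"])

conditions = [
    {"targets": 4, "fov_deg": 65, "success_rate": 0.96, "trigger_time_ms": 1850},
    {"targets": 8, "fov_deg": 80, "success_rate": 0.84, "trigger_time_ms": 2400},
    {"targets": 5, "fov_deg": 65, "success_rate": 0.93, "trigger_time_ms": 2010},
]

ranked = recommend_layout(conditions)
print([(c["targets"], c["fov_deg"]) for c in ranked])
```

With the invented data, the 8-target condition is excluded by the success threshold and the remaining conditions are ranked by trigger time.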
A layout system of interactive targets in a virtual environment eye-controlled interface, comprising: the system comprises a virtual reality eye-control interaction experiment platform and a virtual reality helmet, wherein the virtual reality eye-control interaction experiment platform is in communication connection with the virtual reality helmet, and the virtual reality helmet is provided with an eye-movement tracking module;
The virtual reality eye-controlled interaction experiment platform is provided with an eye-controlled interaction interface displayed on the virtual reality screen, and further comprises:
the parameter setting module is used for setting a target search trigger task;
the experiment execution module is used for executing an eye-controlled search trigger experiment based on the target search trigger task and collecting search experiment data and trigger experiment data;
the data analysis module is used for analyzing the search experiment data and trigger experiment data obtained by the experiment execution module to produce the layout recommendation for the interaction targets in the virtual reality eye-controlled interface, the recommendation covering both the presentation area and the number of targets, and determining the layout of the interaction targets.
The beneficial effects are that: compared with the prior art, the invention has the following beneficial effects:
the invention provides a layout method of interactive targets in an eye-controlled interface of a virtual environment, which can realize an eye-controlled interactive system for selecting annular menu targets by using staring and smooth tracking, uses the system to develop a user experiment, designs a search trigger experiment based on staring and smooth tracking, explores layout indexes of the number and the presentation positions of the interactive targets, and compares two eye-controlled interactive modes to provide a layout optimization method of the interactive targets of the eye-controlled interface, thereby improving the triggering efficiency of the virtual reality eye-controlled interactive interface.
Drawings
FIG. 1 is a schematic diagram of a virtual reality eye-controlled interaction system of the present invention;
FIG. 2 is a flow chart of a gaze interaction in the present invention;
FIG. 3 is a flow chart of a smooth trace interaction in the present invention;
FIG. 4 is a schematic diagram of an eye-controlled search triggering experiment in the method of the present invention;
FIG. 5 is a schematic diagram of semantic-free icons of the present invention;
FIG. 6 is a schematic diagram of the number of interaction targets and the position of the presentation field of view of the present invention;
FIG. 7 is a flow chart of an eye-controlled search triggering experiment in the method of the present invention;
FIG. 8 is a schematic diagram of the results of a trigger time experiment of the present invention;
FIG. 9 is a schematic diagram of the trigger accuracy test results of the present invention;
fig. 10 is a schematic representation of the results of the pupil diameter experiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
Example 1
The invention provides a layout method of interactive targets in a virtual environment eye control interface, which comprises the following steps:
(1) Constructing a virtual reality eye-controlled interaction experiment platform, realizing custom layout of eye-controlled interaction targets and eye-controlled interaction operation, and adjusting platform parameters to achieve efficient target triggering.
(2) Setting target search-trigger tasks for the eye-controlled interface, designing semantic-free icons as interaction targets, and setting interaction target layouts under different conditions according to task requirements, including target positions, arrangement, and number of targets; determining the interactive feedback mode and the data the platform collects under the task (trigger time, trigger accuracy, and pupil diameter data).
(3) Carrying out the target search-trigger tasks on the experiment platform from step (1), specifically through the eye-controlled interaction experiment flow of subject recruitment, pre-experiment, guided learning, practice experiment, and formal experiment, and collecting the experiment platform data.
(4) Determining the layout recommendation (presentation-area recommendation and quantity recommendation) for the interaction targets in the virtual reality eye-controlled interface according to the search and trigger experiment data obtained in step (3).
The detailed description is as follows:
(1) Eye control interaction experimental platform for developing virtual reality
(11) The experiment platform is developed with C# and the Unity engine and runs on an HTC VIVE Pro Eye (a virtual reality helmet equipped with a Tobii eye-tracking module; sampling rate 90 Hz, tracking field of view 110°). It is connected to a desktop computer (Intel Core i7-10800F, NVIDIA GeForce RTX 2060, 16 GB), as shown in FIG. 1. In virtual reality, the experiment background is light gray (RGB = 164, 164, 164).
(12) The interaction target is positioned in the Unity environment by mapping the coordinate point of its center position, implemented through C# programming.
The experiment involves two eye-controlled interaction modes. One is gaze interaction, in which the program triggers an interaction behavior by monitoring the coincidence between the gaze-point position and the interaction target's position area within 1000 ms. The other is smooth tracking interaction, in which an interaction behavior is triggered by judging, via the Pearson product-moment correlation coefficient, the coincidence between the gaze-point trajectory and the interaction target's position within 900 ms. The trigger functions are implemented in C#.
(13) Trigger accuracy optimization for gaze interaction and smooth tracking interaction. Gaze triggering relies on detecting the overlap between the gaze coordinates and the target area (the icon area). The gaze coordinates are obtained from the eye-tracking module in the VR helmet, and the center coordinates of the target area are data preset by the experiment platform. The distance between the gaze coordinates and the target center is calculated; if it is less than the target radius, the target is triggered. Given the small target size, acquiring eye-tracking data requires higher accuracy, so the effective detection area is set to 1.3 times the target size (areas A and B in FIG. 4(b)) to ensure more reliable triggering. Meanwhile, the adjustable accuracy threshold for continuous position detection is set to 0.9, which avoids interference from blinks and saccades with trigger accuracy.
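The gaze trigger just described — an effective detection area 1.3 times the target radius, combined with a 0.9 accuracy threshold over consecutive position samples — can be sketched as follows. The patent's platform is written in C#/Unity; this Python version is purely illustrative, and all function names are assumptions.

```python
# Illustrative sketch of the gaze trigger: a hit test against an
# enlarged (1.3x) detection area, and a dwell check requiring at least
# 90% of the window's samples to fall inside that area.
import math

def gaze_hit(gaze_xy, target_xy, target_radius, scale=1.3):
    """True if the gaze point lies inside the enlarged detection area."""
    dx = gaze_xy[0] - target_xy[0]
    dy = gaze_xy[1] - target_xy[1]
    return math.hypot(dx, dy) < target_radius * scale

def dwell_triggered(samples, target_xy, target_radius, accuracy=0.9):
    """Trigger when the hit ratio over the dwell window reaches the
    accuracy threshold, tolerating brief blinks or saccades."""
    hits = sum(gaze_hit(s, target_xy, target_radius) for s in samples)
    return hits / len(samples) >= accuracy
```

The accuracy threshold, rather than requiring every sample to hit, is what makes the trigger robust to the blink and saccade interference mentioned above.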
(2) Experimental material making and eye control interaction task design
(21) And making a semantic-free icon.
To eliminate the influence of semantics on the task, the experimental stimuli use icons without semantic content. Linear Korean characters with similar outlines are collected, then rotated and modified by adding and redrawing lines to produce semantic-free icons. In the invention, interaction targets are presented as semantic-free icons; partial examples are shown in FIG. 5. The experiment simulates a ring menu in virtual reality in which buttons are evenly distributed in a circular array. A pie-style ring menu keeps the distance from every button to the screen center consistent, and previous studies have demonstrated that pie menus are perceived efficiently in virtual reality.
(22) The number of interaction targets is set by determining positions for 3, 4, 5, 6, 7, and 8 semantic-free icons, as shown in FIG. 6(a); the range of target presentation positions is set, defined as the 65° and 80° field-angle regions, as shown in FIG. 6(b). The menu layout is set to a ring menu.
(23) The experiment uses a search-trigger task with a mixed design: 2 (eye-controlled input technique: gaze vs. smooth tracking) × 2 (field angle: 65° vs. 80°) as between-group factors, and 6 (number of targets: 3, 4, 5, 6, 7, or 8 target icons) as a within-subjects factor. Participants must find the target button in the ring menu quickly and accurately. After finding it, the participant dwells on the target for 1000 ms using one of the two eye-controlled interaction modes (gaze or smooth tracking), assigned at random; if the correct button is activated it turns red, otherwise it turns green. The experiment platform records trigger time, trigger accuracy, and pupil diameter data. Trials are divided into four blocks in ABBA order, where A and B each denote one of the two interaction modes; each block contains 50 trials with a 3-minute rest between blocks, and the whole experiment lasts approximately 40 minutes per participant.
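The 2 × 2 × 6 mixed design and the ABBA block order described above can be sketched as follows. This is an illustrative Python sketch, not part of the patent's C#/Unity platform; variable names are hypothetical.

```python
# Illustrative enumeration of the experimental conditions and the ABBA
# block counterbalancing described in the search-trigger task design.
from itertools import product

techniques = ["gaze", "smooth_tracking"]   # eye-controlled input technique
field_angles = [65, 80]                    # field angle, in degrees
target_counts = [3, 4, 5, 6, 7, 8]         # icons in the ring menu

# Every field-angle x target-count combination:
conditions = list(product(field_angles, target_counts))

def abba_blocks(a, b):
    """Four experimental blocks in ABBA order to balance practice effects."""
    return [a, b, b, a]

blocks = abba_blocks(*techniques)
print(len(conditions), blocks)
```

ABBA ordering places each technique once in the first half and once in the second half of the session, so practice and fatigue effects fall roughly evenly on both.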
The gaze interaction mode shown in FIG. 2 comprises the following steps:
A1, acquiring the gaze-point coordinates and detecting the coincidence between the gaze-point region and the icon region;
A2, judging whether the coincidence exceeds 90%; if not, returning to step A1, and if so, proceeding to step A3;
A3, judging whether the triggered area is the target icon; if so, the semantic-free icon in that area turns red and the trigger time is recorded; if not, it turns green and the trigger time is recorded.
The smooth tracking interaction mode shown in FIG. 3 comprises the following steps:
B1, acquiring the gaze-point coordinates and designing a sliding window of 25 samples;
B2, acquiring a new gaze point every 0.04 s and updating the sliding window with it;
B3, judging whether the Pearson product-moment correlation coefficient between the 25 gaze points in the sliding window and the icon coordinates exceeds 0.9; if not, returning to step B2, and if so, proceeding to step B4;
B4, judging whether the triggered area is the target icon; if so, the semantic-free icon in that area turns red and the trigger time is recorded; if not, it turns green and the trigger time is recorded.
The sliding window defines the comparison range, and other window sizes can be designed, for example a window of 40 samples: if 100 gaze points are acquired in one second, only the most recent 40 (the 60th through 100th) are compared at a time; at 1.5 seconds, with 150 gaze points collected, only the last 40 (the 110th through 150th) are compared.
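The sliding-window correlation trigger for smooth tracking can be sketched as follows. The actual platform is implemented in C#/Unity; this Python version is an illustrative sketch that shows one axis only (a real detector would test both x and y), and all names are assumptions.

```python
# Illustrative smooth tracking trigger: a 25-sample sliding window of
# gaze positions, updated every 0.04 s, triggers once the Pearson
# product-moment correlation with the icon trajectory exceeds 0.9.
from collections import deque
import math

def pearson(xs, ys):
    """Pearson product-moment correlation coefficient of two sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

class PursuitDetector:
    def __init__(self, window=25, threshold=0.9):
        self.gaze = deque(maxlen=window)   # most recent gaze samples
        self.icon = deque(maxlen=window)   # matching icon positions
        self.threshold = threshold

    def update(self, gaze_x, icon_x):
        """Add one sample pair; return True once the window is full and
        the two trajectories correlate above the threshold."""
        self.gaze.append(gaze_x)
        self.icon.append(icon_x)
        if len(self.gaze) < self.gaze.maxlen:
            return False
        return pearson(list(self.gaze), list(self.icon)) > self.threshold
```

Because a `deque` with `maxlen` discards the oldest sample automatically, the comparison always covers exactly the most recent window, matching the 40-of-100 and 40-of-150 examples above.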
(3) Target search-trigger tasks are carried out on the experiment platform from step (1).
This specifically comprises the eye-controlled interaction experiment flow of pre-experiment, guided learning, practice experiment, and formal experiment, with the data collected by the experiment platform gathered throughout.
Subject recruitment. A total of 40 college students were recruited, 20 men and 20 women (mean age = 23.57 years, standard deviation = 2.73 years), through campus advertising. All participants had normal or corrected-to-normal vision (minimum acuity of 20/40 on a Snellen chart). In addition, participants were required to be unfamiliar with Korean so as to minimize the influence of symbol semantics. The experiment was performed under normal lighting conditions using a 40-watt fluorescent lamp. The study protocol was approved by the university ethics committee, and all participants signed a consent form before taking part.
Guided learning and practice trials help subjects become familiar with the experimental materials and understand the procedure. Whether guided learning is performed depends on the specific task and materials: for experiments with a complex flow, where material samples must be explained and guided learning does not affect the user's perceptual behavior, guided learning is necessary so that subjects quickly understand the experiment and avoid mistakes during it, in particular through learning of the experimental materials and operations.
Formal experiments
(31) Calibration and prompting stage. Each subject's line of sight was calibrated with the nine-point eye-control calibration program of the HTC VIVE Pro Eye. After calibration, an experiment prompt interface is shown to signal the start of the experiment and to focus attention.
(32) Task execution stage. The subject performs the eye-controlled search-trigger experiment as follows: a black fixation cross ("+") is first displayed for 500 milliseconds, then the interaction target icon is displayed for 1000 milliseconds. After a further 1000 ms blank screen, a search-trigger interface with a ring menu appears and a timer starts. The participant's task is to find the target button and trigger it: if the target icon is triggered, the trigger succeeds and the icon turns red; if a non-target icon is triggered, the icon turns green. Once the target is triggered, the timer stops. The experimental program records the trigger time and then starts the next trial. The experimental procedure is shown in figure 7. During testing, the subject is required to keep the body position unchanged, for example sitting stably on a stool; the head may be rotated.
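The trial timeline above (500 ms fixation cross, 1000 ms target cue, 1000 ms blank screen, then an open-ended search phase) can be captured in a small data structure. This is an illustrative sketch only; the names `TrialPhase` and `total_lead_in_ms` are assumptions, not the patent's implementation.

```python
from dataclasses import dataclass

@dataclass
class TrialPhase:
    name: str
    duration_ms: int  # 0 marks an open-ended phase that runs until a trigger

# One search-trigger trial, with the durations reported above.
TRIAL_SEQUENCE = [
    TrialPhase("fixation_cross", 500),
    TrialPhase("target_icon_cue", 1000),
    TrialPhase("blank_screen", 1000),
    TrialPhase("search_trigger_menu", 0),  # timer runs until the target is triggered
]

def total_lead_in_ms(seq):
    """Fixed time elapsed before the search interface appears and the timer starts."""
    return sum(p.duration_ms for p in seq if p.name != "search_trigger_menu")
```

Summing the fixed phases gives 2500 ms of lead-in per trial before the measured search-trigger interval begins.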
(4) Experimental results
(41) Search-trigger experimental data processing. The experiment collects trigger time, trigger accuracy and pupil diameter data.
The trigger time in the experiment is obtained by a program timer and is the interval between the appearance of the search interface and the instant the participant triggers the target button. The average trigger time for each participant was then calculated using the statistical analysis software SPSS 23.0 (IBM Corporation, New York, USA). To ensure data reliability, extreme values in the search time, i.e. values deviating from the mean by more than ±0.5 standard deviations, were removed. Mixed-design analyses were then performed on the three sets of data, with main-effect analysis and interaction analysis under each index. In statistical analysis, a main effect is the effect of one independent variable on the dependent variable while the other independent variables are ignored; main-effect analysis determines the individual effect of each independent variable on the dependent variable. An interaction is the joint action of two or more independent variables, such that the variation of the dependent variable exceeds the sum of the effects of the individual independent variables. Together, these two analyses give a better understanding of the overall influence of the design factors on the experimental results.
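The extreme-value screening described above (dropping trigger times more than 0.5 standard deviations from the mean — an unusually tight criterion, reproduced here exactly as stated) might be sketched as follows; the function name `remove_extremes` is an assumption.

```python
import statistics

def remove_extremes(times, k=0.5):
    """Drop trigger times more than k standard deviations from the mean,
    mirroring the mean ± 0.5 SD screening described above."""
    mean = statistics.mean(times)
    sd = statistics.pstdev(times)  # population SD; the patent does not specify which
    return [t for t in times if abs(t - mean) <= k * sd]
```

With an outlier present, the screening removes both the outlier and any value pulled outside the ±0.5 SD band by the inflated mean, e.g. `remove_extremes([400, 420, 410, 430, 1000])` keeps only `[420, 430]`.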
Trigger time results. The effects of field angle (F(1,442)=8.441, p<0.05, η²=0.019) and target number (F(5,2210)=30.219, p<0.05, η²=0.064) on trigger time are significant. However, the eye-controlled input type (F(4,442)=2.068, p>0.05, η²=0.024) has no statistically significant effect on trigger time. For interaction effects on trigger time, eye-controlled input type × field angle (F(1,442)=10.849, p<0.05, η²=0.024) and eye-controlled input type × target number (F(5,2210)=2.988, p<0.05, η²=0.007) are significant. The results are shown in FIG. 8.
Trigger accuracy results. Target number (F(5,2900)=23.979, p<0.05, η²=0.040) has a significant effect on accuracy, as does the eye-controlled input type (F(1,580)=10.105, p<0.05, η²=0.017). In contrast, changing the field angle (F(1,580)=0.344, p>0.05, η²=0.000) has no significant effect on trigger accuracy. The results are shown in FIG. 9.
Pupil diameter results. Changes in pupil diameter can be used to assess cognitive load: a larger pupil is generally associated with higher cognitive load, because the pupil may dilate to enhance visual input when a cognitive task is more difficult or complex. Pupil data can therefore measure the complexity of the cognitive task and the cognitive load of the subject. The effects of eye-controlled input type (F(1,449)=23.430, p<0.05, η²=0.050) and field angle (F(1,449)=5.777, p<0.05, η²=0.013) on pupil diameter are significant. However, target number (F(5,2245)=0.963, p>0.05, η²=0.002) has no statistically significant effect on pupil diameter. For interaction effects, target number × eye-controlled input type (F(5,2245)=3.261, p<0.05, η²=0.007) and eye-controlled input type × field angle (F(1,449)=4.293, p<0.05, η²=0.009) are significant. The results are shown in FIG. 10 (d, e).
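Assuming the reported η² values denote partial eta squared (the patent does not say explicitly), they can be recovered directly from the F statistics and their degrees of freedom:

```python
def partial_eta_squared(F, df_effect, df_error):
    """Partial eta squared from an F statistic and its degrees of freedom:
    eta^2_p = F * df1 / (F * df1 + df2)."""
    return F * df_effect / (F * df_effect + df_error)

# Cross-checking reported effects on trigger time:
#   field angle,   F(1, 442)  = 8.441  -> eta^2 ~ 0.019
#   target number, F(5, 2210) = 30.219 -> eta^2 ~ 0.064
```

Most of the values reported above are consistent with this formula to three decimal places, which supports the partial-eta-squared reading.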
(42) Obtaining the layout recommendation of the interaction targets. A user experience expert group (consisting of four interaction designers with more than 5 years of experience and two users with more than 3 years of VR equipment use) established a minimum experience threshold for the virtual reality eye-control trigger success rate, namely a success rate above 90%; with shorter trigger time being better, trigger time was used as the screening condition to obtain the dominant values of the interaction targets in number and presentation position. The recommended values show that, under the gaze-interaction trigger mode, the recommended number of interaction targets is 7 or fewer and the presentation range of the interaction targets is within a 65° field angle; under the smooth-tracking trigger mode, the recommended number of interaction targets is 6 or fewer, and the presentation range is indistinguishable across all field angles tested. Of the two modes, gaze interaction is the more recommended, because under the same conditions the gaze-based interaction method performs better in both trigger accuracy and trigger time.
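The screening rule described above — first require a success rate above 90%, then prefer shorter trigger times — might look like the following sketch. The `results` data shape and the function name `recommend_layouts` are hypothetical, since the patent reports only the resulting recommendations.

```python
def recommend_layouts(results, min_success=0.90):
    """Filter candidate layouts by the expert group's minimum success-rate
    threshold, then rank survivors by mean trigger time (shorter is better).

    `results` maps (n_targets, field_angle_deg) -> (success_rate, mean_trigger_s).
    Returns the eligible layouts, best first.
    """
    eligible = {k: v for k, v in results.items() if v[0] > min_success}
    return sorted(eligible, key=lambda k: eligible[k][1])
```

For example, a candidate with a 0.85 success rate is excluded outright even if its trigger time is the shortest, because the success-rate threshold is applied before the time ranking.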
The following conclusions are verified by the objective results of the invention:
First, in the virtual reality eye-controlled interaction process, the number of interaction targets has a significant influence on eye-control-based search-trigger tasks, and constraining the range of target numbers can improve virtual reality eye-controlled interaction efficiency.
Second, the field angle over which the interaction targets are presented has a significant impact on the gaze-triggered eye-control mode, and little impact on the smooth-tracking-based eye-control mode. The trigger time and accuracy of gaze interaction can be improved by optimizing the field angle range of target presentation.
Third, under the same conditions, gaze-based eye control is superior to smooth-tracking-based eye control in trigger effect.
Fourth, regarding pupil diameter during the triggering process, the number of interaction targets and the field angle range of target presentation have little influence.
The invention verifies the influence of the number of interaction targets and the presented field-angle range of an eye-control interface in a virtual reality environment on the triggering effect, and, based on this evaluation method, proposes an optimized layout of interaction targets in the eye-control interface.
Example two
This embodiment provides a layout system of interactive targets in a virtual-environment eye-control interface, comprising: a virtual reality eye-controlled interaction experiment platform and a virtual reality helmet, wherein the virtual reality eye-controlled interaction experiment platform is in communication connection with the virtual reality helmet, and the virtual reality helmet is provided with an eye-movement tracking module.
The virtual reality eye-controlled interaction experiment platform is provided with an eye-controlled interaction interface displaying a virtual reality screen, and further comprises:
the parameter setting module is used for setting a target search trigger task, designing a semantic-free icon as an interaction target, setting a layout mode and an interaction feedback mode of the interaction target, and determining an experimental paradigm;
the experiment execution module is used for executing an eye control search trigger experiment based on the experiment parameters set by the parameter setting module, and recording search experiment data and trigger experiment data;
the data analysis module is used for obtaining the layout recommendation mode of the interaction targets in the virtual reality eye-controlled interaction interface according to the obtained search experiment data and the trigger experiment data, and the layout recommendation mode comprises presentation area recommendation and quantity recommendation.
The foregoing description covers merely preferred embodiments of the present invention. It should be noted that the scope of the present invention is not limited thereto; any substitution or change that a person skilled in the art could make within the technical solution and inventive concept disclosed herein falls within the scope of the present invention.
Claims (9)
1. A layout method of interactive targets in a virtual environment eye control interface is characterized by comprising the following steps:
(1) Constructing a virtual reality eye-control interaction experimental platform which is in communication connection with a virtual reality helmet, wherein the virtual reality eye-control interaction experimental platform is provided with an eye-control interaction interface with a virtual reality screen, and the virtual reality helmet is provided with an eye-movement tracking module;
(2) Setting a target search trigger task on the virtual reality eye control interaction experiment platform;
(3) Based on the virtual reality eye-controlled interaction experiment platform and the target search trigger task, recruiting a preset number of testees, executing an eye-controlled search trigger experiment, and collecting search experiment data and trigger experiment data;
(4) And (3) obtaining a layout recommendation mode of the interaction targets in the virtual reality eye-controlled interaction interface according to the search experimental data and the trigger experimental data obtained in the step (3), wherein the layout recommendation mode comprises a presentation area recommendation mode and a quantity recommendation mode, and determining the layout mode of the interaction targets.
2. The method for layout of interactive objects in a virtual environment eye-controlled interface according to claim 1, wherein: the step (1) specifically comprises the following steps:
setting the background color of a virtual reality screen on the eye-control interaction interface;
setting a plurality of interaction targets to be positioned in a virtual reality screen in a mapping mode;
setting eye-controlled interaction operation types, including a gaze interaction mode and a smooth-tracking interaction mode, wherein the gaze interaction mode triggers the interaction behavior by monitoring, within 1000 ms, the coincidence degree between the subject's gaze-point position and the interaction target's position area; and the smooth-tracking interaction mode triggers the interaction behavior by judging, within 900 ms via the Pearson product-moment correlation coefficient, the trajectory coincidence degree between the gaze point and the interaction target position;
the triggering precision of the eye-controlled interaction operation is optimized, and an adjustable accuracy threshold of continuous position detection is set.
3. The method for layout of interactive objects in a virtual environment eye-controlled interface according to claim 2, wherein: the gaze interaction means comprises the steps of:
a1, acquiring a gaze point coordinate, and detecting the coincidence ratio of a gaze point region and an icon region;
a2, judging whether the overlap ratio is more than 90%, if not, returning to the step A1, and if so, entering the step A3;
a3, judging whether the target area is a target icon, if so, turning the semantic-free icon of the target area to red, and recording the triggering time, and if not, turning the semantic-free icon of the target area to green, and recording the triggering time;
the smooth tracking interaction mode comprises the following steps:
b1, acquiring a fixation point coordinate, and designing a 25-time sliding window;
b2, obtaining the fixation point at intervals of 0.04s each time, and updating the sliding window by using the new fixation point;
b3, judging whether the pearson moment correlation coefficient of the 25 fixation points in the sliding window and the icon coordinates is larger than 0.9, if not, returning to the step B2, and if so, entering the step B4;
and B4, judging whether the target area is a target icon, if so, turning the semantic-free icon of the target area to red, and recording the trigger time, and if not, turning the semantic-free icon of the target area to green, and recording the trigger time.
4. The method for layout of interactive objects in a virtual environment eye-controlled interface according to claim 1, wherein: the step (2) specifically comprises the following steps:
collecting linear icons and rotating, adding and redrawing lines to produce semantic-free icons with different central patterns, the semantic-free icons serving as the interaction targets and simultaneously as the target buttons triggered during eye-controlled interaction operation;
setting a search trigger interface with an annular menu in the virtual reality screen, and setting the occurrence number of interaction targets and the occurrence positions on the annular menu;
setting an experimental paradigm, namely: in the experiment, after finding the target button, the subject triggers the selected target button for a preset duration through a randomly assigned eye-control interaction mode, either gaze or smooth tracking; on trigger success and on trigger failure the corresponding target buttons are displayed in different colors, and the experiment platform records the subject's trigger time, trigger accuracy and pupil diameter data.
5. The method for layout of interactive objects in a virtual environment eye-controlled interface according to claim 1, wherein: the subject in step (3) goes through subject recruitment, pre-experiment, pilot learning and practice experiment before the formal experiment, and the formal experiment comprises the following steps:
before an experiment, a tested person wears a virtual reality helmet, line of sight calibration is carried out on each tested person through a nine-point Eye control calibration program of HTC VIVE Pro Eye, and after calibration, the tested person enters an experiment prompt interface;
each tested person executes eye control search triggering experiments with preset times;
after the user completes the experiment for the preset times, prompting the completion of the experiment on the virtual reality screen;
each eye control search triggering experiment comprises the following procedures:
displaying a black fixed cross on the virtual reality screen for a first duration;
presenting a target graph to be identified on the virtual reality screen for a second duration;
a blank screen is presented on the virtual reality screen for a third duration;
the virtual reality screen is provided with a preset search trigger interface with a ring menu, and a timer is started;
the subject finds the target button and triggers it through eye-controlled interaction operation; if the target graphic is triggered, the trigger succeeds, the corresponding target button turns red, the timer stops, and the experiment platform records the trigger time and proceeds to the next trial; if a non-target graphic is triggered, the corresponding target button turns green.
6. The method for layout of interactive objects in a virtual environment eye-controlled interface according to claim 1, wherein: the subjects have normal or corrected-to-normal vision, with a minimum acuity of 20/40 on a Snellen test, and each subject performs more than 40 eye-controlled trigger experiments.
7. The method for layout of interactive objects in a virtual environment eye-controlled interface according to claim 1, wherein: the step (4) specifically comprises the following steps:
search-trigger experimental data processing: the experiment platform collects trigger time, trigger accuracy and pupil diameter data, wherein the trigger time is acquired by a timer and denotes the interval between the appearance of the search interface and the moment the subject triggers the target button, and the average trigger time of each subject is calculated; extreme values, i.e. values deviating from the mean by more than ±0.5 standard deviations, are removed; mixed-design analysis is then performed on the trigger time, trigger accuracy and pupil diameter data, with main-effect analysis and interaction analysis under each index;
obtaining layout recommendation of interaction targets: and obtaining the dominant values of the interaction targets on the number and the presentation positions by setting the triggering success rate and the screening conditions of the triggering time.
8. The method for layout of interactive objects in a virtual environment eye-controlled interface according to claim 7, wherein: a user experience expert group is established, comprising four interaction designers with more than 5 years of experience and two users with more than 3 years of VR equipment use; a minimum experience threshold is set for the virtual reality eye-control trigger success rate, namely a success rate above 90%, with shorter trigger time being better; and trigger time is used as the screening condition to obtain the dominant values of the interaction targets in number and presentation position.
9. A layout system of interactive objects in a virtual environment eye-controlled interface, comprising: the system comprises a virtual reality eye-control interaction experiment platform and a virtual reality helmet, wherein the virtual reality eye-control interaction experiment platform is in communication connection with the virtual reality helmet, and the virtual reality helmet is provided with an eye-movement tracking module;
wherein the virtual reality eye-controlled interaction experiment platform is provided with an eye-controlled interaction interface displaying a virtual reality screen, and further comprises:
the parameter setting module is used for setting a target search trigger task;
the experiment execution module is used for executing an eye-controlled search trigger experiment based on the target search trigger task and collecting search experiment data and trigger experiment data;
the data analysis module is used for analyzing and obtaining the layout recommendation mode of the interaction targets in the virtual reality eye-controlled interaction interface according to the search experiment data and the trigger experiment data obtained by the experiment execution module, wherein the layout recommendation mode comprises the recommendation of the presentation area and the recommendation of the quantity, and the layout mode of the interaction targets is determined.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311320050.3A CN117632330B (en) | 2023-10-12 | 2023-10-12 | Interactive target layout method and system of eye control interface in virtual environment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117632330A true CN117632330A (en) | 2024-03-01 |
CN117632330B CN117632330B (en) | 2024-07-16 |
Family
ID=90015338
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311320050.3A Active CN117632330B (en) | 2023-10-12 | 2023-10-12 | Interactive target layout method and system of eye control interface in virtual environment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117632330B (en) |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104392144A (en) * | 2014-12-09 | 2015-03-04 | 河海大学常州校区 | Analytical method for physiological experiment of error factors of visual information interface |
US20150131850A1 (en) * | 2013-11-12 | 2015-05-14 | Fuji Xerox Co., Ltd. | Identifying user activities using eye tracking data, mouse events, and keystrokes |
CN105393192A (en) * | 2013-06-28 | 2016-03-09 | 微软技术许可有限责任公司 | Web-like hierarchical menu display configuration for a near-eye display |
WO2017031089A1 (en) * | 2015-08-15 | 2017-02-23 | Eyefluence, Inc. | Systems and methods for biomechanically-based eye signals for interacting with real and virtual objects |
CN107885857A (en) * | 2017-11-17 | 2018-04-06 | 山东师范大学 | A kind of search results pages user's behavior pattern mining method, apparatus and system |
CN110096328A (en) * | 2019-05-09 | 2019-08-06 | 中国航空工业集团公司洛阳电光设备研究所 | A kind of HUD interface optimization layout adaptive approach and system based on aerial mission |
CN110162304A (en) * | 2019-05-10 | 2019-08-23 | 河海大学常州校区 | The interaction interface error factor-information characteristics reaction chain quick interface arrangement method |
CN111399659A (en) * | 2020-04-24 | 2020-07-10 | Oppo广东移动通信有限公司 | Interface display method and related device |
CN111949131A (en) * | 2020-08-17 | 2020-11-17 | 陈涛 | Eye movement interaction method, system and equipment based on eye movement tracking technology |
CN112970056A (en) * | 2018-09-21 | 2021-06-15 | 神经股份有限公司 | Human-computer interface using high speed and accurate user interaction tracking |
CN114911341A (en) * | 2022-04-21 | 2022-08-16 | 中国人民解放军国防科技大学 | Target selection method and system based on eye potential secondary triggering |
CN115359567A (en) * | 2014-06-14 | 2022-11-18 | 奇跃公司 | Method and system for generating virtual and augmented reality |
CN115598842A (en) * | 2021-06-28 | 2023-01-13 | 见臻科技股份有限公司(Tw) | Optical system and related method for improving user experience and gaze interaction accuracy |
CN115756173A (en) * | 2022-12-07 | 2023-03-07 | 中国科学院空间应用工程与技术中心 | Eye tracking method, system, storage medium and computing equipment |
CN116339511A (en) * | 2023-03-15 | 2023-06-27 | 吴晓莉 | Detection method for human-computer interaction interface perception breadth guide information search |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117891352A (en) * | 2024-03-14 | 2024-04-16 | 南京市文化投资控股集团有限责任公司 | Meta universe-based travel content recommendation system and method |
CN117891352B (en) * | 2024-03-14 | 2024-05-31 | 南京市文化投资控股集团有限责任公司 | Meta universe-based travel content recommendation system and method |
Also Published As
Publication number | Publication date |
---|---|
CN117632330B (en) | 2024-07-16 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||