CN112633273A - User preference processing method and system based on afterglow area - Google Patents

User preference processing method and system based on afterglow area

Info

Publication number
CN112633273A
CN112633273A (Application CN202011505536.0A)
Authority
CN
China
Prior art keywords
afterglow
area
glasses wearer
real object
preference
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011505536.0A
Other languages
Chinese (zh)
Inventor
孙立
叶柳青
刘晖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Shadow Creator Information Technology Co Ltd
Original Assignee
Shanghai Shadow Creator Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Shadow Creator Information Technology Co Ltd filed Critical Shanghai Shadow Creator Information Technology Co Ltd
Priority to CN202011505536.0A
Publication of CN112633273A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention provides a user preference processing method and system based on the afterglow area, i.e. the region of the AR glasses wearer's peripheral vision, comprising the following steps: a gaze point acquisition step: acquiring the gaze point of an AR glasses wearer; an afterglow area acquisition step: determining the afterglow area of the AR glasses wearer according to the gaze point; a preference acquisition step: identifying real objects of the real environment that appear in the afterglow area, taking each identified real object whose recognition count exceeds a frequency threshold as a preference of the AR glasses wearer, and providing recommendation information to the wearer according to that preference. The afterglow area is also a region of interest of the user; for example, a user may keep a pet dog in the afterglow area in order to continuously follow the dog's behaviour. By comparing the information common to the contents of the afterglow area at different times, the real objects of interest in the afterglow area can be discovered.

Description

User preference processing method and system based on afterglow area
Technical Field
The present invention relates to the field of augmented reality (AR), and in particular to a method and system for processing user preferences based on the afterglow area, i.e. the peripheral region of the wearer's field of view away from the gaze point.
Background
Patent document CN109145566A provides a method and device for unlocking AR glasses based on gaze point information, as well as AR glasses, and relates to the technical field of virtual reality. The method collects information on the user's gaze point, generates unlocking information from that gaze point information, compares the unlocking information with a prestored unlocking key, and decides whether to unlock the AR glasses according to the comparison result. Compared with existing unlocking approaches, this improves the convenience of the unlocking operation and the user experience. A user who knows the unlocking information is unlikely to make a mistake, while one who does not will find it difficult to crack by exhaustive search, further improving the security of the device.
Patent document CN109298780A provides an AR-based information processing method, apparatus, AR device, and storage medium. When the user gazes at a target object while using the AR device, image information of the object fixated by the user's eye is acquired, related information about the object is retrieved from that image information, and the related information is superimposed onto the AR scene image within the user's field of view. Any information related to an object the user gazes at can thus be superimposed onto the AR scene in real time according to the gaze point, greatly enriching the virtual information added to the AR scene. Moreover, information about the objects a user attends to can be displayed according to where that particular user looks, without binding fixed virtual information to the AR scene or to particular objects within it, enabling personalized superposition of virtual information in the AR scene.
The defect of this prior art is that the afterglow area is not fully utilized.
Disclosure of Invention
In view of the defects in the prior art, the invention aims to provide a method and a system for processing user preference based on an afterglow area.
The invention provides a user preference processing method based on the afterglow area, comprising the following steps:
a gaze point acquisition step: acquiring the gaze point of an AR glasses wearer;
an afterglow area acquisition step: determining the afterglow area of the AR glasses wearer according to the gaze point;
a preference acquisition step: identifying real objects of the real environment that appear in the afterglow area, taking each identified real object whose recognition count exceeds a frequency threshold as a preference of the AR glasses wearer, and providing recommendation information to the wearer according to that preference.
Preferably, if a real object stays in the afterglow area for longer than a time threshold and is then fixated by the gaze point of the AR glasses wearer, the recognition count of that real object is increased by 1.
Preferably, an image of a real object fixated by the gaze point of the AR glasses wearer is stored as a comparison template, and when a real object is to be identified in the afterglow area, it is matched against the comparison template for identification.
Preferably, a real object that is a preference of the AR glasses wearer is displayed highlighted in the afterglow area.
Preferably, the method further comprises:
an afterglow area image processing step: applying image quality reduction to a virtual object located in the afterglow area of the AR glasses wearer before displaying it;
the image quality reduction comprising reducing the resolution of the virtual object or blurring the virtual object.
The invention provides a user preference processing system based on the afterglow area, comprising:
a gaze point acquisition module: acquiring the gaze point of an AR glasses wearer;
an afterglow area acquisition module: determining the afterglow area of the AR glasses wearer according to the gaze point;
a preference acquisition module: identifying real objects of the real environment that appear in the afterglow area, taking each identified real object whose recognition count exceeds a frequency threshold as a preference of the AR glasses wearer, and providing recommendation information to the wearer according to that preference.
Preferably, if a real object stays in the afterglow area for longer than a time threshold and is then fixated by the gaze point of the AR glasses wearer, the recognition count of that real object is increased by 1.
Preferably, an image of a real object fixated by the gaze point of the AR glasses wearer is stored as a comparison template, and when a real object is to be identified in the afterglow area, it is matched against the comparison template for identification.
Preferably, a real object that is a preference of the AR glasses wearer is displayed highlighted in the afterglow area.
Preferably, the system further comprises:
an afterglow area image processing module: applying image quality reduction to a virtual object located in the afterglow area of the AR glasses wearer before displaying it;
the image quality reduction comprising reducing the resolution of the virtual object or blurring the virtual object.
Compared with the prior art, the invention has the following beneficial effects:
the afterglow area is also an interested area of the user, for example, the user always keeps the pet dog in the afterglow area to continuously pay attention to the behavior of the pet dog. By comparing the common information of the contents in the residual light area under different times, the interested real object in the residual light area can be found.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a flow chart of the steps of the method of the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but do not limit it in any way. It should be noted that various changes and modifications can be made by those skilled in the art without departing from the spirit of the invention; all of these fall within the scope of the present invention.
The invention provides a user preference processing method based on a residual light area, which comprises the following steps:
A gaze point acquisition step: acquiring the gaze point of the AR glasses wearer. Specifically, the position of the gaze point can be obtained from the eye information of the AR glasses wearer; those skilled in the art may refer at least to patent document CN105812777B for obtaining the gaze point, and details are not repeated here.
An afterglow area acquisition step: determining the afterglow area of the AR glasses wearer according to the gaze point. In a preferred example, the area beyond a set distance from the gaze point is used as the afterglow area. Alternatively, the visual area of the AR glasses wearer is divided into a grid of rows and columns, and the grid cells that are not adjacent to the cell containing the gaze point are used as the afterglow area. Further preferably, the field of view of each lens may be divided into a grid of 5 rows and 6 columns.
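As an illustrative sketch (not part of the patent text), the grid variant above might be implemented as follows; the 5-row by 6-column size comes from the preferred example, while the coordinate convention, the function names, and the use of 8-connected adjacency are assumptions:

```python
# Hypothetical sketch of the grid-based afterglow area: the field of view
# is divided into a 5x6 grid, and every cell that is neither the gaze
# cell nor one of its (assumed 8-connected) neighbours counts as afterglow.

def gaze_cell(gaze_xy, fov_wh, rows=5, cols=6):
    """Map a gaze point in view coordinates to its (row, col) grid cell."""
    x, y = gaze_xy
    w, h = fov_wh
    return min(int(y / h * rows), rows - 1), min(int(x / w * cols), cols - 1)

def afterglow_cells(gaze_xy, fov_wh, rows=5, cols=6):
    """Return the set of grid cells not adjacent to the gaze cell."""
    gr, gc = gaze_cell(gaze_xy, fov_wh, rows, cols)
    return {
        (r, c)
        for r in range(rows)
        for c in range(cols)
        if max(abs(r - gr), abs(c - gc)) > 1  # drop gaze cell + 8 neighbours
    }
```

With a 600x500 field of view and the gaze at its centre, the gaze cell and its eight neighbours are excluded, leaving 21 of the 30 cells as afterglow area.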
A preference acquisition step: identifying real objects of the real environment that appear in the afterglow area, taking each identified real object whose recognition count exceeds a frequency threshold as a preference of the AR glasses wearer, and providing recommendation information to the wearer according to that preference. The afterglow area is also a region of interest of the user; for example, a user may keep a pet dog in the afterglow area in order to continuously follow the dog's behaviour. By comparing the information common to the contents of the afterglow area at different times, the real objects of interest in the afterglow area can be discovered.
If a real object stays in the afterglow area for longer than a time threshold and is then fixated by the gaze point of the AR glasses wearer, the recognition count of that real object is increased by 1. An image of a real object fixated by the gaze point is stored as a comparison template, and when a real object is to be identified in the afterglow area, it is matched against the comparison template for identification. A real object that is a preference of the AR glasses wearer is displayed highlighted in the afterglow area. For example, once the pet dog has been fixated, its image is stored as a comparison template; a real object to be recognized in the afterglow area is then matched against this template and identified as the pet dog, which greatly increases the recognition rate. In this way interest information from both the gaze point and the afterglow area is used.
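The counting rule just described (dwell in the afterglow area beyond the time threshold, then fixation, then increment; preference once the count passes the frequency threshold) can be sketched as below. This is an illustrative reading of the patent, not its implementation; the class and method names, the concrete threshold values, and the representation of an identified object as a string label (the template matching itself is abstracted away) are all assumptions:

```python
from collections import defaultdict

class PreferenceTracker:
    """Sketch of the patent's counting rule: an object's recognition count
    is incremented only when it has stayed in the afterglow area longer
    than `time_threshold` seconds and is then fixated by the gaze point;
    objects whose count exceeds `freq_threshold` become preferences."""

    def __init__(self, time_threshold=1.0, freq_threshold=2):
        self.time_threshold = time_threshold
        self.freq_threshold = freq_threshold
        self.dwell = defaultdict(float)  # seconds each object spent in afterglow
        self.counts = defaultdict(int)   # recognition counts per object label

    def observe_afterglow(self, label, dt):
        """Object `label` was seen in the afterglow area for `dt` more seconds."""
        self.dwell[label] += dt

    def observe_fixation(self, label):
        """The gaze point landed on `label`; count it if the dwell was long enough."""
        if self.dwell[label] > self.time_threshold:
            self.counts[label] += 1
        self.dwell[label] = 0.0  # reset dwell for the next episode

    def preferences(self):
        return {k for k, v in self.counts.items() if v > self.freq_threshold}
```

An object fixated after only a brief appearance in the afterglow area is not counted, matching the time-threshold condition above.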
An afterglow area image processing step: applying image quality reduction to a virtual object located in the afterglow area of the AR glasses wearer before displaying it. The image quality reduction comprises reducing the resolution of the virtual object, or blurring the virtual object. The invention observes that in real vision the peripheral area is blurry, so there is no need to spend image processing capacity on the afterglow area; indeed, a perfectly sharp afterglow area would be unrealistic. The afterglow area is therefore displayed after only simple computation. Specifically, when the virtual object is a program interface, the image quality reduction may display only the outline or edges of the interface, or skip rendering it altogether.
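The two degradation options named above, resolution reduction and blurring, might look like this on a raw grayscale raster (illustrative only; the patent fixes no data format, and `degrade` and `box_blur` are hypothetical names):

```python
def degrade(pixels, factor=2):
    """Reduce resolution by keeping every `factor`-th sample per dimension,
    a minimal stand-in for the resolution-reduction option.
    `pixels` is a list of rows of grayscale values."""
    return [row[::factor] for row in pixels[::factor]]

def box_blur(pixels):
    """3x3 box blur as the alternative degradation (blurring)."""
    h, w = len(pixels), len(pixels[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Average the pixel with its in-bounds neighbours.
            samples = [
                pixels[cy][cx]
                for cy in range(max(0, y - 1), min(h, y + 2))
                for cx in range(max(0, x - 1), min(w, x + 2))
            ]
            out[y][x] = sum(samples) // len(samples)
    return out
```

Either function would be applied to a virtual object's raster before it is composited into the afterglow area.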
In particular, the afterglow-area-based user preference processing method comprises:
a pre-loading step: predicting, from the motion trajectory of the gaze point, the current afterglow area that the gaze point will reach, and loading and rendering the program interface of that afterglow area in advance at non-degraded image quality. Here, the current afterglow area touched by the extension of the gaze point's motion trajectory beyond the set distance threshold, for example the grid cell currently in the afterglow area that the extended trajectory touches, may be taken as the afterglow area the gaze point will reach.
The invention provides a user preference processing system based on the afterglow area, comprising:
A gaze point acquisition module: acquiring the gaze point of the AR glasses wearer. Specifically, the position of the gaze point can be obtained from the eye information of the AR glasses wearer; those skilled in the art may refer at least to patent document CN105812777B for obtaining the gaze point, and details are not repeated here.
An afterglow area acquisition module: determining the afterglow area of the AR glasses wearer according to the gaze point. In a preferred example, the area beyond a set distance from the gaze point is used as the afterglow area. Alternatively, the visual area of the AR glasses wearer is divided into a grid of rows and columns, and the grid cells that are not adjacent to the cell containing the gaze point are used as the afterglow area.
A preference acquisition module: identifying real objects of the real environment that appear in the afterglow area, taking each identified real object whose recognition count exceeds a frequency threshold as a preference of the AR glasses wearer, and providing recommendation information to the wearer according to that preference. The afterglow area is also a region of interest of the user; for example, a user may keep a pet dog in the afterglow area in order to continuously follow the dog's behaviour. By comparing the information common to the contents of the afterglow area at different times, the real objects of interest in the afterglow area can be discovered.
If a real object stays in the afterglow area for longer than a time threshold and is then fixated by the gaze point of the AR glasses wearer, the recognition count of that real object is increased by 1. An image of a real object fixated by the gaze point is stored as a comparison template, and when a real object is to be identified in the afterglow area, it is matched against the comparison template for identification. A real object that is a preference of the AR glasses wearer is displayed highlighted in the afterglow area. For example, once the pet dog has been fixated, its image is stored as a comparison template; a real object to be recognized in the afterglow area is then matched against this template and identified as the pet dog, which greatly increases the recognition rate. In this way interest information from both the gaze point and the afterglow area is used.
An afterglow area image processing module: applying image quality reduction to a virtual object located in the afterglow area of the AR glasses wearer before displaying it. The image quality reduction comprises reducing the resolution of the virtual object, or blurring the virtual object. The invention observes that in real vision the peripheral area is blurry, so there is no need to spend image processing capacity on the afterglow area; indeed, a perfectly sharp afterglow area would be unrealistic. The afterglow area is therefore displayed after only simple computation. Specifically, when the virtual object is a program interface, the image quality reduction may display only the outline or edges of the interface, or skip rendering it altogether.
In particular, the afterglow-area-based user preference processing system comprises:
a pre-loading module: predicting, from the motion trajectory of the gaze point, the current afterglow area that the gaze point will reach, and loading and rendering the program interface of that afterglow area in advance at non-degraded image quality. Here, the current afterglow area touched by the extension of the gaze point's motion trajectory beyond the set distance threshold, for example the grid cell currently in the afterglow area that the extended trajectory touches, may be taken as the afterglow area the gaze point will reach.
Those skilled in the art will appreciate that, in addition to implementing the system, apparatus and modules thereof provided by the present invention purely as computer readable program code, the same procedures can be implemented entirely by logically programming the method steps, so that the system, apparatus and modules are realized in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. The system, apparatus and modules provided by the present invention may therefore be regarded as a hardware component, the modules they contain for implementing various programs may be regarded as structures within that hardware component, and modules for performing various functions may be regarded both as software programs implementing the method and as structures within the hardware component.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (10)

1. A user preference processing method based on an afterglow area, characterized by comprising the following steps:
a gaze point acquisition step: acquiring the gaze point of an AR glasses wearer;
an afterglow area acquisition step: determining the afterglow area of the AR glasses wearer according to the gaze point;
a preference acquisition step: identifying real objects of the real environment that appear in the afterglow area, taking each identified real object whose recognition count exceeds a frequency threshold as a preference of the AR glasses wearer, and providing recommendation information to the wearer according to that preference.
2. The method of claim 1, wherein if a real object stays in the afterglow area for longer than a time threshold and is then fixated by the gaze point of the AR glasses wearer, the recognition count of that real object is increased by 1.
3. The method of claim 1, wherein an image of a real object fixated by the gaze point of the AR glasses wearer is stored as a comparison template, and when a real object is to be identified in the afterglow area, it is matched against the comparison template for identification.
4. The method of claim 1, wherein a real object that is a preference of the AR glasses wearer is displayed highlighted in the afterglow area.
5. The method of claim 1, further comprising:
an afterglow area image processing step: applying image quality reduction to a virtual object located in the afterglow area of the AR glasses wearer before displaying it;
the image quality reduction comprising reducing the resolution of the virtual object or blurring the virtual object.
6. A user preference processing system based on an afterglow area, characterized by comprising:
a gaze point acquisition module: acquiring the gaze point of an AR glasses wearer;
an afterglow area acquisition module: determining the afterglow area of the AR glasses wearer according to the gaze point;
a preference acquisition module: identifying real objects of the real environment that appear in the afterglow area, taking each identified real object whose recognition count exceeds a frequency threshold as a preference of the AR glasses wearer, and providing recommendation information to the wearer according to that preference.
7. The afterglow-area-based user preference processing system of claim 6, wherein if a real object stays in the afterglow area for longer than a time threshold and is then fixated by the gaze point of the AR glasses wearer, the recognition count of that real object is increased by 1.
8. The afterglow-area-based user preference processing system of claim 6, wherein an image of a real object fixated by the gaze point of the AR glasses wearer is stored as a comparison template, and when a real object is to be identified in the afterglow area, it is matched against the comparison template for identification.
9. The afterglow-area-based user preference processing system of claim 6, wherein a real object that is a preference of the AR glasses wearer is displayed highlighted in the afterglow area.
10. The afterglow-area-based user preference processing system of claim 6, further comprising:
an afterglow area image processing module: applying image quality reduction to a virtual object located in the afterglow area of the AR glasses wearer before displaying it;
the image quality reduction comprising reducing the resolution of the virtual object or blurring the virtual object.
CN202011505536.0A 2020-12-18 2020-12-18 User preference processing method and system based on afterglow area Pending CN112633273A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011505536.0A CN112633273A (en) 2020-12-18 2020-12-18 User preference processing method and system based on afterglow area

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011505536.0A CN112633273A (en) 2020-12-18 2020-12-18 User preference processing method and system based on afterglow area

Publications (1)

Publication Number Publication Date
CN112633273A 2021-04-09

Family

ID=75317162

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011505536.0A Pending CN112633273A (en) 2020-12-18 2020-12-18 User preference processing method and system based on afterglow area

Country Status (1)

Country Link
CN (1) CN112633273A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120054018A1 (en) * 2010-08-25 2012-03-01 Neurofocus, Inc. Effective virtual reality environments for presentation of marketing materials
CN106412563A (en) * 2016-09-30 2017-02-15 珠海市魅族科技有限公司 Image display method and apparatus
KR101748563B1 (en) * 2016-09-26 2017-06-20 유비씨엔(주) Eye tracking method based both eyes
CN107589837A (en) * 2017-08-22 2018-01-16 努比亚技术有限公司 A kind of AR terminals picture adjusting method, equipment and computer-readable recording medium
CN107590859A (en) * 2017-09-01 2018-01-16 广州励丰文化科技股份有限公司 A kind of mixed reality picture processing method and service equipment
CN107765842A (en) * 2016-08-23 2018-03-06 深圳市掌网科技股份有限公司 A kind of augmented reality method and system
CN109086726A (en) * 2018-08-10 2018-12-25 陈涛 A kind of topography's recognition methods and system based on AR intelligent glasses
CN109145566A (en) * 2018-09-08 2019-01-04 太若科技(北京)有限公司 Method, apparatus and AR glasses based on blinkpunkt information unlock AR glasses

Patent Citations (8)

Publication number Priority date Publication date Assignee Title
US20120054018A1 (en) * 2010-08-25 2012-03-01 Neurofocus, Inc. Effective virtual reality environments for presentation of marketing materials
CN107765842A (en) * 2016-08-23 2018-03-06 深圳市掌网科技股份有限公司 A kind of augmented reality method and system
KR101748563B1 (en) * 2016-09-26 2017-06-20 유비씨엔(주) Eye tracking method based both eyes
CN106412563A (en) * 2016-09-30 2017-02-15 珠海市魅族科技有限公司 Image display method and apparatus
CN107589837A (en) * 2017-08-22 2018-01-16 努比亚技术有限公司 A kind of AR terminals picture adjusting method, equipment and computer-readable recording medium
CN107590859A (en) * 2017-09-01 2018-01-16 广州励丰文化科技股份有限公司 A kind of mixed reality picture processing method and service equipment
CN109086726A (en) * 2018-08-10 2018-12-25 陈涛 A kind of topography's recognition methods and system based on AR intelligent glasses
CN109145566A (en) * 2018-09-08 2019-01-04 太若科技(北京)有限公司 Method, apparatus and AR glasses based on blinkpunkt information unlock AR glasses

Non-Patent Citations (2)

Title
PATRICK RENNER ET AL.: "Attention Guiding Techniques using Peripheral Vision and Eye Tracking for Feedback in Augmented-Reality-Based Assistance Systems", 2017 IEEE Symposium on 3D User Interfaces, 6 April 2017, pages 186-194 *
ZHAO Xincan; ZUO Hongfu; XU Xingmin: "Augmented reality interaction based on gaze tracking", Opto-Electronic Engineering, no. 04, 30 April 2008, pages 135-139 *

Similar Documents

Publication Publication Date Title
CN107111629B (en) Method and system for detecting an object of interest
US11579686B2 (en) Method and device for carrying out eye gaze mapping
Susilo et al. Solving the upside-down puzzle: Why do upright and inverted face aftereffects look alike?
US11314089B2 (en) Method and device for evaluating view images
Eisma et al. Visual sampling processes revisited: Replicating and extending Senders (1983) using modern eye-tracking equipment
CN110276229A (en) Target object regional center localization method and device
CN112101123B (en) Attention detection method and device
CN110569826B (en) Face recognition method, device, equipment and medium
Howard et al. Suspiciousness perception in dynamic scenes: a comparison of CCTV operators and novices
JP2020163100A5 (en)
Choe et al. To search or to like: Mapping fixations to differentiate two forms of incidental scene memory
CN115601811A (en) Facial acne detection method and device
DE112016006769B4 (en) Method for sign language input into a user interface of a vehicle and vehicle
Meyer et al. Perceiving faces: Too much, too fast?—face specificity in response caution.
CN112633273A (en) User preference processing method and system based on afterglow area
Holm et al. Looking as if you know: Systematic object inspection precedes object recognition
Hild et al. Gaze-based moving target acquisition in real-time full motion video
CN109298782B (en) Eye movement interaction method and device and computer readable storage medium
Hertz et al. I haven’ta clue! Expectations based on repetitions and hints facilitate perceptual experience of ambiguous images.
CN112633128A (en) Method and system for pushing information of interested object in afterglow area
CN114967128B (en) Sight tracking system and method applied to VR glasses
CN114917590B (en) Virtual reality game system
CN112634461A (en) Method and system for enhancing reality of afterglow area
CN112669578B (en) Interested object warning method and system based on sound source in afterglow area
Westhoven et al. Head turn scaling below the threshold of perception in immersive virtual environments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination