CN114115532A - AR labeling method and system based on display content - Google Patents
AR labeling method and system based on display content
- Publication number
- Publication number: CN114115532A (application CN202111332392.8A)
- Authority
- CN
- China
- Prior art keywords
- content
- module
- display
- display content
- trigger
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention discloses an AR labeling method and system based on display content, belonging to the technical field of intelligent wearable devices. The method comprises the following steps: S1, analyzing the display content to be labeled, extracting the labelable content within it for AI sorting, automatically dividing the labelable content into a plurality of trigger sets according to its completeness, and preloading labels for the set content in the background over a network connection. The invention combines the user's gaze focus with an actively controlled virtual cursor: the gaze focus triggers slowly, so unwanted labels are not produced at random, and a secondary confirmation by the virtual cursor then allows the selected display content to be labeled quickly and accurately. Labeling is therefore as fast as possible while accuracy is ensured, the labeled content matches the user's expectation, and the user experience is good.
Description
Technical Field
The invention relates to an AR labeling method and system based on display content, and belongs to the technical field of intelligent wearable devices.
Background
Augmented Reality (AR) is a relatively new technology that merges real-world information with virtual-world content. Entity information that would otherwise be difficult to experience within the spatial range of the real world is simulated by computer and other scientific technologies, and the resulting virtual content is superimposed onto the real world, where it can be perceived by the human senses, producing a sensory experience that goes beyond reality. Once superimposed, the real environment and the virtual objects coexist in the same picture and space.
In practical AR applications, the most common use case is overlaying labels on the content shown in a display device. The principle is to analyze the display content in the current region, search related content over the network, and superimpose the retrieved content at the corresponding position in the display to form a label. The difficult point in this process is selecting the desired display content to label. An existing solution uses eye-tracking technology for interactive selection, but it has drawbacks. Because human attention cannot stay focused indefinitely, the line of sight drifts irregularly, so computer instructions may be triggered at any moment and unwanted labels pop up and scatter the user's attention. Moreover, the operating efficiency of existing eye-movement interaction is low: performing click operations by single-eye or double-eye blinks tires the eyes easily, and since blinking is also a natural reflex whose frequency becomes even less controllable when the eyes are irritated, misoperation occurs readily. As a result, users cannot label the displayed content as they intend, which harms the user experience.
Disclosure of Invention
In view of the above technical problems, the present invention provides an AR labeling method and system based on display content.
The technical problem is solved by the invention through the following technical scheme:
an AR labeling method based on display content comprises the following steps:
S1, analyzing the display content to be labeled, extracting the labelable content within it for AI sorting, automatically dividing the labelable content into a plurality of trigger sets according to its completeness, and preloading labels for the set content in the background over a network connection;
S2, capturing the user's gaze focus with the device's eyeball tracking module; when the time the gaze focus remains on a trigger set exceeds a set time, highlighting the display content mapped by that trigger set, the highlight effect being hidden after it has lasted for a set time;
S3, capturing finger posture information with the device's image capturing module and forming an invisible virtual cursor at the fingertip; when the virtual cursor stays on a trigger set, the label for that trigger set enters a preloading state, the label content being loaded and displayed after the cursor has stayed for a set time and hidden after the virtual cursor has been away from the trigger set for a set time;
S4, when the gaze focus captured by the eyeball tracking module and the virtual cursor are both concentrated on one trigger set, immediately displaying the label content in the area near the display content mapped by that trigger set;
S5, when one of the gaze focus and the virtual cursor leaves the trigger set, repeating step S2 or S3; when both leave the trigger set, the display content returns to its original state.
As a preferred example, the labelable content comprises graphics, animations, videos, text and icons.
As a preferred example, in S2, the set time for the gaze focus to remain on the trigger set is 3-5 s, and the set duration of the highlight is 1-3 s.
As a preferred example, in S2, after the highlight effect is hidden, the trigger set accepts highlight wake-up by the gaze focus again only after 10-15 s.
As a preferred example, in S2, highlighted content can be converted immediately into displayed label content by a remote operation device.
As a preferred example, in S3, the set time for the virtual cursor to stay on the trigger set is 5-8 s, and the set time for the virtual cursor to leave the trigger set is 1-3 s.
As a preferred example, in S3, the captured finger posture information excludes the posture information of the thumb.
An AR labeling system based on display content comprises a display module, a system processing module, a signal module, an eyeball tracking module and an image capturing module;
the display module comprises a transparent screen and is used for superimposing AR label content on display content in the real world;
the system processing module comprises control logic and associated computer memory, and is used for receiving and processing signals from the sensors and providing display signals to the display module for making AR labels;
the signal module comprises at least two of a 5G network communication module, a wireless module, a Bluetooth module or an infrared module and is used for connecting to external signals;
the eyeball tracking module is used for tracking the gaze focus of the user's eyes in real time, converting it into coordinate signals and transmitting them to the system processing module;
the image capturing module is used for extracting display content and capturing finger posture information; the extracted display content is converted into processable signals and the captured finger posture information into coordinate signals, both transmitted to the system processing module.
As a preferred example, the system further comprises a remote operation device, such as a smart ring or a smart bracelet/watch, connected to the signal module by a wireless signal.
The invention has the following beneficial effects: it combines the user's gaze focus with an actively controlled virtual cursor. The gaze focus triggers slowly, so unwanted labels are not produced at random, and a secondary confirmation by the virtual cursor then allows the selected display content to be labeled quickly and accurately. Labeling is therefore as fast as possible while accuracy is ensured, the labeled content matches the user's expectation, and the user experience is good.
Drawings
FIG. 1 is a flow chart of the operation of the present invention;
fig. 2 is a schematic structural diagram of the present invention.
Detailed Description
To make the technical means, creative features, objectives and effects of the invention easy to understand, the invention is further described below with reference to the drawings.
The present invention is implemented on AR glasses or a smartphone/tablet, which must be equipped with an image capturing device capable of capturing both the display content and the user's eye movements.
As shown in fig. 1, an AR labeling method based on display content includes the following steps:
S1, analyzing the display content to be labeled and extracting the labelable content within it for AI sorting. The labelable content is automatically divided according to its completeness into units with complete meaning, such as a word, a complete sentence, or a single graphic, and these units are compiled into a plurality of trigger sets. The content of the trigger sets is preloaded and labeled in the background over a network connection, and the label content is cached in computer memory. At the same time, the whole display content is converted into two-dimensional plane data, and the trigger sets are converted into coordinate sets according to their original positions on the display content and embedded at the corresponding positions of the two-dimensional plane;
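The trigger-set construction described above can be sketched as follows. This is an illustrative sketch only; the unit segmentation, the `preload_label` helper and the bounding-box inputs are assumptions, not part of the original disclosure:

```python
from dataclasses import dataclass

@dataclass
class TriggerSet:
    """A unit of labelable content mapped to a region of the 2D display plane."""
    unit_text: str     # e.g. a word, a complete sentence, or a single graphic id
    bbox: tuple        # (x0, y0, x1, y1) coordinate set on the 2D plane
    label: str = ""    # preloaded label content, cached in memory

    def contains(self, x, y):
        """True if a 2D point (gaze focus or cursor) lies inside this set."""
        x0, y0, x1, y1 = self.bbox
        return x0 <= x <= x1 and y0 <= y <= y1

def preload_label(text):
    # Placeholder for the networked background preloading of label content.
    return f"label for {text!r}"

def build_trigger_sets(units):
    """units: list of (text, bbox) pairs produced by the AI sorting stage."""
    sets = []
    for text, bbox in units:
        ts = TriggerSet(text, bbox)
        ts.label = preload_label(text)   # background preload, cached
        sets.append(ts)
    return sets

sets = build_trigger_sets([("Augmented Reality", (10, 10, 120, 30))])
print(sets[0].contains(50, 20))   # → True
```

Embedding each set as a bounding box on the flattened 2D plane is what later lets both the gaze coordinate and the cursor coordinate be tested against the same coordinate sets.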
S2, the device's eyeball tracking module captures the user's gaze focus and converts the gaze position signal into a coordinate signal on the two-dimensional plane, from which it is judged whether the gaze focus coincides with a trigger set. When the time the gaze focus remains on a trigger set exceeds the set time, the display content mapped by that trigger set is highlighted to indicate that labelable content is available; after the highlight has lasted for the set time, the highlight effect is hidden. When the time the gaze focus remains on the trigger set is below the set time, the display content does not react, which prevents misoperation caused by the line of sight jumping;
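The dwell-time logic of step S2 might look like the following minimal sketch; the 4 s threshold is taken from the middle of the 3-5 s range the description specifies, and the class and parameter names are assumptions:

```python
import time

DWELL_S = 4.0        # gaze must rest on one trigger set this long (3-5 s range)
HIGHLIGHT_S = 2.0    # highlight duration before auto-hiding (1-3 s range)

class GazeDwellTrigger:
    """Tracks how long the gaze focus has rested on one trigger set."""

    def __init__(self, now=time.monotonic):
        self.now = now
        self.current = None      # trigger set currently under the gaze, if any
        self.entered_at = None

    def update(self, trigger_set):
        """Feed the trigger set under the gaze (or None); returns True to highlight."""
        t = self.now()
        if trigger_set is not self.current:
            # Gaze jumped to a new target: reset the dwell timer, so a brief
            # line-of-sight drift can never trigger a highlight.
            self.current, self.entered_at = trigger_set, t
            return False
        return trigger_set is not None and (t - self.entered_at) >= DWELL_S
```

Resetting the timer on every gaze jump is the anti-misoperation behavior described above: below-threshold dwells produce no reaction at all.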
S3, the device's image capturing module captures the finger posture information. The peripheral outline of the finger is first captured and tracked by image recognition; a number of measurement points are then determined on the outline using a template matching algorithm and an artificial neural network, and the coordinates of these points on the two-dimensional plane are determined. The measurement point at the fingertip is taken as an invisible virtual cursor. When the virtual cursor stays on a trigger set, the label for that trigger set enters a preloading state; the label content is loaded and displayed after the cursor has stayed for the set time, and is hidden after the virtual cursor has been away from the trigger set for the set time;
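Selecting the fingertip measurement point as the virtual cursor can be sketched as below. The upstream contour extraction and measurement-point detection are assumed to be done already, and the simple lowest-y heuristic is an illustrative assumption, not the patent's stated method:

```python
def fingertip_cursor(measure_points):
    """measure_points: (x, y) coordinates found on the finger's peripheral outline.

    In image coordinates (y grows downward), the fingertip of a pointing
    finger is approximated here as the point with the smallest y value.
    Returns None when no finger outline was captured this frame."""
    if not measure_points:
        return None
    return min(measure_points, key=lambda p: p[1])

# Outline measurement points from the recognition stage (assumed input):
points = [(40, 120), (45, 80), (43, 60), (50, 95)]
print(fingertip_cursor(points))   # → (43, 60)
```

The returned coordinate can then be tested against the trigger sets' coordinate sets exactly like the gaze coordinate.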
S4, when the gaze focus captured by the eyeball tracking module and the virtual cursor are both concentrated on one trigger set, that is, ignoring depth, the two-dimensional coordinates of the gaze focus and of the virtual cursor both lie within the coordinate set of that trigger set, the label content is immediately displayed in the area near the display content mapped by the trigger set;
S5, when one of the gaze focus and the virtual cursor leaves the trigger set, step S2 or S3 is repeated; when both leave the trigger set, the display content returns to its original state.
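Steps S2-S5 together form a small per-frame decision rule, sketched here with assumed function and parameter names:

```python
def label_state(gaze_set, cursor_set, gaze_dwelt, cursor_dwelt):
    """Resolve steps S2-S5 for one frame.

    gaze_set / cursor_set: trigger set under the gaze focus / virtual cursor
    (or None when that input is off every set).
    gaze_dwelt / cursor_dwelt: whether each input has already rested on its
    set for the corresponding set time."""
    if gaze_set is not None and gaze_set is cursor_set:
        return "show_label"    # S4: both inputs agree -> label immediately
    if cursor_set is not None and cursor_dwelt:
        return "show_label"    # S3: cursor alone, after its longer dwell
    if gaze_set is not None and gaze_dwelt:
        return "highlight"     # S2: gaze alone only highlights
    return "idle"              # S5: neither input -> original display state
```

For example, `label_state(ts, ts, False, False)` returns `"show_label"` at once, reflecting that agreement of the two inputs bypasses both dwell timers.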
The labelable content includes graphics, animations, videos, text and icons.
In S2, the set time for the gaze focus to remain on the trigger set is 3-5 s and the set duration of the highlight is 1-3 s; after the highlight effect is hidden, the trigger set accepts highlight wake-up by the gaze focus again only after 10-15 s.
This scheme takes the user's reading or viewing speed into account: a gaze dwell of around 4 s avoids the influence of brief line-of-sight drift without unduly delaying the labeling response, while the sleep time after the highlight effect is hidden prevents repeated highlighting from distracting a user who is mainly viewing a passage of display content.
In S2, highlighted content can be converted immediately into displayed label content by a remote operation device. The principle is that while the display content is in the highlighted state, an active labeling operation is issued from the remote operation device by a press or a gesture, so that labeling can be completed conveniently and quickly.
In S3, the set time for the virtual cursor to stay on the trigger set is 5-8 s and the set time for it to leave is 1-3 s. The cursor's trigger time is slightly longer than the gaze trigger time, which prevents a finger posture accidentally captured by the image capturing module from immediately triggering a label and disturbing the line of sight, while the target display content is still labeled after a short delay.
In S3, the captured finger posture information excludes the posture information of the thumb. In normal use, a user reading with finger assistance does not use the thumb: generally only the index and middle fingers are extended, while the ring and little fingers curl. Masking the capture of thumb posture information therefore prevents the thumb from falsely triggering labels on irrelevant display content.
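The thumb masking can be sketched as a simple filter over hand landmarks. The 21-point layout with the thumb at indices 1-4 follows the MediaPipe Hands convention and is an assumption here, since the patent does not specify a landmark model:

```python
# Assumed MediaPipe-style 21-point hand model: index 0 is the wrist and
# indices 1-4 (CMC, MCP, IP, TIP joints) belong to the thumb.
THUMB_IDS = {1, 2, 3, 4}

def mask_thumb(landmarks):
    """Drop thumb points so thumb posture can never land on a trigger set.

    landmarks: list of (idx, x, y) tuples from the hand-tracking stage."""
    return [(i, x, y) for i, x, y in landmarks if i not in THUMB_IDS]

hand = [(0, 5, 9), (4, 2, 3), (8, 7, 1)]   # wrist, thumb tip, index fingertip
print(mask_thumb(hand))   # → [(0, 5, 9), (8, 7, 1)]
```

Only the surviving points are then considered when forming the virtual cursor, so a thumb resting over a trigger set produces no reaction.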
As shown in fig. 2, an AR labeling system based on display content includes a display module, a system processing module, a signal module, an eyeball tracking module, and an image capturing module;
the display module comprises a transparent screen and is used for superimposing AR label content on display content in the real world;
the system processing module comprises control logic and associated computer memory, and is used for receiving and processing signals from the sensors and providing display signals to the display module for making AR labels;
the signal module comprises at least two of a 5G network communication module, a wireless module, a Bluetooth module or an infrared module, and is used for external signal connections, including querying label material over the network, exchanging data with an information terminal, and receiving instructions from the remote operation device;
the eyeball tracking module is used for tracking the gaze focus of the user's eyes in real time, converting it into coordinate signals and transmitting them to the system processing module; the main hardware options are infrared devices and image acquisition devices, and to keep the system compact an infrared device is preferred, which extracts features by actively projecting beams such as infrared light onto the iris, offering high precision with mature technology;
the image capturing module is used for extracting display content and capturing finger posture information; the extracted display content is converted into processable signals and the captured finger posture information into coordinate signals, both transmitted to the system processing module.
The system further comprises a remote operation device connected to the signal module by a wireless signal. The remote operation device is a hand-worn wearable, for example a smart ring or a smart bracelet/watch; remote instruction operations, such as directly showing a label on highlighted display content, are performed by pressing or tapping the device or by gesture actions.
The foregoing shows and describes the general principles and principal features of the present invention and its advantages. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above, and that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims. The scope of the invention is defined by the appended claims and their equivalents.
Claims (9)
1. An AR labeling method based on display content, characterized in that it comprises the following steps:
S1, analyzing the display content to be labeled, extracting the labelable content within it for AI sorting, automatically dividing the labelable content into a plurality of trigger sets according to its completeness, and preloading labels for the set content in the background over a network connection;
S2, capturing the user's gaze focus with the device's eyeball tracking module; when the time the gaze focus remains on a trigger set exceeds a set time, highlighting the display content mapped by that trigger set, the highlight effect being hidden after it has lasted for a set time;
S3, capturing finger posture information with the device's image capturing module and forming an invisible virtual cursor at the fingertip; when the virtual cursor stays on a trigger set, the label for that trigger set enters a preloading state, the label content being loaded and displayed after the cursor has stayed for a set time and hidden after the virtual cursor has been away from the trigger set for a set time;
S4, when the gaze focus captured by the eyeball tracking module and the virtual cursor are both concentrated on one trigger set, immediately displaying the label content in the area near the display content mapped by that trigger set;
S5, when one of the gaze focus and the virtual cursor leaves the trigger set, repeating step S2 or S3; when both leave the trigger set, the display content returns to its original state.
2. The AR labeling method based on display content of claim 1, wherein: the labelable content comprises graphics, animations, videos, text and icons.
3. The AR labeling method based on display content of claim 1, wherein: in S2, the set time for the gaze focus to remain on the trigger set is 3-5 s, and the set duration of the highlight is 1-3 s.
4. The AR labeling method based on display content of claim 1, wherein: in S2, after the highlight effect is hidden, the trigger set accepts highlight wake-up by the gaze focus again only after 10-15 s.
5. The AR labeling method based on display content of claim 1, wherein: in S2, highlighted content can be converted immediately into displayed label content by a remote operation device.
6. The AR labeling method based on display content of claim 1, wherein: in S3, the set time for the virtual cursor to stay on the trigger set is 5-8 s, and the set time for the virtual cursor to leave the trigger set is 1-3 s.
7. The AR labeling method based on display content of claim 1, wherein: in S3, the captured finger posture information excludes the posture information of the thumb.
8. An AR labeling system based on display content, characterized in that it comprises a display module, a system processing module, a signal module, an eyeball tracking module and an image capturing module;
the display module comprises a transparent screen and is used for superimposing AR label content on display content in the real world;
the system processing module comprises control logic and associated computer memory, and is used for receiving and processing signals from the sensors and providing display signals to the display module for making AR labels;
the signal module comprises at least two of a 5G network communication module, a wireless module, a Bluetooth module or an infrared module and is used for connecting to external signals;
the eyeball tracking module is used for tracking the gaze focus of the user's eyes in real time, converting it into coordinate signals and transmitting them to the system processing module;
the image capturing module is used for extracting display content and capturing finger posture information; the extracted display content is converted into processable signals and the captured finger posture information into coordinate signals, both transmitted to the system processing module.
9. The AR labeling system based on display content of claim 8, wherein: the system further comprises a remote operation device, comprising a smart ring or a smart bracelet/watch, connected to the signal module by a wireless signal.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111332392.8A CN114115532B (en) | 2021-11-11 | 2021-11-11 | AR labeling method and system based on display content |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114115532A true CN114115532A (en) | 2022-03-01 |
CN114115532B CN114115532B (en) | 2023-09-29 |
Family
ID=80378242
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111332392.8A Active CN114115532B (en) | 2021-11-11 | 2021-11-11 | AR labeling method and system based on display content |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114115532B (en) |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130050432A1 (en) * | 2011-08-30 | 2013-02-28 | Kathryn Stone Perez | Enhancing an object of interest in a see-through, mixed reality display device |
US20140092014A1 (en) * | 2012-09-28 | 2014-04-03 | Sadagopan Srinivasan | Multi-modal touch screen emulator |
US20150293585A1 (en) * | 2014-04-09 | 2015-10-15 | Hyundai Motor Company | System and method for controlling heads up display for vehicle |
WO2016064073A1 (en) * | 2014-10-22 | 2016-04-28 | 윤영기 | Smart glasses on which display and camera are mounted, and a space touch inputting and correction method using same |
CN106814854A (en) * | 2016-12-29 | 2017-06-09 | 杭州联络互动信息科技股份有限公司 | A kind of method and device for preventing maloperation |
US10061352B1 (en) * | 2017-08-14 | 2018-08-28 | Oculus Vr, Llc | Distributed augmented reality system |
CN108829239A (en) * | 2018-05-07 | 2018-11-16 | 北京七鑫易维信息技术有限公司 | Control method, device and the terminal of terminal |
CN109298780A (en) * | 2018-08-24 | 2019-02-01 | 百度在线网络技术(北京)有限公司 | Information processing method, device, AR equipment and storage medium based on AR |
CN110187855A (en) * | 2019-05-28 | 2019-08-30 | 武汉市天蝎科技有限公司 | The intelligent adjusting method for avoiding hologram block vision of near-eye display device |
KR20190128962A (en) * | 2018-05-09 | 2019-11-19 | 서강대학교산학협력단 | METHOD AND WEARABLE DISPLAY APPARATUS FOR PROVIDING eBOOK BASED ON AUGMENTED REALLITY |
CN111931579A (en) * | 2020-07-09 | 2020-11-13 | 上海交通大学 | Automatic driving assistance system and method using eye tracking and gesture recognition technology |
CN111949131A (en) * | 2020-08-17 | 2020-11-17 | 陈涛 | Eye movement interaction method, system and equipment based on eye movement tracking technology |
WO2021073743A1 (en) * | 2019-10-17 | 2021-04-22 | Huawei Technologies Co., Ltd. | Determining user input based on hand gestures and eye tracking |
CN112817447A (en) * | 2021-01-25 | 2021-05-18 | 暗物智能科技(广州)有限公司 | AR content display method and system |
KR20210073429A (en) * | 2019-12-10 | 2021-06-18 | 한국전자기술연구원 | Integration Interface Method and System based on Eye tracking and Gesture recognition for Wearable Augmented Reality Device |
Also Published As
Publication number | Publication date |
---|---|
CN114115532B (en) | 2023-09-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108351685B (en) | System and method for biomechanically based eye signals for interacting with real and virtual objects | |
US11663784B2 (en) | Content creation in augmented reality environment | |
US10552004B2 (en) | Method for providing application, and electronic device therefor | |
CN103336576B (en) | A kind of moving based on eye follows the trail of the method and device carrying out browser operation | |
WO2017118075A1 (en) | Human-machine interaction system, method and apparatus | |
CN107479691B (en) | Interaction method, intelligent glasses and storage device thereof | |
CN110456907A (en) | Control method, device, terminal device and the storage medium of virtual screen | |
CN107562186B (en) | 3D campus navigation method for emotion operation based on attention identification | |
WO2017112099A1 (en) | Text functions in augmented reality | |
US20240077948A1 (en) | Gesture-based display interface control method and apparatus, device and storage medium | |
CN110442233B (en) | Augmented reality keyboard and mouse system based on gesture interaction | |
CN108027655A (en) | Information processing system, information processing equipment, control method and program | |
CN105068646A (en) | Terminal control method and system | |
CN107450717B (en) | Information processing method and wearable device | |
CN114821753B (en) | Eye movement interaction system based on visual image information | |
CN108829239A (en) | Control method, device and the terminal of terminal | |
CN111766936A (en) | Virtual content control method and device, terminal equipment and storage medium | |
CN106681509A (en) | Interface operating method and system | |
CN114115532B (en) | AR labeling method and system based on display content | |
US11328187B2 (en) | Information processing apparatus and information processing method | |
CN115185365A (en) | Wireless control eye control system and control method thereof | |
CN112433664A (en) | Man-machine interaction method and device used in book reading process and electronic equipment | |
CN111766937B (en) | Virtual content interaction method and device, terminal equipment and storage medium | |
US11586295B2 (en) | Wink gesture control system | |
Jungwirth | Contour-Guided Gaze Gestures: Eye-based Interaction with Everyday Objects |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||