CN109976889B - Multitasking collaborative processing method based on intelligent glasses


Info

Publication number
CN109976889B
CN109976889B (application CN201910230408.0A)
Authority
CN
China
Prior art keywords
intelligent glasses
virtual screen
wearer
mode
pupil
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910230408.0A
Other languages
Chinese (zh)
Other versions
CN109976889A (en)
Inventor
孙涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201910230408.0A priority Critical patent/CN109976889B/en
Publication of CN109976889A publication Critical patent/CN109976889A/en
Application granted granted Critical
Publication of CN109976889B publication Critical patent/CN109976889B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B2027/0178Eyeglass type

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Optics & Photonics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a multitasking collaborative processing method based on intelligent glasses, comprising the following steps: S1, using the intelligent glasses, selecting a required operation mode from preset operation modes; S2, based on the selected operation mode, performing preset trigger operations until a required interface is selected from the virtual screen on the intelligent glasses lens and entered; and S3, when the operation mode needs to be switched, pressing the zeroing button provided on the intelligent glasses and repeating steps S1 and S2 to complete the multitask cooperative processing. The invention largely avoids harm to the wearer's eyes during use of the intelligent glasses, and provides a safer, more convenient method of use in certain environments (text editing, walking, and the like).

Description

Multitasking collaborative processing method based on intelligent glasses
Technical Field
The invention belongs to the technical field of intelligent glasses, and particularly relates to a multitasking collaborative processing method based on intelligent glasses.
Background
In the prior art, electronic products such as intelligent glasses are limited by the size of their screens, so multitasking cannot be realized. With the continuous progress of technology, the screens applied to intelligent glasses have been greatly optimized, but a virtual screen with multiple windows always appears in front of the wearer's eyes, causing considerable inconvenience. Some intelligent glasses can adjust the position of the virtual screen, but the device still cannot perceive the wearer's surroundings, and multi-task cooperative processing cannot be realized.
Disclosure of Invention
In view of these problems, the invention provides a multitask cooperative processing method based on intelligent glasses, which largely avoids harm to the wearer's eyes during use and provides a safer, more convenient method of use in certain environments (text editing, walking, and the like).
To achieve this technical purpose and effect, the invention is realized by the following technical scheme:
a multitasking cooperative processing method based on intelligent glasses comprises the following steps:
s1: selecting a required operation mode from preset operation modes by utilizing intelligent glasses;
s2: based on the selected operation mode, executing preset triggering operation until a required interface is selected from a virtual screen on the intelligent glasses lens, and entering the interface;
s3: when the operation mode is required to be switched, a zeroing button arranged on the intelligent glasses is directly pressed, and the steps S1 and S2 are repeated to finish the cooperative processing of the multiple tasks.
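The three-step flow above can be sketched as a small state machine. This is an illustrative sketch only; the class and method names (`GlassesSession`, `press_zero_button`, and so on) are hypothetical and not taken from the patent:

```python
from enum import Enum

class Mode(Enum):
    WALKING = "walking"
    WORK = "work"
    ENTERTAINMENT = "entertainment"

class GlassesSession:
    """Hypothetical controller mirroring steps S1-S3."""
    def __init__(self):
        self.mode = None
        self.window = None

    def select_mode(self, mode: Mode):
        # S1: choose one of the preset operation modes
        self.mode = mode

    def enter_window(self, window: str):
        # S2: trigger operations until a required interface is entered;
        # in walking mode the virtual screen is not operable
        if self.mode is None:
            raise RuntimeError("no operation mode selected")
        if self.mode is Mode.WALKING:
            raise PermissionError("walking mode: virtual screen cannot be operated")
        self.window = window

    def press_zero_button(self):
        # S3: zeroing button resets the session so S1/S2 can be repeated
        self.mode = None
        self.window = None
```

A caller would cycle through modes by alternating `select_mode`/`enter_window` with `press_zero_button`, matching the repeat of S1 and S2 described above.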
Preferably, before step S1, the method further includes a system security check, specifically:
when the intelligent glasses are powered on, operating the eyeball signal collector provided on the intelligent glasses;
the eyeball signal collector sends the collected eyeball image to the central processor chip, which analyzes whether the wearer's eyeball model matches the stored eyeball model. If it matches, the virtual screen on the intelligent glasses lens enters the main interface; if not, the virtual screen stays on the public interface.
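The security check can be illustrated with a minimal sketch. The patent does not specify how eyeball models are compared, so the feature vector, the cosine-similarity measure, and the threshold below are all assumptions made purely for illustration:

```python
import math

# Hypothetical feature vector of the enrolled (stored) eyeball model
STORED_MODEL = [0.41, 0.83, 0.22, 0.67]

def matches(features, stored=STORED_MODEL, threshold=0.95):
    """Compare a collected eyeball feature vector against the stored model.
    Cosine similarity with a fixed threshold is an assumed matching rule."""
    dot = sum(a * b for a, b in zip(features, stored))
    norm = (math.sqrt(sum(a * a for a in features))
            * math.sqrt(sum(b * b for b in stored)))
    return dot / norm >= threshold

def boot_interface(features):
    # Matched wearers reach the main interface; others stay on the public one.
    return "main" if matches(features) else "public"
```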
Preferably, the selecting of a required operation mode from preset operation modes using the intelligent glasses is specifically:
the wearer selects a required operation mode from the virtual screen on the intelligent glasses lens via the main interface, wherein the operation modes include a walking mode, a working mode, and an entertainment mode.
Preferably, when the selected operation mode is the walking mode, step S2 is specifically:
in the walking mode, the wearer is limited to opening only one window at a time. A virtual screen is established on the intelligent glasses lens that displays the main interface and contains only time information and a message prompt bar, which shows prompt messages; the wearer cannot operate the virtual screen. This largely prevents traffic accidents caused by virtual images in front of the wearer's eyes interfering with the wearer's judgment of the current environment.
Preferably, when the selected operation mode is the working mode or the entertainment mode, step S2 is specifically:
in the working mode or the entertainment mode, the wearer can open multiple windows simultaneously. The intelligent glasses establish a 3-D surround virtual screen, collect the deflection angle of the wearer's head or pupil, and move the virtual screen based on that deflection angle until the required window is entered.
Preferably, the moving of the virtual screen based on the deflection angle of the wearer's pupil in the working mode or the entertainment mode, until the required window is entered, is specifically:
when the intelligent glasses are in the working mode or the entertainment mode, each open window in the virtual screen is arranged around the initial window. The wearer's pupil position is collected in real time by the eyeball signal collector and sent to the central processor chip; when the chip judges that the pupil position is outside the central area of the lens, the virtual screen is moved in the direction of the pupil;
during the movement of the virtual screen, the wearer's pupil position continues to be collected in real time and sent to the central processor chip; when the chip judges that the pupil position has returned to the central area of the lens, the movement is finished, and the required window has been selected from the virtual screen and entered.
Preferably, the moving of the virtual screen based on the deflection angle of the wearer's head, until the required window is entered, is specifically:
when the intelligent glasses are in the working mode or the entertainment mode, all open windows in the virtual screen are arranged around the initial window. The wearer's head position is collected in real time by a gyroscope and sent to the central processor chip, which computes the virtual image presented on the virtual screen at the current head angle and moves the virtual screen to that position. During the movement, the sizes of the different windows on the presented virtual image are compared, and the window occupying the largest proportion of the virtual screen is selected and magnified, whereupon the movement is considered complete.
Preferably, the deflection angle of the virtual screen on the intelligent glasses lens = sensitivity × actual deflection angle of the head or pupil.
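As a worked illustration of this formula; the description later gives a sensitivity range of 0 to 4, and clamping out-of-range values to that interval is an assumption about the intended behavior:

```python
def screen_deflection(actual_deg: float, sensitivity: float) -> float:
    """Deflection of the virtual screen = sensitivity x actual deflection
    of the head or pupil. Sensitivity is stated to range from 0 to 4;
    clamping to that range is assumed."""
    k = max(0.0, min(4.0, sensitivity))
    return k * actual_deg
```

For example, a 10° head turn at sensitivity 2 deflects the virtual screen by 20°.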
Compared with the prior art, the invention has the following beneficial effects:
the intelligent-glasses-based multitask cooperative processing method largely avoids harm to the wearer's eyes during use and provides a safer, more convenient method of use in certain environments (text editing, walking, and the like).
Drawings
Fig. 1 is a flow chart of a multi-task cooperative processing method based on intelligent glasses according to an embodiment of the invention.
Detailed Description
The present invention will be described in further detail with reference to the following examples in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the detailed description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the invention.
The principle of application of the invention is described in detail below with reference to the accompanying drawings.
Example 1
All origin points in the invention denote the center points of the intelligent glasses lenses. As shown in Fig. 1, an embodiment of the invention provides a multitask cooperative processing method based on intelligent glasses, comprising the following steps:
S1: the wearer puts on the intelligent glasses, which are provided with a zeroing button, a power button, a gyroscope, an eyeball signal collector, and a central processor chip. Pressing the power button powers the glasses on; during power-on, the gyroscope and the eyeball signal collector are both started, and a zero point is set through the central processor chip to form the virtual screen. A required operation mode is then selected from the preset operation modes using the intelligent glasses;
in a specific implementation manner of the embodiment of the present invention, the selecting, by using the smart glasses, a desired operation mode from preset operation modes is specifically:
the wearer selects a desired operating mode from a virtual screen on the smart eyeglass lens using the main interface, wherein the operating modes include a walking mode, an operating mode, and an entertainment mode.
S2: based on the selected operation mode, performing preset trigger operations until a required interface is selected from the virtual screen on the intelligent glasses lens and entered;
in a specific implementation manner of the embodiment of the present invention, when the selected operation mode is a walking mode, the step S2 specifically includes:
in a walking mode, a wearer is limited to only open one window at a time, a virtual screen is established on the lens of the intelligent glasses, a main interface is displayed on the virtual screen, and the virtual screen only comprises time information and a message prompt bar, wherein the message prompt bar is used for displaying prompt messages, the wearer cannot operate the virtual screen, a plurality of display windows exist on the virtual screen on the intelligent glasses, so that judgment of the wearer on the current environment is influenced, and traffic accidents are avoided to a great extent;
in another specific implementation manner of the embodiment of the present invention, when the selected operation mode is the operation mode or the entertainment mode, the step S2 specifically includes:
in the working mode or the entertainment mode, a wearer can simultaneously open a plurality of windows, the intelligent glasses establish a 3-D surrounding type virtual screen, and the virtual screen is moved until the user enters a required window by collecting the deflection angle of the head or the pupil of the wearer and based on the deflection angle of the head or the pupil of the wearer.
More specifically, the moving of the virtual screen based on the deflection angle of the wearer's head or pupil in the working mode or the entertainment mode, until the required window is entered, comprises the following steps:
when the intelligent glasses are in the working mode or the entertainment mode, each open window in the virtual screen is arranged around the initial window. The wearer's pupil position is collected in real time by the eyeball signal collector and sent to the central processor chip; when the chip judges that the pupil position is outside the central area of the lens, the virtual screen is moved in the direction of the pupil. In the working mode, the wearer is assumed by default to operate the window opened first after power-on; when the eyeball signal collector detects that the wearer blinks three times within 1.5 seconds while looking at a window in the central area of the current lens, the wearer is considered to be operating that window thereafter. In the entertainment mode, the wearer is assumed by default to operate the window in the central area of the intelligent glasses lens;
during the movement of the virtual screen, the wearer's pupil position continues to be collected in real time and sent to the central processor chip; when the chip judges that the pupil position has returned to the central area of the lens, the movement is finished, and the required window has been selected from the virtual screen and entered.
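The pupil-driven movement loop and the three-blink selection described above can be sketched as follows. The normalized lens coordinates, the center-area radius, and the per-sample step gain are all assumed values introduced for illustration, not parameters from the patent:

```python
CENTER_RADIUS = 0.2  # assumed radius of the lens central area (normalized coords)

def in_center(pupil):
    """True if the pupil position lies inside the lens central area."""
    x, y = pupil
    return x * x + y * y <= CENTER_RADIUS ** 2

def track(pupil_samples, screen_pos=(0.0, 0.0), gain=0.05):
    """Move the virtual screen toward the pupil while it sits outside the
    lens central area; stop once the pupil returns to the center.
    `gain` is an assumed step size per sample."""
    x, y = screen_pos
    for px, py in pupil_samples:
        if in_center((px, py)):
            break              # pupil back in the central area: movement done
        x += gain * px         # step the screen toward the pupil direction
        y += gain * py
    return (x, y)

def blink_selects(blink_times_ms, window_ms=1500, needed=3):
    """Work mode: three blinks within 1.5 s select the centered window."""
    t = sorted(blink_times_ms)
    return any(t[i + needed - 1] - t[i] <= window_ms
               for i in range(len(t) - needed + 1))
```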
Alternatively, when the intelligent glasses are in the working mode or the entertainment mode, each open window in the virtual screen is arranged around the initial window. The wearer's head position is collected in real time by the gyroscope and sent to the central processor chip, which computes the virtual image presented on the virtual screen at the current head angle and moves the virtual screen to that position. During the movement, the sizes of the different windows on the presented virtual image are compared, and the window occupying the largest proportion of the virtual screen is selected and magnified, whereupon the movement is considered complete;
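The "largest proportion wins" rule of this gyroscope variant can be illustrated with simple rectangle-overlap arithmetic. Representing windows and the viewport as `(x0, y0, x1, y1)` rectangles is an assumption for the sketch:

```python
def visible_area(win, viewport):
    """Overlap area of a window rectangle with the viewport,
    both given as (x0, y0, x1, y1)."""
    dx = min(win[2], viewport[2]) - max(win[0], viewport[0])
    dy = min(win[3], viewport[3]) - max(win[1], viewport[1])
    return max(0, dx) * max(0, dy)

def pick_window(windows, viewport):
    """Select the open window occupying the largest proportion of the
    virtual screen; that window is then magnified and the movement is
    considered complete."""
    return max(windows, key=lambda w: visible_area(w, viewport))
```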
the virtual deflection angle of the virtual image on the smart eyeglass lens is determined in relation to the actual deflection position of the head or pupil by:
when the head or pupil of the wearer is deflected by θ°, and the sensitivity is 1, the virtual image projected on the smart eyeglass lens will be deflected by θ°. The deflection of the virtual image on the mirror plate is related to the sensitivity:
deflection angle of virtual screen projected on the lens = sensitivity (sensitivity ranges from 0-4) x actual deflection angle of head or pupil;
S3: when the operation mode needs to be switched, pressing the zeroing button provided on the intelligent glasses, and repeating steps S1 and S2 to complete the multitask cooperative processing.
Example 2
Based on the same inventive concept as Embodiment 1, this embodiment differs from Embodiment 1 in that:
in order to improve the system security of the intelligent glasses, step S1 further includes a system security check, specifically:
when the intelligent glasses are powered on, operating the eyeball signal collector provided on the intelligent glasses;
the eyeball signal collector sends the collected eyeball image to the central processor chip, which analyzes whether the wearer's eyeball model matches the stored eyeball model. If it matches, the virtual screen on the intelligent glasses lens enters the main interface; if not, the virtual screen stays on the public interface.
Example 3
The multitask cooperative processing method based on intelligent glasses of this embodiment realizes multitask cooperative processing based on rotation of the wearer's head, and specifically comprises the following steps:
Step 1: after the intelligent glasses are powered on, an origin is set. The line connecting the center points of the two lenses is taken as the x-axis; the line connecting a point on the upper-left beam (upper-left frame) perpendicular to the center point of the left lens is taken as the y-axis; the plane enclosed by the x- and y-axes is taken as the first observation plane; and the line perpendicular to the first observation plane is taken as the z-axis. A 3-D surround virtual screen is then established;
Step 2: the gyroscope is started and begins to work, uploading the real-time rotation angle of the wearer's head to the central processor chip for processing;
Step 3: after the central processor chip obtains the data, it computes the virtual image on the virtual screen presented at the wearer's current head angle and simultaneously moves the screen to that position;
Step 4: during the movement of the virtual screen, the sizes of the different windows on the presented virtual image are compared, and the window occupying the largest proportion of the virtual screen is selected and magnified, completing the movement;
Step 5: by pressing the sensitivity-setting button, a head turn of x° in real life deflects the virtual screen by k × x° (x° is the head rotation angle, and k is the sensitivity);
Step 6: when the zeroing button is pressed, the coordinate frame of step 1 is re-established (x-axis through the two lens centers, y-axis through the point on the upper-left beam perpendicular to the left lens center, first observation plane enclosed by the x- and y-axes, z-axis perpendicular to that plane), and the 3-D surround virtual screen is rebuilt.
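The coordinate frame of steps 1 and 6 can be sketched as follows, assuming the two lens centers and the upper-left frame point are available as 3-D points. This is pure-Python vector math; the function and argument names are hypothetical:

```python
def lens_frame(left_center, right_center, top_left_point):
    """Build the coordinate frame described above: the x-axis joins the two
    lens centers, the y-axis joins the left lens center to a point on the
    upper-left beam, and the z-axis is normal to the resulting first
    observation plane. Returns unit vectors (x, y, z)."""
    def sub(a, b):
        return tuple(ai - bi for ai, bi in zip(a, b))

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def unit(a):
        n = sum(ai * ai for ai in a) ** 0.5
        return tuple(ai / n for ai in a)

    x = unit(sub(right_center, left_center))   # through both lens centers
    y = unit(sub(top_left_point, left_center)) # up the left frame
    z = unit(cross(x, y))                      # normal to the observation plane
    return x, y, z
```

Pressing the zeroing button (step 6) would simply call this again with the current geometry to rebuild the frame and the 3-D surround virtual screen.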
Example 4
This embodiment provides a multitask cooperative processing method based on intelligent glasses that realizes multitask cooperative processing based on rotation of the wearer's eyeballs, and specifically comprises the following steps:
Step 1: after the intelligent glasses are powered on, an origin is set. The line connecting the center points of the two lenses is taken as the x-axis; the line connecting a point on the upper-left beam (upper-left frame) perpendicular to the center point of the left lens is taken as the y-axis; the plane enclosed by the x- and y-axes is taken as the first observation plane; and the line perpendicular to the first observation plane is taken as the z-axis. A 3-D surround virtual screen is then established;
Step 2: while the wearer holds the head still, the eyeball signal collector is started, the wearer's eyeball is modeled in 3-D, and the position of the pupil is judged in real time;
Step 3: when a change in eyeball position is detected, the virtual screen is moved in the direction, away from the screen center, of the point on the virtual image at which the wearer's pupil is looking;
Step 4: after the screen has moved to the desired position, step 2 is repeated; if the wearer's pupil position is in the central area, the movement is considered complete;
Step 5: by pressing the sensitivity-setting button, a pupil deflection of x° in real life deflects the virtual screen by k × x° (x is the pupil deflection angle, and k is the sensitivity);
Step 6: when the zeroing button is pressed, the coordinate frame of step 1 is re-established, and the 3-D surround virtual screen is rebuilt.
The foregoing has shown and described the basic principles, main features, and advantages of the present invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above; the embodiments and descriptions merely illustrate its principles, and various changes and modifications may be made without departing from the spirit and scope of the invention, which is defined by the appended claims and their equivalents.

Claims (3)

1. A multitask cooperative processing method based on intelligent glasses, characterized by comprising the following steps:
S1: using the intelligent glasses, selecting a required operation mode from preset operation modes;
S2: based on the selected operation mode, performing preset trigger operations until a required interface is selected from the virtual screen on the intelligent glasses lens and entered;
S3: when the operation mode needs to be switched, pressing the zeroing button provided on the intelligent glasses, and repeating steps S1 and S2 to complete the multitask cooperative processing;
wherein the selecting of a required operation mode from preset operation modes using the intelligent glasses is specifically:
the wearer selects a required operation mode from the virtual screen on the intelligent glasses lens via the main interface, wherein the operation modes include a walking mode, a working mode, and an entertainment mode;
when the selected operation mode is the walking mode, step S2 is specifically:
in the walking mode, the wearer is limited to opening only one window at a time; a virtual screen is established on the intelligent glasses lens that displays the main interface and contains only time information and a message prompt bar, and the wearer cannot operate the virtual screen;
when the selected operation mode is the working mode or the entertainment mode, step S2 is specifically:
in the working mode or the entertainment mode, the wearer can open multiple windows simultaneously; the intelligent glasses establish a 3-D surround virtual screen, collect the deflection angle of the wearer's pupil, and move the virtual screen based on that deflection angle until the required window is entered;
wherein the moving of the virtual screen based on the deflection angle of the wearer's pupil, until the required window is entered, is specifically:
when the intelligent glasses are in the working mode or the entertainment mode, each open window in the virtual screen is arranged around the initial window; the wearer's pupil position is collected in real time by the eyeball signal collector and sent to the central processor chip, and when the chip judges that the pupil position is outside the central area of the lens, the virtual screen is moved in the direction of the pupil;
during the movement of the virtual screen, the wearer's pupil position continues to be collected in real time and sent to the central processor chip; when the chip judges that the pupil position has returned to the central area of the lens, the movement is finished, and the required window has been selected from the virtual screen and entered.
2. The multitask cooperative processing method based on intelligent glasses according to claim 1, characterized in that step S1 further includes a system security check, specifically:
when the intelligent glasses are powered on, operating the eyeball signal collector provided on the intelligent glasses;
the eyeball signal collector sends the collected eyeball image to the central processor chip, which analyzes whether the wearer's eyeball model matches the stored eyeball model; if it matches, the virtual screen on the intelligent glasses lens enters the main interface, and if not, the virtual screen stays on the public interface.
3. The multitask cooperative processing method based on intelligent glasses according to claim 1, characterized in that: deflection angle of the virtual screen on the intelligent glasses lens = sensitivity × actual deflection angle of the head or pupil.
CN201910230408.0A 2019-03-26 2019-03-26 Multitasking collaborative processing method based on intelligent glasses Active CN109976889B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910230408.0A CN109976889B (en) 2019-03-26 2019-03-26 Multitasking collaborative processing method based on intelligent glasses


Publications (2)

Publication Number Publication Date
CN109976889A (en) 2019-07-05
CN109976889B (en) 2024-01-23

Family

ID=67080546

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910230408.0A Active CN109976889B (en) 2019-03-26 2019-03-26 Multitasking collaborative processing method based on intelligent glasses

Country Status (1)

Country Link
CN (1) CN109976889B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104391574A (en) * 2014-11-14 2015-03-04 京东方科技集团股份有限公司 Sight processing method, sight processing system, terminal equipment and wearable equipment
WO2016064073A1 (en) * 2014-10-22 2016-04-28 윤영기 Smart glasses on which display and camera are mounted, and a space touch inputting and correction method using same
CN107272896A (en) * 2017-06-13 2017-10-20 北京小米移动软件有限公司 The method and device switched between VR patterns and non-VR patterns
CN107506236A (en) * 2017-09-01 2017-12-22 上海智视网络科技有限公司 Display device and its display methods
CN107515669A (en) * 2016-06-17 2017-12-26 北京小米移动软件有限公司 Display methods and device
CN107957843A (en) * 2017-12-20 2018-04-24 维沃移动通信有限公司 A kind of control method and mobile terminal


Also Published As

Publication number Publication date
CN109976889A (en) 2019-07-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant