EP4049118A1 - Eye-based activation and tool selection systems and methods - Google Patents

Eye-based activation and tool selection systems and methods

Info

Publication number
EP4049118A1
Authority
EP
European Patent Office
Prior art keywords
eye
user
virtual
scrolling
viewport
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP20878692.1A
Other languages
English (en)
French (fr)
Other versions
EP4049118A4 (de)
Inventor
Dominic Philip HAINE
Scott Herz
Renaldi Winoto
Abhishek Bhat
Ramin Mirjalili
Joseph Czompo
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tectus Corp
Original Assignee
Tectus Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 16/662,842 (US10901505B1)
Priority claimed from US 16/940,152 (US11662807B2)
Application filed by Tectus Corp filed Critical Tectus Corp
Publication of EP4049118A1
Publication of EP4049118A4

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0093Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B27/0172Head mounted characterised by optical features
    • GPHYSICS
    • G02OPTICS
    • G02CSPECTACLES; SUNGLASSES OR GOGGLES INSOFAR AS THEY HAVE THE SAME FEATURES AS SPECTACLES; CONTACT LENSES
    • G02C11/00Non-optical adjuncts; Attachment thereof
    • G02C11/10Electronic devices other than hearing aids
    • GPHYSICS
    • G02OPTICS
    • G02CSPECTACLES; SUNGLASSES OR GOGGLES INSOFAR AS THEY HAVE THE SAME FEATURES AS SPECTACLES; CONTACT LENSES
    • G02C7/00Optical parts
    • G02C7/02Lenses; Lens systems ; Methods of designing lenses
    • G02C7/04Contact lenses for the eyes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/163Wearable computers, e.g. on a belt
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482Interaction with lists of selectable items, e.g. menus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0485Scrolling or panning

Definitions

  • the present disclosure generally relates to eye-controlled systems and methods for activating tools within a virtual environment, and more particularly, a contact-lens system that allows a user to activate and select virtual tools based on eye-movement that is tracked by sensors within a contact lens worn by the user.
  • The growth of AR/VR technologies across a large and diverse set of markets is well understood by one of skill in the art.
  • Markets such as gaming, media, search, and information management implement a variety of different AR/VR products to allow an individual to interact with a virtual environment.
  • These AR/VR products provide an individual a rich and dynamic platform in which the user can retrieve information, view media content, navigate virtual scenes and interact with other individuals in a manner unique to the AR/VR environment. It is important that these AR/VR products maintain a user-friendly experience throughout their use and avoid overloading a user with too much content and information, while concurrently managing the way in which the user interacts with the virtual environment; a task that is oftentimes difficult given the constraints of today's AR/VR products.
  • While AR/VR technologies offer users the ability to uniquely interact with virtual content in a virtual medium and enjoy an immersive user experience, these technologies are not without limitations.
  • These technologies are oftentimes constrained by the way an individual can interact with the virtual environment.
  • many AR/VR products rely on hand-gestures, hand controllers, or other types of movement that must be translated into the virtual environment itself.
  • These interactive movements are potentially obtrusive, hard to learn, tiring over time, inconvenient to use, and not available to those without facile motion of their arms or hands. Further, such movements may appear awkward in a social context, thus, negatively impacting the overall experience.
  • FIG. 1A illustrates an exemplary eye-mounted display (“EMD”) system according to embodiments of the present disclosure.
  • FIG. 1B illustrates an exemplary contact lens component for an EMD system according to embodiments of the present disclosure.
  • FIG. 1C illustrates an exemplary electronic contact lens according to embodiments of the present disclosure.
  • FIG. 2A illustrates an exemplary electronic contact lens comprising motion sensors according to embodiments of the present disclosure.
  • FIG. 2B shows a polar coordinate system that serves as reference frame for components in the electronic contact lens shown in FIG. 1.
  • FIG. 2C and FIG. 2D illustrate various conventions for reference frames for the electronic contact lens shown in FIG. 1.
  • FIG. 3 illustrates the concept of Span of Eccentricity (SoE) according to embodiments of the present disclosure.
  • FIG. 4A illustrates projecting onto the retina the visible portion of a virtual image according to embodiments of the present disclosure.
  • FIG. 4B and FIG. 4C illustrate the concept of SoE using a flashlight analogy.
  • FIG. 5A illustrates a “virtual tool activation chart” comprising an exemplary activation threshold according to embodiments of the present disclosure.
  • FIG. 5B illustrates a method for using an activation threshold to select a tool according to embodiments of the present disclosure.
  • FIG. 5C illustrates a method for displaying a selected tool according to embodiments of the present disclosure.
  • FIG. 5D illustrates a method for using an auxiliary device to select several tools for display according to embodiments of the present disclosure.
  • FIG. 5E illustrates a set of exemplary angles for facilitating an activation according to embodiments of the present disclosure.
  • FIG. 5F illustrates an exemplary method for calibrating a user’s eye range of motion according to embodiments of the present disclosure.
  • FIG. 5G illustrates an exemplary process for automatically adjusting activation sensitivity according to embodiments of the present disclosure.
  • FIG. 6A-FIG. 6C illustrate exemplary methods for measuring eye position in an eye socket using capacitive skin sensors in a contact lens according to embodiments of the present disclosure.
  • FIG. 7 illustrates an exemplary method for activating tools by looking at a periphery according to embodiments of the present disclosure.
  • FIG. 8 illustrates an exemplary guide feature according to embodiments of the present disclosure.
  • FIG. 9 illustrates how an exemplary tool in a hierarchical tool set may reveal the presence of selectable sub-tools according to embodiments of the present disclosure.
  • FIG. 10A - FIG. 10D illustrate an exemplary method for highlighting tools according to embodiments of the present disclosure.
  • FIG. 11 illustrates exemplary methods for interpreting a user’s eye motion as an activation or tentative activation of the system according to embodiments of the present disclosure.
  • FIG. 12 illustrates an eye-based activation and tool selection system according to embodiments of the present disclosure.
  • FIG. 13 illustrates a process for using an eye-based activation and tool selection system according to embodiments of the present disclosure.
  • FIG. 14 illustrates another process for using an eye-based activation and tool selection system according to embodiments of the present disclosure.
  • FIG. 15 illustrates revealing nearby virtual objects using a trigger in the visible section of a virtual scene according to embodiments of the present disclosure.
  • FIG. 16A illustrates a virtual object that utilizes a connector according to embodiments of the present disclosure.
  • FIG. 16B illustrates a virtual object that, without utilizing a visible connector, reveals the presence of an otherwise not visible virtual object according to embodiments of the present disclosure.
  • FIG. 16C illustrates a proxy or pointer with a connector according to embodiments of the present disclosure.
  • FIG. 16D illustrates a proxy or pointer without a connector according to embodiments of the present disclosure.
  • FIG. 16E illustrates items that serve as hints for the presence of non-visible objects according to embodiments of the present disclosure.
  • FIG. 17 illustrates an exemplary arrangement of virtual objects in a virtual scene according to embodiments of the present disclosure.
  • FIG. 18A and FIG. 18B illustrate a method for using a wearer's gaze to reveal objects in an exemplary virtual scene according to embodiments of the present disclosure.
  • FIG. 19 illustrates a method for revealing virtual objects in a virtual space according to embodiments of the present disclosure.
  • FIG. 20A and FIG. 20B illustrate a method for visually navigating virtual objects according to embodiments of the present disclosure.
  • FIG. 21 illustrates another method for visually navigating virtual information according to embodiments of the present disclosure.
  • FIG. 22 illustrates a virtual glance revealer for navigating virtual objects according to embodiments of the present disclosure.
  • FIG. 23 illustrates a process for using a virtual glance revealer system to navigate virtual tools according to embodiments of the present disclosure.
  • FIG. 24 illustrates a virtual scene partitioned into a plurality of zones according to various embodiments of the disclosure.
  • FIG. 25 illustrates an exemplary virtual tool ring relative to a range of eye motion of a user within a virtual scene according to various embodiments of the disclosure.
  • FIG. 26A illustrates a user-selected virtual tool that generates a peek window within an inner area of the tool ring according to embodiments of the present disclosure.
  • FIG. 26B illustrates a user-selected virtual window related to the peek window according to embodiments of the present disclosure.
  • FIG. 27A illustrates an exemplary user-selected virtual clock tool that generates a time & calendar peek window within an inner area of the tool ring according to embodiments of the present disclosure.
  • FIG. 27B illustrates an exemplary calendar related to the time & calendar peek window according to various embodiments of the present disclosure.
  • FIG. 28A illustrates an exemplary user-selected virtual music tool that generates a simple music control within an inner area of the tool ring according to embodiments of the present disclosure.
  • FIG. 28B illustrates a detailed music control related to the simple music control according to various embodiments of the present disclosure.
  • FIG. 29A illustrates an exemplary user-selected virtual text tool that generates a simple book/document list within an inner area of the tool ring according to embodiments of the present disclosure.
  • FIG. 29B illustrates an exemplary virtual text window that provides a user a document or text according to embodiments of the present disclosure.
  • FIG. 30 illustrates an eye-tracking user interface manager according to various embodiments of the present disclosure.
  • FIG. 31 illustrates a viewport with scrolling zones for image scrolling within the viewport according to embodiments of the present disclosure.
  • FIG. 32 illustrates a viewport with scrolling zones for text scrolling within the viewport according to embodiments of the present disclosure.
  • FIG. 33 illustrates a process for scrolling in a viewport according to embodiments of the present disclosure.
  • FIG. 34A illustrates a viewport with a zooming area for zooming via user gazing according to embodiments of the present disclosure.
  • FIG. 34B illustrates a viewport with zoomed scene area according to embodiments of the present disclosure.
  • FIG. 35 illustrates a process for virtual scene zooming according to embodiments of the present disclosure.
  • FIG. 36 illustrates a process for virtual scene unzooming according to embodiments of the present disclosure.
  • Embodiments of the present invention allow a user to wear dynamic contact lenses that provide a virtual framework for the user to retrieve information and interact with his/her environment.
  • a user may select one or more tools within a virtual environment generated by the contact lenses.
  • This selection of virtual tools is designed to allow a user to select and activate a virtual tool by performing pre-defined eye movements that are recognized by the system.
  • the selection of virtual tools may also include the use of an auxiliary device, such as a watch, piece of jewelry, or other device external to the contact lens, which allows the user to identify to the system an intent to activate one or more tools.
  • This unique way of activating virtual tools allows a user to interact with a virtual environment, generated by contact lenses, in a way that is not blatantly obvious to others proximate to the user.
  • FIGS. 1A and 1B illustrate an exemplary eye-mounted display ("EMD") system according to embodiments of the present disclosure.
  • the EMD system 102 allows a user to interact with virtual objects, including virtual tools and windows, using eye movement that is translated into a virtual scene.
  • the EMD system 102 may be a contact lens 140, such as a scleral contact lens designed to be fixed on the wearer’s eyeball.
  • Embedded on the contact lens 140 may be a display 104, sensors, power components, communications devices, control systems, and other components that provide various functions within the system.
  • the display 104 may be implemented as a miniature video projector that projects images on the part of the wearer’s retina centered on the fovea; the highly sensitive and high-resolution region of the retina that is referred to when the eye directly gazes or inspects an object.
  • the display 104 may be defined as a femtoprojector 120 within FIG. 1B, which is described within certain US applications and patents identified below.
  • Sensors may comprise any type of motion sensors 125, such as accelerometers, magnetometers, and gyroscopes, and image sensors (such as a camera) that may be used for eye-tracking functionality.
  • the power, communications, and control systems comprise coils that enable inductive power transfer, or an energy storage device, such as a battery 165, that can deliver sufficient energy to operate EMD system 102 for a period of time.
  • a power circuit 170 may also be provided that regulates and controls power to the various devices on the system.
  • Various EMD systems may also include transceivers 115 for communication with internal and/or external devices, and various controllers that control circuits and sub circuits.
  • the user of an eye-controlled EMD system 102 may use any combination of eye movements and other signals to interact with a virtual scene. This interaction may be supplemented with various auxiliary devices, such as a head-mounted head-tracking device, a smartphone, a hand-held controller, other body sensors, electronic jewelry, or any other type of device that can communicate with the EMD system.
  • Some functions of EMD system 102 may equally be performed, for example, by an accessory device (not shown in FIG. 1) that may be communicatively coupled with EMD system 102 and, in embodiments, provides power via inductive coupling.
  • Exemplary accessory devices, EMDs, and their functions and components are described in greater detail in US Patent Applications Ser. No. 15/959,169, filed on April 21, 2018, entitled “Power Generation Necklaces that Mitigate Energy Absorption in the Human Body,” listing inventors Miller et al.; US Patent Application Ser. No. 15/966,481, filed on April 30, 2018, entitled “Multi-Coil Field Generation In An Electronic Contact Lens System,” listing inventors Owens et al.; US Patent Application Ser. No.
  • EMD system 102 manages how, where, and when virtual objects, such as virtual tools, peek windows and virtual windows in a virtual scene are activated, selected, displayed and dismissed within a given coordinate space.
  • the EMD system 102 controls the content and layout of a virtual scene, including the graphical representation of the virtual objects on the display, according to the user's eye movement. This control allows a user to efficiently interact with virtual objects to activate, select and dismiss tools and windows in an organized and structured manner within the virtual scene.
  • eye-movements may be tracked, estimated (e.g., using a Kalman filter algorithm) and/or predicted based on motion, image, sensor data or a combination thereof.
  • Data derived from such eye movements may include timing and sequences of saccadic movements, eye direction (e.g., eye angle, elevation, roll, yaw), the fixation point in space, orientation of head/body, and body position data. This data may also consider wearer-specific conditions, such as physical and biological characteristics, that relate to the user’s range of eye-motion, eye muscle irregularities, and other limiting factors and context that may vary over time.
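  • To make the estimation step above concrete, the following Python sketch shows a minimal constant-velocity Kalman filter that could smooth noisy eye-angle samples; the class name, sampling interval, and noise parameters are illustrative assumptions and are not taken from the disclosure.

```python
import numpy as np

class EyeAngleKalman:
    """Minimal 1-D constant-velocity Kalman filter for a single eye angle
    (e.g., yaw). The state is [angle_deg, angular_velocity_deg_s]."""

    def __init__(self, dt, process_var=50.0, meas_var=0.5):
        self.x = np.zeros(2)                         # state estimate
        self.P = np.eye(2) * 10.0                    # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model
        self.H = np.array([[1.0, 0.0]])              # only the angle is measured
        self.Q = np.eye(2) * process_var * dt        # process noise
        self.R = np.array([[meas_var]])              # measurement noise

    def step(self, measured_angle_deg):
        # Predict the next state from the motion model.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Correct the prediction with the new (noisy) angle sample.
        y = measured_angle_deg - (self.H @ self.x)[0]
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K.flatten() * y
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0], self.x[1]                  # filtered angle, velocity
```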
  • FIG. 1C illustrates an exemplary electronic contact lens according to embodiments of the present disclosure.
  • the electronic contact lens 100 allows a user to interact with a virtual environment such that eye movement is translated into a visible virtual scene within a larger virtual environment.
  • the electronic contact lens 100 may be implemented as a contact lens 102, such as a scleral contact lens designed to be fixed on the wearer’s eyeball.
  • Embedded on the contact lens 102 may be femtoprojector 104, sensors 106, and power, communications, and control systems 110.
  • Femtoprojector 104 may be implemented as a miniature video projector that projects images on the part of the wearer’s retina centered on the fovea — the highly sensitive, i.e., high-resolution region of the retina that is referred to when the eye directly gazes or inspects an object.
  • Sensors 106 may comprise any type of motion sensors, such as accelerometers, magnetometers, and gyroscopes, and image sensors (such as a camera) that may be used for eye-tracking functionality.
  • the power, communications, and control systems 110 comprise coils that enable inductive power transfer, or an energy storage device, such as a battery, that can deliver sufficient energy to operate electronic contact lens 100 for a period of time.
  • Various electronic contact lenses may also include transceivers for communication with internal and/or external devices, and various controllers that control circuits and sub-circuits.
  • the user of an eye-controlled electronic contact lens 100 may use any combination of eye movements and other signals to interact with a virtual scene within a virtual environment. This interaction may be supplemented with various auxiliary devices, such as a wearable head-mounted eye-tracking device, a smartphone, a hand-held controller, other body sensors, electronic jewelry, or any other type of device that can communicate with the electronic contact lens.
  • This interaction may also be supported by an auxiliary device (not shown in FIG. 1C) that may be communicatively coupled with electronic contact lens 100 and, in embodiments, provides power via inductive coupling.
  • Exemplary accessory devices, femtoprojectors, and their functions and components are described in greater detail in US Patent Applications Ser. No. 15/959,169, filed on April 21, 2018, entitled “Power Generation Necklaces that Mitigate Energy Absorption in the Human Body,” listing inventors Miller et al.; US Patent Application Ser. No. 15/966,481, filed on April 30, 2018, entitled “Multi-Coil Field Generation In An Electronic Contact Lens System,” listing inventors Owens et al.; US Patent Application Ser. No.
  • the auxiliary device may comprise circuitry to communicate via an electronic communication protocol with contact lens 102 and directly or indirectly (e.g., via the user’s phone) with an external network (e.g., Internet).
  • the auxiliary device may perform various computationally intensive tasks in lieu of electronic contact lens 102, such as computing some or all of the display data for femtoprojectors 104.
  • the accessory device may serve as an intermediate data storage tool that increases the storage capacity of electronic contact lens 100.
  • electronic contact lens 100 and/or the auxiliary device manages how, where, and when a virtual object in a virtual scene is displayed within a given coordinate space.
  • the electronic contact lens and/or auxiliary device may update the content and layout of a virtual scene, including the graphical representation of objects on the display, according to the user's eye movement. As will be explained in detail below, this content update allows the user to scan a virtual scene by effectively updating a projected image correlated to where the user is looking within the scene itself.
  • eye-movements may be tracked, estimated (e.g., using a Kalman filter algorithm) and/or predicted based on motion, image, sensor data or a combination thereof.
  • Data derived from such eye movements may include timing and sequences of saccadic movements, eye direction (e.g., eye angle, elevation, roll, yaw), the fixation point in space, orientation of head/body, and body position data.
  • This data may also take into account wearer-specific conditions, such as physical and biological characteristics, that relate to the user’s range of eye-motion, eye muscle irregularities, and other limiting factors and context that may vary over time.
  • FIG. 2A illustrates an exemplary contact lens comprising motion sensors according to embodiments of the present disclosure.
  • contact lens 102 may be a scleral contact lens.
  • Contact lens 102 comprises magnetometer 201 and accelerometers 202A and 202B that may be embedded within contact lens 102. It is understood that any number and type of sensors may be used to perform the tasks related to the objectives of the present disclosure. Suitable sensors may be used to sense eye movements to determine distance, speed, acceleration, orientation, path, angle, rate, etc.
  • Various types of sensors and their strategic locations on contact lens 102 are described in more detail in US Patent Application Ser. No.
  • magnetometer 201 and accelerometers 202A, 202B may be used as motion sensors to detect and track the orientation of contact lens 102 and, thus, the orientation of the eye of the user.
  • a gyroscope or outward-facing image sensor may be deployed within the contact lens 102 to replace or supplement the sensors described above.
  • Other sensors located on the body or head may also be involved.
  • raw sensor data from sensors 201, 202 may be converted into control signals that may be used to control, activate, deactivate, navigate, or select virtual objects in a virtual scene. This type of interaction between a user and a virtual scene allows for a smooth, intuitive, and effortless manner in which a user can navigate a scene and extract information therefrom.
  • FIG. 2B shows a spherical coordinate system that may serve as a reference frame for components in the electronic contact lens shown in FIG. 1C.
  • The reference for an elevation sensor, such as an accelerometer, may be the direction of gravity, while the reference for a yaw sensor, such as a magnetometer, may be the Earth's magnetic field.
  • a reference frame may be defined in any arbitrary convention, including a polar coordinate system, a cylindrical coordinate system, or any other system known in the art.
  • FIG. 2C and FIG. 2D illustrate various conventions for reference frames for the electronic contact lens shown in FIG. 1.
  • FIG. 2C refers to the coordinate space of the user's eye 204 or head to enable eye-tracking or head-tracking by tracking polar angle θ (i.e., up/down elevation) and azimuthal angle φ (i.e., left/right rotation).
  • FIG. 2D refers to the coordinate space of the user's environment to enable "world-tracking," by tracking angles θ and φ, representing elevation and yaw, respectively.
  • objects in the virtual environment appear locked at locations in the user’s environment, irrespective of how the user moves his/her eyes, head or body.
  • a transition may involve switching from a reference frame to which the user’s eyes or head are fixed to one where it is the user’s body that is fixed.
  • In embodiments, a first frame of reference (e.g., for the user's head) may be derived from a second frame of reference for the user's eyes by taking into account the orientation of the user's eyes and the manner in which the user's head follows the user's eyes.
  • a transition may involve transitioning between various reference frames that are associated with different objects in a virtual scene, e.g., objects that are fixed to different reference frames.
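  • As a simplified illustration of chaining such reference frames, the sketch below expresses a world-locked object's direction first relative to the head and then relative to the eye, and tests whether it falls inside the SoE; the function names and the small-angle treatment of yaw and pitch are assumptions made for clarity rather than the disclosed implementation.

```python
def world_to_eye_angles(obj_yaw, obj_pitch, head_yaw, head_pitch,
                        eye_yaw_in_head, eye_pitch_in_head):
    """Chain two frame changes (world -> head -> eye) to express a
    world-locked object's direction relative to the gaze axis. Angles are
    in degrees; roll is ignored and yaw/pitch are treated as independent,
    which is only a small-angle approximation."""
    yaw_in_head = obj_yaw - head_yaw                 # object relative to the head
    pitch_in_head = obj_pitch - head_pitch
    yaw_in_eye = yaw_in_head - eye_yaw_in_head       # then relative to the eye
    pitch_in_eye = pitch_in_head - eye_pitch_in_head
    return yaw_in_eye, pitch_in_eye

def inside_soe(yaw_in_eye, pitch_in_eye, soe_deg=25.0):
    """A world-locked object can only be rendered while it falls inside the
    projected image, i.e., within the Span of Eccentricity."""
    return (yaw_in_eye ** 2 + pitch_in_eye ** 2) ** 0.5 <= soe_deg / 2.0
```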
  • FIG. 3 illustrates the concept of Span of Eccentricity (hereinafter, “SoE”) according to embodiments of the present disclosure.
  • the term "projected" is used interchangeably with the term "displayed."
  • the term “user” is used interchangeably with the term “wearer.”
  • "Activating" refers to exiting a standby (sleep) mode or switching to a wake mode; triggering; or selecting, enabling, displaying, or otherwise making available a virtual tool, event, or area.
  • "Span of Eccentricity" refers to the angular width of the image 210 centered on the line of gaze, extending into the peripheral vision. As depicted in FIG. 3, the projected image 210 is the visible section of a virtual scene, such as that depicted in FIG. 4B.
  • the image 210 that is projected onto retina 206 by electronic contact lens 100 appears to have an angular width in the outside world equal to that of the SoE 304.
  • the image 210 projected by electronic contact lens 100 is ordinarily fixed (i.e., locked) to and moves together with eyeball 204.
  • the wearer sees projected image 210 displayed on retina 206 irrespective of where wearer of electronic contact lens 100 directs his/her eye 204 (or any other body parts).
  • the wearer of electronic contact lens 100 cannot even look at or fixate eye 204 anywhere other than about the center of SoE 304; specifically, the foveal vision region 308 (the fovea extends from zero to about 1.5° eccentricity, i.e., about 3° across within the SoE).
  • the wearer cannot look at or inspect objects or images appearing outside of foveal vision region 308 at the edges of SoE 304 as those images remain only in the wearer’s peripheral vision region 306.
  • the wearer of electronic contact lens 100 may recognize that a virtual object is present at the edge of projected image 210, without additional capability, the wearer is unable to direct his/her gaze there. Because eye movements alone do not change the content and location of what is projected on the wearer’s retina 206, the attempt to gaze at an object displayed in peripheral vision region 306 is rendered futile.
  • SoE is markedly different from, and not to be confused with, the concept of “field of view” as used in connection with conventional displays, such as computer monitors, TVs, or displays on eyeglasses (i.e., the angular separation between the edges of a display). For instance, if a user has to move his/her eyes by an angle of 50 degrees from one edge of a conventional display to the opposite edge, the field of view is said to be 50 degrees wide.
  • Unlike a canvas that has a fixed width and height defining the user's field of view, here the entire world around the user's head and eyes is the virtual canvas. This is true even if the image displayed on retina 206 is the portion of the canvas that is covered by SoE 304, i.e., what is seen at any moment in time when eye 204 does not move.
  • the extent of the virtual canvas is practically unlimited in that moving SoE 304 (i.e., the visible portion) allows the user to view a virtual scene in all directions (i.e., 360 degrees around the user) with no boundaries and without a “field of view” limitation.
  • In such conventional displays, the visible area is the same as the field of view of the display area. Despite the limited field of view, a user can look around a larger virtual scene in an AR system by turning the head.
  • the projected image 210 is updated to move SoE 304 to the new location within the virtual scene.
  • the updated image is correlated to the movement of the eye 204 and electronic contact lens 100 to render the appropriate segment of the virtual scene to the user. For example, if a movement of eye 204 in one direction occurs, the projected image 210 may be updated in an opposite direction such as to allow the user to scan the virtual scene.
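  • A minimal sketch of this image update, assuming the virtual scene is stored as a flat two-dimensional canvas indexed in angular coordinates; the function name, the degrees-per-pixel scale, and the square crop standing in for the circular SoE are simplifications for illustration only.

```python
import numpy as np

def render_soe_window(canvas, eye_yaw_deg, eye_pitch_deg,
                      deg_per_px=0.05, soe_deg=25.0):
    """Return the part of a 2-D virtual canvas that falls inside the SoE for
    the current eye orientation. The canvas center corresponds to
    (yaw, pitch) = (0, 0); a square crop stands in for the circular SoE."""
    h, w = canvas.shape[:2]
    half = int((soe_deg / 2.0) / deg_per_px)     # SoE half-width in pixels
    # The window tracks the eye across the canvas, so the pixels delivered to
    # the retina-fixed display shift opposite to the eye motion, which is
    # what lets the wearer scan the larger virtual scene.
    cx = int(w / 2 + eye_yaw_deg / deg_per_px)
    cy = int(h / 2 - eye_pitch_deg / deg_per_px)
    x0, x1 = max(0, cx - half), min(w, cx + half)
    y0, y1 = max(0, cy - half), min(h, cy + half)
    return canvas[y0:y1, x0:x1]
```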
  • FIG. 4A illustrates projecting onto the retina the visible portion of a virtual image according to embodiments of the present disclosure.
  • Electronic contact lens 100 comprises femtoprojector 104 that may be embedded within the contact lens.
  • femtoprojector 104 may be implemented as a miniature video projector (hereinafter “femtoprojector”) that comprises an image source (e.g., a light-emitting-diode microdisplay) and an optical system that projects an image generated by the image source directly onto retina 206 to cause the image to appear in the user’s field of vision.
  • a femtoprojector has been proposed by Deering.
  • The femtoprojector is based on a tiny projector mounted inside a contact lens. The projector projects images onto the retina of a person wearing the contact lens. The projector must be sufficiently small (less than 2 mm x 2 mm x 2 mm in volume) to fit inside or on a contact lens that can be worn on a person's eyeball, such that, for convenience, Deering called it a "femtoprojector."
  • a femtoprojector preferably is no larger than about one or two millimeters in any dimension.
  • the femtoprojector's optical system may be implemented using a cylindrical, solid plastic, dual-mirror design. While being constrained to the physical dimensions of a contact lens, the optical system provides appropriate magnification and sufficient image quality.
  • one or more femtoprojectors 104 may be used, for example, one femtoprojector 104 that projects an image directly onto fovea 208, which contains the highest number of retinal receptive fields, i.e., generating the highest resolution images on retina 206.
  • a different, lower resolution femtoprojector 104 may be used to project images mainly onto the “lower-resolution” peripheral region of retina 206 that cannot resolve the higher resolution images.
  • electronic contact lens 100 may be used in VR applications, AR applications, mixed reality applications, and the like.
  • virtual reality applications the image projected by electronic contact lens 100 replaces what the user would normally see in the external environment, whereas in AR and mixed reality applications, the projected images appear superimposed onto the external environment, such that the projected image augments or adds to what the user sees in the real world.
  • FIG. 4B and FIG. 4C illustrate the concept of SoE by using a flashlight analogy.
  • the notion of an SoE making visible just a section of the larger virtual scene is analogous to looking at objects in a dark environment (FIG. 4C) illuminated only by a flashlight 400 (FIG. 4B).
  • only the portion of the 2D or 3D scene that is “illuminated” by SoE 304 or the conical beam 312 of the flashlight is visible at a given moment.
  • This analogy assumes that a defined circular edge exists around the circumference 410 of the projected flashlight that effectively limits the visible region within the circumference of the flashlight relative to a virtual scene.
  • Depicted in FIG. 4C is a virtual scene that comprises visible section 310 and invisible sections of virtual scene 406 defined by what is displayed within the SoE 304 at any moment in time.
  • the image displayed in visible section 310 has a circular shape, similar to the projection produced by flashlight 400.
  • a femtoprojector projects images onto a limited (here, circular) visible section 310 corresponding to, for example, a 25-degrees-wide SoE 304. Therefore, as shown in FIG. 4C, visible section 310, which comprises foveal 308 and peripheral 306 vision regions, corresponds to the base of a 25-degrees-wide cone in the coordinate space of the virtual scene.
  • Objects 406A and partial objects 406B in FIG. 4C that do not fall within visible section 310 are not displayed on the retina and, thus remain invisible to the eye until being recalled from computer memory (or derived from stored information) and included within SoE 304 by the image projector that renders the recalled objects onto the retina, in response to the user turning their eye in the direction of those objects.
  • moving the eye and SoE 304 to look around a virtual image or scene bears resemblance to scanning a surface in the dark by illuminating the surface with a flashlight.
  • the image projector effectively updates the SoE 304 relative to eye movements of a user by loading a corresponding portion of the virtual image and updating what is projected onto the eye.
  • the concept of the SoE does not allow the wearer of an EMD system to inspect or move the eye to directly look at the edge of visible section 310 to view off-center regions 306 of visible section 310 that are projected outside of foveal vision region 308.
  • In response to detecting an attempt to inspect an object or image that is displayed at the edge of visible section 310, a displayed object may be re-rendered, such as to move from the edge, i.e., the user's peripheral vision region 306, to the user's foveal vision region 308 to enable the user to inspect objects anywhere in a virtual scene, including objects originally located outside of foveal vision region 308.
  • embodiments presented herein may equally be used in non-EMD systems, such as AR, VR, MR, and XR displays, in related applications to enable a clutter-free, naturally flowing, and user-friendly navigation.
  • One skilled in the art will recognize the difficulty in allowing a user to interact with virtual tools available within the virtual environment displayed on the user’s retina.
  • the discussion below identifies different embodiments that allow a user to select and activate a virtual tool based on tracked eye movements and/or simple physical interaction with an auxiliary device.
  • FIG. 5A illustrates a virtual tool activation chart comprising an exemplary activation threshold according to embodiments of the present disclosure.
  • Chart 500 represents a common range of motion 502 of a pair of human eyes, not accounting for variations between individuals.
  • activation chart 500 shows the angles from the center point that a person can directly aim the central focus of their eyes without moving the head. Note that chart 500 does not take into account peripheral vision.
  • Ranges of motion 502 for the human eye are greater than 95° horizontally and 75° vertically. Yet, most of the time, the eye operates in the central region of range 502 rather than at the periphery of range 502. Therefore, in embodiments, eye motion towards or directed at the periphery of range 502 may be advantageously used to wake or activate a virtual tool.
  • particular virtual tools are associated with certain points along the activation threshold 503, which allows a user to activate a desired virtual tool by looking beyond an associated point along the activation threshold 503.
  • chart 500 comprises activation threshold 503 that an electronic contact lens may utilize as a trigger to initiate an action.
  • an electronic contact lens may monitor eye motion to determine where in range 502 the eye is directed to determine whether activation threshold 503 has been crossed. If so, the corresponding eye motion may be interpreted as the user’s intent to initiate an action, such as activating the electronic contact lens (e.g., by exiting a sleep mode), activating a virtual tool, or any sequence of actions, such as both activating the electronic contact lens and selecting a tool, e.g., in a single action.
  • Various embodiments determine that the gaze reaches activation threshold 503 or that it approaches the edge of the eye’s range of motion 502, for example, by detecting that the eye is rotated relative to the user’s head or eye socket.
  • eye heading relative to the Earth’s magnetic field may be measured using a magnetometer disposed within the smart contact lens, and pitch may be measured relative to Earth’s gravitation field by using accelerometers.
  • Head position may be measured by a head-tracking apparatus, for example one using an inertial measurement unit (IMU). The IMU may comprise a magnetometer attached to the head to detect the compass heading of the head relative to the Earth's magnetic field and accelerometers that track head pitch relative to the Earth's gravitational field.
  • eye angles may be compared to head angles to determine eye yaw and pitch relative to the head. If, for a given angle from the center point of chart 500 in FIG. 5A, the eye exceeds a threshold angle, this may be considered an activation event.
  • a moving average of eye angles may be determined and used to infer the user’s head position. This determination may take advantage of the fact that users naturally turn their head towards an object that they want to look at after a short delay.
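  • The sketch below combines the two ideas above: a moving average of recent eye angles stands in for the head orientation, and an activation event is reported when the eye's eccentricity relative to that estimate exceeds a threshold; the class name, window length, and threshold value are placeholder assumptions.

```python
from collections import deque

class ActivationDetector:
    """Compare the eye direction against an estimated head direction and
    report an activation event when a per-user threshold angle is exceeded.
    The head estimate is a moving average of recent eye angles, exploiting
    the tendency to turn the head toward a target after a short delay."""

    def __init__(self, threshold_deg=35.0, window=120):
        self.threshold_deg = threshold_deg
        self.history = deque(maxlen=window)        # recent (yaw, pitch) samples

    def update(self, eye_yaw_deg, eye_pitch_deg):
        self.history.append((eye_yaw_deg, eye_pitch_deg))
        head_yaw = sum(s[0] for s in self.history) / len(self.history)
        head_pitch = sum(s[1] for s in self.history) / len(self.history)
        rel_yaw = eye_yaw_deg - head_yaw           # eye relative to inferred head
        rel_pitch = eye_pitch_deg - head_pitch
        eccentricity = (rel_yaw ** 2 + rel_pitch ** 2) ** 0.5
        return eccentricity > self.threshold_deg   # True -> activation event
```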
  • FIG. 5B illustrates a method for using an activation threshold to select a tool according to embodiments of the present disclosure.
  • Depicted are eye range of motion 502 and activation threshold 503, the latter comprising crossing locations 510 and 512 that are associated with to-be-activated but not yet visible tools 520 and 522, respectively. Since users tend to not glance upward as often as they glance to the left, right, or downward, in embodiments, glancing upward past activation threshold 503 may be interpreted as an activation or selection event.
  • the user’s eye movement 504 at a given angle or along a given path that crosses activation threshold 503 at crossing location 510 may serve as an indication of the user’s intent to activate or select one tool 520 over another tool 522.
  • one or more predetermined angles or activation areas may be utilized to initiate one or more actions. It is understood that activation may be completely independent of tool selection. For example, glancing at or past activation threshold 503 may be interpreted as an activation that does not involve a tool selection.
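  • One possible way to map a threshold crossing to a tool, or to a plain activation with no tool selection, is sketched below; the crossing directions, tool names, and sector width are hypothetical and only illustrate the idea of associating tools with points along activation threshold 503.

```python
import math

# Hypothetical crossing locations along the activation threshold, keyed by
# the angular direction of the glance (0 deg = right, 90 deg = straight up).
CROSSING_TOOLS = {90.0: "tool_520",    # upward glance
                  45.0: "tool_522"}    # upward-right glance

def select_tool(rel_yaw_deg, rel_pitch_deg, max_sector_deg=30.0):
    """Map the eye direction (relative to the head) at the moment the
    activation threshold is crossed to the nearest associated tool, or
    return None for a plain activation without a tool selection."""
    glance = math.degrees(math.atan2(rel_pitch_deg, rel_yaw_deg))
    best, best_diff = None, max_sector_deg
    for center, tool in CROSSING_TOOLS.items():
        diff = abs((glance - center + 180.0) % 360.0 - 180.0)  # wrapped difference
        if diff <= best_diff:
            best, best_diff = tool, diff
    return best
```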
  • Certain embodiments may take advantage of a low-power “watchdog mode” feature of existing accelerometer devices that enable exiting a sleep mode upon detecting a relatively large acceleration. It is understood that in an electronic contact lens the acceleration may be independent of activation threshold 503 or crossing locations 510.
  • the electronic contact lens may set one or more accelerometers to detect an acceleration that is caused by a relatively large saccade, and upon detecting the saccade, wake the system.
  • the combination of a relatively large saccade and acceleration may wake a system.
  • such combination may be used as a first pass to determine the presence of a wake signal, for example, in conjunction with other or additional sensors that may detect whether the eye is at or crosses a certain angle, and if not, remain in sleep/standby mode.
  • Saccades, which may have a range of distances, may be directed toward or reach an edge of range of motion 502.
  • the distance of a saccade may be estimated using any eye tracking method described herein. For example, given that a larger saccade is likely to reach the end of range of motion 502, a detected change in angle or the speed of that change may be used to infer a relatively long saccade, which may then be interpreted as an activation, e.g., in a given direction.
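  • A sketch of the two-stage wake test described above, assuming an accelerometer watchdog reading and an eye-angle estimate are available; the numeric thresholds are placeholders rather than values from the disclosure.

```python
def watchdog_wake(accel_magnitude_g, eye_angle_from_center_deg,
                  accel_threshold_g=1.8, angle_threshold_deg=35.0):
    """Two-stage wake test: a large acceleration (as produced by a big
    saccade) serves as a cheap first pass, and the eye angle is then checked
    before the system leaves sleep/standby mode."""
    if accel_magnitude_g < accel_threshold_g:
        return False      # no large saccade detected: stay in standby
    # Second pass: only treat the saccade as a wake signal if the eye ended
    # up at or beyond the activation angle.
    return eye_angle_from_center_deg >= angle_threshold_deg
```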
  • a user’s neck movement when turning the head is typically accompanied by a quick saccade in the direction of the new target. Therefore, in embodiments, to avoid triggering a false activation based on a misinterpretation of the user’s turning their head in their environment as a long saccade, the EMD system may take into account a pause or other gesture that the user may have been instructed to make before turning their head.
  • the user’s intent to activate or select may be derived from the user directing the gaze, e.g., by lifting the chin, to a predetermined direction or location that comprises persistent or expected elements in the virtual field, for example dots at or above a certain threshold of elevation. It is understood that the threshold may not necessarily be within eye range of motion 502.
  • user interface activation elements may become visible in the electronic contact lens.
  • the system may activate and, e.g., bring up a ring of tools in a virtual scene as shown in FIG. 7 and FIG. 8.
  • FIG. 5C illustrates a method for displaying a selected tool according to embodiments of the present disclosure.
  • a user has an eye range of motion 502 in which the user may interact with various tools within a virtual scene.
  • This eye range of motion may be divided by the activation threshold 503 into an inner area 511 and an outer area 512.
  • This partition of the eye range of motion 502 allows a user to select and activate tools.
  • The location(s) to display the selected tools (e.g., tool 524) may be at a predetermined location, such as default location 526.
  • one or more tools that are not activated are shown within the outer area 512 and may be activated by a user when an eye position crosses the activation threshold 503 and looks at a particular tool.
  • the tools are not shown within the outer area 512 but certain tools are associated with portions of the outer area 512 so that when a user eye position crosses the activation threshold 503, the system associates a portion of the outer area 512 with a tool and then activates it.
  • the location may be adaptively chosen depending on virtual or real objects that may already be present in the user's range of motion 502, e.g., such as to prevent certain virtual objects from overlapping with certain real-world objects.
  • tools, leading lines, a ring, or any other structure(s) may be displayed to assist the user in identifying and/or selecting tools that have been activated and tools that have not been activated within the virtual scene. For example, different colors may be implemented within the virtual scene to identify an activated tool versus non-activated tools. Also, the threshold within the virtual scene may have a variety of shapes to differentiate between activated tools and non-activated tools.
  • FIG. 5D illustrates a method for using an auxiliary device to select several tools for display according to embodiments of the present disclosure.
  • Auxiliary device 566 in FIG. 5D may be a smartphone, sensor, or any other electronic device that may be capable of communicating with an EMD system.
  • Auxiliary device 566 may activate the display of the contact lens; its trigger element 568 causes the contact lens and/or a number of tools to be activated and a subset or all of the activated tools to be selected for display in range of motion 502, including the inner area 511 and the outer area 512.
  • all tools 540-548 are activated, but only those tools that have been (pre-)selected 540, 546 are displayed, i.e., made visible 530, 532 in range 502.
  • one or more tools may be initially displayed within the outer area 512 and then one selected by the user, which results in the selected tool then transitioning to the inner area 511.
  • the selected tools may appear within the inner area 511 once the tools are activated by a user interacting with the auxiliary device 566.
  • multiple tools 540 - 548 may be initially shown within the outer area after a user eye position passes the activation threshold 503. As a result, a user may then select one of the tools which will cause the tool to transition to the inner area 511.
  • the subset of tools 560, 562 may be chosen based on context. For example, a tool for communicating bank account information may be selected based on the EMD system detecting that its current environment is a bank.
  • FIG. 5E illustrates a set of exemplary angles for facilitating an activation according to embodiments of the present disclosure.
  • user intent to activate or trigger a contact lens display may be inferred from eye motion and/or distance of the eye movement, e.g., at predetermined angle(s).
  • pitch and yaw angles may be restricted to permit activation with less technically advanced EMD systems.
  • activation directions may be limited to 8, 4, or 2 permissible directions, or even a single direction.
  • FIG. 5F illustrates an exemplary method for calibrating a user’s eye range of motion according to embodiments of the present disclosure.
  • a user’s eye range of motion may be measured, e.g., as part of a calibration procedure that may adaptively adjust thresholds and compensate for users’ eye range of motion based on individual characteristics, such as age and other vision-related characteristics.
  • calibration may comprise, for example, prompting a user to uncover as much as possible of head-locked virtual scene 580 by scanning virtual scene 580 with their eye. Then, the extent of the area the user was able to reveal in virtual scene 580 may determine a custom range of motion at any given angle from a straight-ahead view.
  • a head- locked display may be used, and the user may be asked to expand a “rubber band” as much as possible.
  • a user may adjust, i.e., grow or shrink, the activation threshold or adjust the shape of the activation threshold, i.e., the shape of the periphery beyond which the system will recognize an activation.
  • the user may perform adjustment tasks by using any type of physical or virtual buttons, voice commands, a companion mobile phone app, and the like.
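  • The calibration idea can be sketched as follows: gaze samples collected while the user scans the head-locked scene are binned by direction, and the farthest eccentricity reached in each bin, minus a margin, becomes that direction's activation threshold; the bin count, margin, and function name are illustrative assumptions.

```python
import math

def calibrate_range_of_motion(gaze_samples, n_bins=16, margin_deg=5.0):
    """Derive a per-direction activation threshold from gaze samples recorded
    while the user scans a head-locked calibration scene. Each sample is a
    (yaw_deg, pitch_deg) pair relative to the head."""
    max_ecc = [0.0] * n_bins
    for yaw, pitch in gaze_samples:
        ecc = (yaw ** 2 + pitch ** 2) ** 0.5
        direction = math.degrees(math.atan2(pitch, yaw)) % 360.0
        b = int(direction / (360.0 / n_bins)) % n_bins
        max_ecc[b] = max(max_ecc[b], ecc)
    # Place the threshold just inside the farthest comfortable gaze reached
    # in each direction during calibration (bins never visited stay at 0).
    return [max(0.0, ecc - margin_deg) for ecc in max_ecc]
```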
  • FIG. 5G illustrates an exemplary process for automatically adjusting activation sensitivity according to embodiments of the present disclosure.
  • process 590 begins at step 591 when a user’s “normal” range of motion is monitored while the user behaves normally in a real-world environment with the system being inactive.
  • At step 592, in response to the user's eye motion exceeding the normal range by some threshold, the system may be activated.
  • At step 593, if the user ignores or dismisses system activation within a given time period, the activation at step 592 is considered a false activation, and the threshold may be increased by a certain amount for subsequent invocations.
  • At step 594, if the user accepts system activation, e.g., by engaging with and using the system, the activation at step 592 is deemed successful, and the threshold is maintained.
  • At step 595, if the user's eye remains at a large angle for a relatively long amount of time, this may be interpreted as an attempt to activate the system, such that the system is activated at step 596, and the threshold is decreased for subsequent invocations. It is noted that any number of thresholds may exist for various angles in the user's range of motion. For example, a threshold in the upward direction may be smaller than the threshold in the right or left directions, where users tend to spend more time.
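  • A minimal sketch of the feedback loop of process 590, with placeholder step sizes and limits; per-direction thresholds (e.g., a smaller upward threshold) could be handled by keeping one such object per direction.

```python
class AdaptiveThreshold:
    """Feedback loop over the activation threshold: widen it after a false
    activation, keep it after a successful one, and narrow it when the eye
    dwells at a large angle without managing to activate."""

    def __init__(self, threshold_deg=35.0, step_deg=2.0,
                 min_deg=20.0, max_deg=50.0):
        self.threshold_deg = threshold_deg
        self.step_deg = step_deg
        self.min_deg, self.max_deg = min_deg, max_deg

    def on_false_activation(self):        # activation ignored or dismissed
        self.threshold_deg = min(self.max_deg, self.threshold_deg + self.step_deg)

    def on_successful_activation(self):   # user engaged with the system
        pass                              # threshold is maintained

    def on_dwell_at_large_angle(self):    # likely a missed activation attempt
        self.threshold_deg = max(self.min_deg, self.threshold_deg - self.step_deg)
```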
  • the calibration and compensation methods herein may automatically adapt to different users and automatically adapt to a specific user as that user becomes more familiar with the system.
  • the system may monitor the specific capabilities of a user and adjust an activation threshold or a way in which virtual tools are displayed based on a historical analysis of how the user has interacted successfully and unsuccessfully in activating virtual tools.
  • these methods facilitate ease of activation while, at the same time, reducing the number of false positives.
  • FIG. 6A-FIG. 6C illustrate exemplary methods for measuring eye position in an eye socket using capacitive skin sensors in a contact lens according to embodiments of the present disclosure.
  • position of eye 604 within its eye socket may be measured using capacitive skin sensors (e.g., 606).
  • Smart contact lens 602 may comprise several capacitive sensors that may be built-in and used to detect the degree of skin (here, eye lid) that covers a number of sensors (e.g., 610).
  • a capacitive reading will be greater for parts of contact lens 602 that are obscured by skin, and the capacitive reading will be lower for those parts that, at a given angle, are covered less by skin.
  • In FIG. 6A, top sensor 610 and bottom sensor 606 are both covered by skin, whereas the left sensor and right sensor 616 are not.
  • In FIG. 6B, once the user looks upwards, the bottom sensor 606 is no longer covered by skin.
  • In FIG. 6C, when the user looks to the right, in addition to both top and bottom sensors 610, 606 remaining covered, right sensor 616 is also covered by skin.
  • capacitive readings may serve as a measure of rotation, i.e., the relative angle, of eye 604.
  • a suitable number of capacitive sensors may be selected to achieve a desired accuracy.
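  • For illustration, differential readings of four such capacitive sensors could be turned into a coarse gaze estimate as sketched below; the sensor naming, sign conventions, and linear scale factor are assumptions rather than the disclosed method.

```python
def estimate_gaze_from_capacitance(top, bottom, left, right, scale=1.0):
    """Coarse gaze estimate from four capacitive skin sensors on the lens.
    Higher readings mean more of that part of the lens is covered by the
    eyelid, so differential readings indicate the eye's rotation in its
    socket. `scale` converts reading units to degrees."""
    # Looking up uncovers the bottom sensor, so its reading drops and the
    # top-minus-bottom difference grows; looking right covers the right
    # sensor, so the right-minus-left difference grows.
    pitch_deg = scale * (top - bottom)
    yaw_deg = scale * (right - left)
    return yaw_deg, pitch_deg
```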
  • tools arranged along visible or partially visible paths may be activated in various ways. For example, as shown in FIG. 7, in response to detecting that a user looks upward towards partially visible ring 702 in virtual scene 700, tool 704 may be activated and reveal a previously not visible item, here, a car icon located at the perimeter. In addition, driving time tool 706 is displayed at a predetermined location, e.g., at another angle.
  • As shown in FIG. 8, tool 806 may be a virtual object that exists on visible section 802 of ring 804, i.e., located within the user's SoE. Ring 804 may provide a visible guide 808 to other tools (not shown in FIG. 8).
  • This visual framework will allow a user to identify and select a series of virtual tools that are related by visually following guide feature 808 that identifies a second virtual tool related to the first tool 806. Certain embodiments of the activation of related virtual tools are described in more detail below.
  • FIG. 9 depicts a two-dimensional arrangement of virtual objects that comprises multi-level hierarchical navigation tools.
  • Two hierarchy levels are represented by tool 906 (labeled home) and sub-tools 908 (labeled music, thermostat, security, and solar), which are displayed as words arranged along ring 804 to lead the user’s attention from one sub-tool 909 to another.
  • connector 904 between virtual objects guides the user’s gaze in the coordinate space of virtual scene 900.
  • the content of visible section 802 is controlled, in concert with the user’s eye motion, to smoothly transition and display different sections of virtual scene 900. This way, the user has the experience of “looking around” in virtual scene 900.
  • tool 906 may be used as a selectable navigation tool that, once invoked by one or more of the methods previously mentioned, reveals sub-tool 909, which itself may be selectable.
  • Sub-tool 909 may reveal other levels of hierarchy (not shown), thereby, facilitating the navigation of a multi-level hierarchy, advantageously, without the need for employing external or auxiliary selection devices.
  • this embodiment visually separates two levels of hierarchy. However, this is not intended as a limitation on the scope of the present disclosure.
  • the user’s gaze may be directed in any other way to select any hierarchy of tools.
  • a tool, e.g., displayed in the form of an icon, may be activated and highlighted, for example, by visibly changing the appearance of the tool to distinguish it from other virtual or real-world objects, e.g., by animating it or by altering the characteristics or the appearance (color, shape, size, depth, etc.) of the selected tool and/or any item associated therewith. This may indicate that the tool is ready to be activated or ready to invoke another tool.
  • the tool may, upon being selected, immediately invoke or activate another tool. For example, once the eye reaches a tool, the tool may be activated and projected at or near the center of the user’s range of motion that may or may not be the direction the user’s gaze is directed towards.
  • FIG. 10 A - FIG. 10D illustrate an exemplary method for highlighting tools in one or more steps according to embodiments of the present disclosure. As depicted in FIG. 10A, a user may move the eye from nominal position 1002 within the eye’s range of motion 502 toward a designated area at periphery 1004 of range of motion 502 to wake and/or instantly activate the system with or without receiving visual feedback of successful activation.
  • Such activation may cause tool 1006 to be immediately available within a virtual scene when, or even before the user’s gaze arrives at the location of tool 1006.
  • tool 1006 may be made visible within visible area 1008 of the virtual scene.
  • the to-be-activated tool 1006 may already be activated and available by the time the user’s gaze returns to starting point 1002, allowing for rapid tool activation.
  • a user’s tendency to direct eyes toward the edges (e.g., 1004) of the eye’s range of motion 502 when turning the head to look around is a potential source for triggering false activations. Therefore, to reduce the number of false positives, an activation may be suppressed by detecting, e.g., via head-mounted IMUs or by inferring it from a recent history of eye locations or movements, that the user’s head has moved just before or just after an eye motion event.
  • the ability to activate a system by, e.g., a glance to the side may be preconditioned on the user’s head motion not exceeding some threshold, such as a speed, distance, angle, and the like.
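  • As one possible illustration only (the IMU interface, sampling rate, and speed limit below are assumptions), such suppression of activations during head motion could be gated as follows:

```python
# Hypothetical suppression of eye-based activation when the head is also moving,
# e.g., because the user is simply looking around rather than invoking the system.

from collections import deque

class ActivationGate:
    def __init__(self, head_speed_limit_dps=20.0, window_s=0.5, sample_rate_hz=100):
        self.head_speed_limit = head_speed_limit_dps        # deg/s, assumed limit
        self.history = deque(maxlen=int(window_s * sample_rate_hz))

    def add_head_sample(self, angular_speed_dps: float):
        """Feed head angular-speed samples from a head-mounted IMU (or inferred
        from the recent history of eye locations or movements)."""
        self.history.append(angular_speed_dps)

    def allow_activation(self) -> bool:
        """Permit an eye-motion activation only if the head was essentially still
        just before the eye motion event."""
        if not self.history:
            return True
        return max(self.history) < self.head_speed_limit

gate = ActivationGate()
for speed in (2.0, 3.5, 1.0):        # head nearly still
    gate.add_head_sample(speed)
print(gate.allow_activation())       # True: the glance to the side may activate

gate.add_head_sample(45.0)           # the head turned just before the eye motion
print(gate.allow_activation())       # False: activation is suppressed
```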
  • FIG. 11 illustrates exemplary methods for interpreting a user’s eye motion as an activation or tentative activation of the system according to embodiments of the present disclosure.
  • eye gesture related data may be evaluated to determine whether an eye motion or a sequence of eye motions was made intentionally.
  • the glance upward may be interpreted as a tentative activation of the system.
  • the subsequent glance that may involve a relatively large saccade may be interpreted as an intent to initiate an activation.
  • the direction of the second saccade may be used as an indication of which tool the user wants to select.
  • upward saccade 1102 followed by left-hand side saccade 1104 may invoke tool 1; a relatively small upward saccade 1110 followed by another upward saccade 1112 may invoke tool 2; an upward saccade 1120 followed by a right-hand side saccade 1122 may invoke tool 3, and so on.
  • an upward saccade 1102 or 1120 followed by a “normal” pattern, e.g., glancing around with no discernible pattern that matches a set of predetermined patterns or directions, may be discarded and/or interpreted as the user’s intent to not (yet) activate the system or select a tool.
  • Eye gestures that may be interpreted as an intent to activate the system comprise the user glancing to an extreme direction and pausing momentarily, or the user making a long saccade in one direction followed by a long saccade in the opposite direction to the starting point, e.g., up-down, down-up, left-right, or right-left.
  • any gesture such as those exemplified in FIG. 11
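  • Purely as an illustrative sketch, a detected sequence of saccade directions could be matched against predetermined patterns such as those of FIG. 11; the pattern table and tool names below are assumptions:

```python
# Hypothetical matching of a short sequence of saccade directions against predetermined
# activation patterns (cf. FIG. 11): an upward saccade followed by a second saccade
# whose direction selects the tool.

from typing import Optional, Sequence

PATTERNS = {
    ("up", "left"):  "tool 1",
    ("up", "up"):    "tool 2",
    ("up", "right"): "tool 3",
}

def interpret(saccades: Sequence[str]) -> Optional[str]:
    """Return the selected tool, or None if the sequence looks like normal glancing
    (no match), in which case the system is not (yet) activated."""
    if len(saccades) < 2:
        return None                        # first upward saccade alone = tentative only
    return PATTERNS.get(tuple(saccades[:2]))

print(interpret(["up", "right"]))          # 'tool 3'
print(interpret(["up", "down", "left"]))   # None -> discarded as normal glancing
```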
  • some of the disclosed approaches herein are compatible with systems, such as existing AR/VR technologies, that do not utilize head tracking, eye tracking, or tracking of the eye within the eye socket.
  • FIG. 12 illustrates an eye-based activation and tool selection system according to embodiments of the present disclosure.
  • eye-based activation and tool selection system 1200 comprises processor(s) 1220 that is communicatively coupled to and coordinates functions of individual modules of system 1200.
  • the modules may comprise power and communication controller 1202, activation threshold detector 1204, motion detector 1206, coordinate space display manager 1208, tool selector 1210, and virtual object generator 1212.
  • system 1200 may be coupled to auxiliary device 1214. It is understood that any part of activation and tool selection system 1200 may be implemented on a contact lens and/or an accessory device (not shown) that communicate with each other according to embodiments presented herein.
  • power and communication controller 1202 may aid in distribution, harvesting, monitoring, and control of power to facilitate operation of activation and tool selection system 1200, including internal and external communication of data and control commands between components and sub-components.
  • coordinate space display manager 1208 may define a virtual space according to a coordinate system as shown in FIG. 2B to map virtual objects onto the virtual space. Coordinate space display manager 1208 may control content and spatial relationships of virtual objects within the coordinate system that is fixed in one or more degrees of freedom with respect to at least one real-world object, such as a user’s headgear, or with respect to gravity and the Earth’s magnetic field.
  • coordinate space display manager 1208 may be communicatively coupled to a display controller that may determine what images the display optics renders on the user’s retina.
  • Activation threshold detector 1204 controls the generation, appearance, and location of an activation threshold relative to the user’s eye range of motion.
  • Tool selector 1210 may reveal or conceal the presence of virtual objects in response to data input from motion detector 1206 that may comprise motion and other sensors. Data gathered from motion detector 1206 is used to track and interpret a user’s eye-movements in a manner such as to distinguish between eye and/or head movements that are aimed at initiating an action involving an activation versus an action involving a selection of one or more virtual objects, such as navigation tools that may be used to select the type(s) of information to be displayed based on the user’s eye movements.
  • FIG. 13 illustrates a process for using an eye-based activation and tool selection system according to embodiments of the present disclosure.
  • Process 1300 may begin, at step 1302, when at least one of a position, an orientation, or a motion of an eye is tracked in one or more degrees of freedom (e.g., relative to a reference frame) to generate tracking data.
  • Eye-tracking may be performed according to any of the methods used herein.
  • the generated tracking data may comprise information that is indicative of an intent of a user.
  • an eye motion may comprise any number of eye gestures indicative of the user’s intent to perform an action, such as activating a tool, selecting a tool, or any combinations thereof.
  • a tool may be activated and/or selected in accordance with the user’s intent.
  • a location may be chosen to display the tool, e.g., in a visible section of a virtual scene.
  • the tool may be so displayed.
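  • A minimal, hypothetical outline of Process 1300 follows; the function names, threshold value, and data structures are placeholders rather than the patent’s API:

```python
# Illustrative outline of Process 1300: track the eye, infer intent from the tracking
# data, activate/select a tool accordingly, choose a display location in the visible
# section of the virtual scene, and display the tool there.

from typing import Optional

def infer_intent(tracking_data: dict) -> Optional[str]:
    """Interpret tracked eye position/orientation/motion as an intent; here an upward
    glance past an assumed threshold angle is read as 'activate'."""
    if tracking_data.get("angle_up_deg", 0.0) > 20.0:
        return "activate"
    return None

def process_1300(tracking_data: dict) -> Optional[dict]:
    intent = infer_intent(tracking_data)          # steps 1302/1304: track and interpret
    if intent is None:
        return None
    tool = {"name": "home"}                       # tool activated/selected per the intent
    location = {"x": 0.0, "y": 0.5}               # location chosen in the visible section
    return {"tool": tool, "display_at": location} # tool displayed at the chosen location

print(process_1300({"angle_up_deg": 25.0}))
```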
  • FIG. 14 illustrates another process for using an eye-based activation and tool selection system according to embodiments of the present disclosure.
  • Process 1400 may begin, at step 1402, when in response to a user activating an auxiliary device associated with an electronic contact lens, e.g., a smart watch, a set of virtual tools is activated, for example, based on context, such as the user’s real-world environment.
  • at step 1404 at least one of a position, an orientation, or a motion of an eye is tracked, e.g., in one or more degrees of freedom relative to a reference frame such as the user’s eye socket, to generate tracking data indicative of a tool selection by a user.
  • a location to display the tool(s) may be selected, e.g., in a visible section of a virtual scene.
  • the tool may then be displayed in that virtual scene.
  • FIG. 15 illustrates revealing nearby virtual objects using a trigger in the visible section of a virtual scene according to embodiments of the present disclosure.
  • trigger 1502 represents any virtual object, such as an element, content (e.g., static or dynamic text, an alphanumeric character, an image, an icon, or any arbitrary symbol), or a tool, e.g., a tool to navigate various levels of a hierarchical structure, or a region in the vicinity of such virtual object.
  • a sub-element 1506 revealed by trigger element 1502 may itself be or become a trigger element for another sub-element (not shown in FIG. 15).
  • visible section 310 comprises trigger 1502 that may be invoked by being looked at (i.e., eye-selected either directly or indirectly, e.g., by looking or glancing at a location at or near trigger 1502), or by being identified as a target location of an eye movement, or simply by being highlighted in response to falling within visible section 310.
  • trigger 1502 may be invoked by saccadic eye motions in the direction of trigger 1502, before the eye’s gaze has reached the trigger, and which are destined to land on or near the trigger or in its direction, as may be determined by using mathematical saccade prediction.
  • Trigger 1502 may further be invoked by a user performing a sequence of eye movements and pauses, also known as an eye gesture.
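  • One simple way to realize the saccade prediction mentioned above, shown here only as an assumption-laden sketch, is to estimate the saccade amplitude from its early peak velocity (a rough linear “main sequence” approximation) and extrapolate along its direction to test whether it will land on or near trigger 1502:

```python
# Hypothetical saccade landing-point prediction. The main-sequence slope, coordinate
# convention (degrees of visual angle), and hit radius are illustrative assumptions.

import math

MAIN_SEQUENCE_SLOPE = 35.0   # deg/s of peak velocity per degree of amplitude (approx.)

def predict_landing(start_xy, direction_deg, peak_velocity_dps):
    """Extrapolate the saccade landing point from its direction and peak velocity."""
    amplitude_deg = peak_velocity_dps / MAIN_SEQUENCE_SLOPE
    dx = amplitude_deg * math.cos(math.radians(direction_deg))
    dy = amplitude_deg * math.sin(math.radians(direction_deg))
    return (start_xy[0] + dx, start_xy[1] + dy)

def will_hit_trigger(start_xy, direction_deg, peak_velocity_dps, trigger_xy, radius_deg=2.0):
    """Return True if the predicted landing point falls on or near the trigger."""
    lx, ly = predict_landing(start_xy, direction_deg, peak_velocity_dps)
    return math.hypot(lx - trigger_xy[0], ly - trigger_xy[1]) <= radius_deg

# A rightward saccade with ~350 deg/s peak velocity is predicted to travel ~10 degrees;
# if the trigger sits ~10 degrees to the right, it can be invoked before the gaze arrives.
print(will_hit_trigger((0.0, 0.0), 0.0, 350.0, (10.0, 0.0)))  # True
```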
  • trigger 1502 may initiate a number of actions that result in the trigger 1502, for example, (1) becoming selectable; (2) being selected; (3) revealing a virtual object, such as sub-element 1506 (here, a partially visible object 1506A that appears in the peripheral vision region of SoE 304 with part 1506B of the sub-element not projected onto retina), or the presence thereof (here, the visible part of sub-element 1506 provides a clue or hint as to its presence, such that it can be selected/activated and, e.g., moved to the foveal vision region); (4) partially or fully displaying virtual objects in the visible area; (5) adding objects to the virtual environment outside of the visible area; and/or (6) selecting one or more virtual objects. It is noted that invoking trigger 1502 may have other and additional effects, such as removing elements from the virtual scene, updating or replacing elements, invoking any type of action, or any combinations thereof.
  • Selecting a particular object may enable any number of possible subsequent selections and determine the type and manner of such selection, e.g., according to a hierarchy of selectable objects or indicators.
  • invoking a selection may be accompanied by a feedback mechanism that may comprise any combination of temporary visual, auditory, haptic, or other type of feedback.
  • the feedback mechanism may comprise altering the characteristics or the appearance (color, shape, size, depth, etc.) of the selected item and/or any item associated therewith.
  • the selection of a particular indicator or virtual object may further animate the object by highlighting it, which may comprise visibly changing the appearance of the object in a manner such as to distinguish it from other virtual or real-world objects.
  • a selection may also result in moving a virtual object to or near the center or the edges of a visible location or pathway.
  • selecting may comprise changing the size of a selectable object or alternating between appearances.
  • virtual objects may be placed or re-arranged at locations close to each other, e.g., in response to a selection, to support rapid navigation, reduce eye travel time, and limit long-distance eye movements, thereby preventing premature fatigue and increasing eye-tracking accuracy.
  • embodiments presented herein may equally be used in non-EMD systems, such as AR, VR, MR, and XR displays, in related applications to enable a clutter-free, naturally flowing, and user-friendly navigation.
  • FIG. 16A illustrates a virtual object that utilizes a connector according to embodiments of the present disclosure.
  • a trigger 1502 comprises connector 1602 that is visibly connected to another virtual object such as sub-element 1606A.
  • connector 1602 may serve as a guide, lead, or clue that implies, signals, or reveals the location and/or presence of sub-element 1606A.
  • Connector 1602 may be visible in conjunction with trigger 1502, or it may become visible once trigger 1502 has been invoked, such that connector 1602 is included in the virtual scene in conjunction with sub-element 1606A.
  • sub-element 1606A may be a selectable object that is partially or fully located outside of SoE 304 and may be only partially visible, or not visible to the observer at the moment of invocation. It is understood that the connector 1602 itself may also be invocable to initiate an action or a series of actions, such as those mentioned with reference to FIG. 15. It is further understood that any number of virtual objects may be arranged to spatially overlap with each other.
  • FIG. 16B illustrates a virtual object that, without utilizing a visible connector, reveals the presence of an otherwise not visible virtual object according to embodiments of the present disclosure.
  • a trigger or element 1610 in FIG. 16B may be a section indicator such as a word that can be selected by looking at it, and that comprises no connector, proxy, or pointer.
  • the presence of trigger/element 1610 itself may serve as a clue that reveals sub-element 1606B that due to its location outside of SoE 304 is not rendered on the retina and is not visible to the eye in FIG. 16B.
  • invoking trigger/element 1610 may be used to add or remove sub-elements (e.g., 1606B) in the virtual scene that are outside SoE 304 and, thus, not immediately visible until the observer gazes in that direction to render the sub-element 1606B into SoE 304.
  • FIG. 16C illustrates a proxy or pointer with a connector according to embodiments of the present disclosure.
  • proxy/pointer 1604 comprises a dot that is displayed as a filled circle. It is noted, however, that any mark or symbol, or location near such symbol, may be employed as a proxy/pointer.
  • proxy/pointer 1604 draws the user’s attention to the presence of sub-element 1606C and provides the user’s eye a place to saccade to, for example, by being positioned such as to indicate the direction of sub-element 1606C located outside of the visible area.
  • proxy/pointer 1604 itself may serve as a trigger element.
  • FIG. 16D illustrates a proxy or pointer without a connector according to embodiments of the present disclosure.
  • a wearer who is reasonably familiar with a spatial relationship between two or more virtual objects or who anticipates such spatial relationship(s) may imply from the existence of element 1620 the presence and/or location of a nearby sub-element 1606D despite its absence from the visible region, for example, from the presence of proxy/pointer 1608 that need not be connected with element 1620.
  • proxy/pointer 1608 in FIG. 16D may draw a user’s attention to the presence of sub-element 1606D located outside of the visible area.
  • proxy/pointer 1608 may be placed near the edge of the visible area, in the direction of sub-element 1606D, to indicate where sub-element 1606D is located.
  • in response to detecting a user’s attempt to look at or toward proxy/pointer 1608 or sub-element 1606D, proxy/pointer 1608 may “move” or point in the direction of sub-element 1606D, i.e., closer to the edge of the visible area, for example, until sub-element 1606D is revealed or partially revealed.
  • proxy/pointer 1608 may jump/move again to the edge, i.e., closer to sub-element 1606D.
  • FIG. 16E illustrates items that serve as hints for the presence of non- visible objects according to embodiments of the present disclosure.
  • guide, lead, or hint 1650 may serve to guide the wearer’s gaze to sub-element 1652 without showing or revealing the trigger itself.
  • Guide 1650 may be, for example, an intermediate element that the gaze passes when being directed from element 1620 to sub-element 1652.
  • virtual objects or indicators such as triggers, elements, and sub-elements, may be arranged within a virtual scene in any desired pattern.
  • virtual objects may be arranged along visible, partly visible, and non-visible paths such as geometric shapes that are easy and intuitive to navigate by eye.
  • virtual objects may be arranged in patterns that make it easier to detect and/or interpret detected eye- motion to distinguish certain eye movements and gaze directions.
  • FIG. 17 illustrates an exemplary arrangement of virtual objects in a virtual scene according to embodiments of the present disclosure.
  • virtual objects (e.g., 1704) and the content of virtual scene 1700 may be mapped flat onto a virtual plane, curved to the inside of a cylinder or sphere, or arranged in any other format in two or three dimensions.
  • path / leading line 1702 may be mapped onto a suitable coordinate system and referenced to one or more frames of reference, such as the wearer’s body, surroundings, etc., as previously discussed with reference to FIG. 2B - FIG. 2D.
  • object 1706 appears in visible section 310 of the user’s field of vision regardless of where the user’s head is turned. In embodiments, this allows the user to scan scene 1700 by moving his/her eye 204 within the user’s range of eye motion. Because scene 1700 is locked to and moves with the user’s head, it is available wherever the user is facing.
  • FIG. 18A and FIG. 18B illustrate a method for using a user’s gaze to reveal objects in an exemplary virtual scene according to embodiments of the present disclosure.
  • Depicted is a two-dimensional arrangement of virtual objects in virtual scene 1800 that comprises multi-level hierarchical navigation tools.
  • two hierarchy levels are represented by element 1806 (labeled home) and sub-elements 1808 (labeled music, thermostat, security and solar) that are displayed as words arranged along leading line 1702 to lead the user’s attention from one sub-element 1808 to another.
  • The arrangement of element 1806 and sub-elements 1808 in FIG. 18A and FIG. 18B is chosen such that a set of words representing sub-elements 1808 of element 1806 (home) is separated by connector 1804. Sub-elements 1808 appear on one side of connector 1804 and opposite to element 1806.
  • connector 1804 between virtual objects guides the user’s gaze in the coordinate space of virtual scene 1800.
  • the content of visible section 310 is controlled, in concert with the user’s eye motion, to smoothly transition and display different sections of virtual scene 1800. This way, the user has the experience of “looking around” in virtual scene 1800.
  • element 1806 may be used as a navigation tool that, once invoked by one or more of the methods previously mentioned, reveals sub-element 1808.
  • Sub-element 1808 may reveal other levels of hierarchy (not shown), thereby, facilitating the navigation of a multi-level hierarchy, advantageously, without the need for employing external or auxiliary selection devices.
  • a user may reveal pointer 1802 connected to sub-element 1808 via a connector 1803, such that by glancing at pointer 1802, the user can activate sub-element 1808 and cause visible section 310 to move along leading line 1702, until sub-element 1808 is within visible section 310.
  • this embodiment visually separates two levels of hierarchy. However, this is not intended as a limitation on the scope of the present disclosure.
  • the user’s gaze may be directed in any other way, which may or may not include a logical or spatial grouping of elements and sub-elements.
  • FIG. 19 illustrates a method for revealing virtual objects in a virtual space according to embodiments of the present disclosure.
  • once trigger 1904 is invoked in any of the aforementioned ways, tool 1908 may appear in a common tool or reveal area 1902 in virtual scene 1900.
  • FIG. 20A and FIG. 20B illustrate a method for visually navigating virtual objects according to embodiments of the present disclosure.
  • FIG. 20A shows element 2002 and sub-elements (e.g., 2004), which are observable within visible section 310, and sub elements (e.g., 2008), which are outside of visible section 310.
  • virtual objects, such as element 2002 and sub-element 2004, may appear and slide in and out of visible section 310 or virtual scene 2000, 2030 in response to a gaze direction being determined.
  • the virtual objects in scene 2000, 2030 may signal their presence and availability via their connectors and by floating, moving, and/or changing their appearance.
  • the movement of virtual objects may aid in visual navigation by guiding the eye to an object and/or revealing one or more underlying or additional objects in virtual scene 2000.
  • element 2002 in visible section 310 may reveal the presence of sub-element 2004 to which element 2002 is coupled via a connector.
  • the connector between sub-elements 2004 and 2008 may reveal the presence of sub-element 2008, which, according to FIG. 20A, is invisible to the eye due to its location outside of visible section 310, i.e., outside of the user’s SoE.
  • Invoking an object may cause that object to move towards the center of visible section 310 or any location in the virtual scene where content may be viewed more comfortably, as predicted or calculated by a controller.
  • objects’ movements may facilitate a smooth flow and create an intuitive transition without requiring long-distance saccades by the eye and without requiring the head to turn to ever-increasing angles to reach them.
  • having the interaction happen in a defined area in virtual scene 2000, 2030 also permits comfortable navigation of deeper hierarchies.
  • FIG. 21 illustrates another method for visually navigating virtual information according to embodiments of the present disclosure.
  • FIG. 21 shows elements 2182 and sub-elements 2184 that are arranged in a tabular format as entries in table 2070 and sub-table 2180.
  • Visible section 310 displays a portion of elements 2182 in table 2070.
  • elements 2182 once invoked, e.g., by being looked at, reveal the presence of a next level of hierarchy that is not visible or only partially visible within visible section 310. Once the next level of hierarchy is invoked, it may be displayed in the form of sub-tables (e.g., 2180) that comprise sub-elements 2184.
  • sub-table 2180 and its sub-elements 2184 may be activated and become (partially) visible in visible section 310. If a different trigger element in table 2070 is subsequently invoked, such as “Children Songs,” sub-table 2180 may be removed, updated, or otherwise replaced with a different sub-table or content comprising sub-elements associated with that different trigger element. It is understood that the transition to a different content or (sub-)table may involve any number of ways of animating this change.
  • virtual scene 2100 may comprise any navigation tool known in the art that is suitable for selection by eye, such as a grid, a tree, a matrix, a checklist, etc., that may be positioned anywhere within virtual scene 2100.
  • a virtual object may visually indicate the completion of a selection process, a hierarchy, etc.
  • the user may be presented with an entire list of categories that is not limited to the SoE, i.e., visible section 310, such that in response to detecting that the user looks at element 2182, sub-table 2180 may be automatically activated/displayed in virtual scene 2100.
  • element 2182 in response to detecting that the user looks at element 2182, element 2182 may be highlighted and a connector or a proxy/pointer with a connector, such as those discussed, e.g., with reference to FIG. 16C and FIG. 17, may be automatically displayed and serve as a trigger for or leading line to sub-table 2180.
  • Invoking sub-table 2180 may cause sub-table 2180 to move towards a location in virtual scene 2100 where it may be more comfortably viewed, again, facilitating a smooth flow and creating an intuitive transition that significantly reduces eye/head motion when compared with existing AR, VR, and other systems.
  • a lower level hierarchy may remain at least partially invisible until the user unlocks that hierarchy level by looking at a certain trigger element associated with a higher level hierarchy. It is understood that any features of existing systems, such as inertial measurement units built-in to an existing system, may be utilized to accomplish the goal of the present disclosure.
  • a virtual object such as element 2182, may serve as a trigger that may reveal objects by populating sub-table 2180.
  • a sub-element 2184 in sub-table 2180 may, in turn, expand to reveal additional virtual objects, such as a sub-element that comprises textual content (not shown) related to sub-element 2184.
  • table 2070 and sub-table 2180 may move in a manner such as to cause subsequent levels of hierarchy to remain in a defined area, e.g., to reduce eye or neck motion.
  • this may be accomplished by detecting how much the user has twisted their neck. For example, an angle, such as a user’s head angle, or a distance may be detected between a starting location where an interaction has commenced and a target location where certain content is located (or headed to) and, if the angle or distance meets a threshold, an object may start to shift back towards the starting location. In embodiments, such movement back to a starting location, or anywhere else in visible section 310, may be made dependent on the angle or distance, such that the greater the angle or distance, the faster the movement may be made.
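  • A minimal sketch of such angle-dependent re-centering, with the threshold and gain chosen purely for illustration, might look as follows:

```python
# Hypothetical sketch: once the user's head angle (or eye travel distance) from the
# starting location of the interaction exceeds a threshold, the displayed object drifts
# back toward that starting location, and the drift speed grows with the angle.

def recenter_speed(head_angle_deg: float,
                   threshold_deg: float = 15.0,
                   gain_dps_per_deg: float = 2.0) -> float:
    """Return the speed (deg/s) at which the displayed object should drift back
    toward the starting location; 0 while the angle stays below the threshold."""
    excess = head_angle_deg - threshold_deg
    return max(0.0, excess * gain_dps_per_deg)

print(recenter_speed(10.0))   # 0.0  -> content stays put
print(recenter_speed(25.0))   # 20.0 -> the farther the neck is twisted, the faster it returns
```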
  • virtual objects in virtual scene 2100 may be displayed in a manner such as to appear stationary in space, such as to enable a “wide display” that permits the user to look at virtual objects in virtual scene 2100 by turning their head and/or body. This affords the user a sense of space and a large workspace and is useful if the user is not concerned with keeping head motions discreet.
  • FIG. 22 illustrates a virtual glance revealer for navigating virtual objects according to embodiments of the present disclosure.
  • virtual glance revealer 2200 comprises processor(s) 2220 that are communicatively coupled to and coordinate functions of individual modules of virtual glance revealer 2200. These modules may include power and communication controller 2202, coordinate space display manager 2204, virtual object generator 2206, sub-element prioritization and selector 2208, trigger element / sub-element manager 2210, and motion detector 2212. It is understood that any part of virtual glance revealer 2200 may be implemented on a contact lens and/or an accessory device that communicates with an EMD system according to embodiments presented herein.
  • power and communication controller 2202 may aid in distribution, harvesting, monitoring, and control of power to facilitate operation of virtual glance revealer 2200, including internal and external communication of data and control commands between components and sub-components of a virtual glance revealer system.
  • coordinate space display manager 2204 may define a virtual space according to a coordinate system as shown in FIG. 2B to map virtual objects onto the virtual space. Coordinate space display manager 2204 may control content and spatial relationships of virtual objects within the coordinate system that is fixed in one or more degrees of freedom with respect to at least one real-world object, such as a user’s headgear, or with respect to gravity and the Earth’s magnetic field.
  • coordinate space display manager 2204 may be communicatively coupled to a display controller that may determine what images the display optics renders on the user’s retina.
  • Virtual object generator 2206 controls the generation, appearance, and location of virtual objects within sections of the virtual space that are visible to the user’s eye. Location and appearance information for one or more virtual objects, such as elements, sub-elements, and triggers, may be provided based on a decision by sub-element prioritization and selector module 2208 that determines which virtual objects to reveal. These revealed virtual objects may be selected in response to data input from motion detector 2212 that may be used to distinguish between a user action, such as an eye-movement, a selection of an element by eye, or a head motion.
  • Sub-element prioritization and selector module 2208 defines the appearance of one or more navigation tools by selecting the type(s) of information to be displayed based on the user action.
  • the selection of elements may be facilitated by trigger element / sub-element manager by revealing or concealing the presence of virtual objects according to embodiments presented herein.
  • Any number of components of virtual glance revealer 2200 may utilize data input from motion detector 2212 that comprises motion and other sensors according to embodiments presented herein.
  • FIG. 23 illustrates a process for using a virtual glance revealer system to navigate virtual tools according to embodiments of the present disclosure.
  • Process 2300 may begin at step 2302 when a virtual scene comprising a visible section, which may appear to be stationary with respect to a reference frame, is generated.
  • a display projects the visible section, which may be defined by an SoE, onto a user’s retina.
  • a first virtual object such as a trigger or a sub-element that is associated with a second virtual object, is displayed within the visible section.
  • a motion with respect to a reference frame is detected by measuring, inferring, or anticipating a motion that comprises an eye or head motion or any combination thereof. This motion is representative of the user invoking the trigger or is interpreted as a trigger command as opposed to an unintended or inadvertent motion.
  • the motion is detected by detecting a target location based on a start location of a saccade of an eye motion. Before the user directs his gaze to the target location, an action is initiated that indicates the presence of the second virtual object that aids the user to navigate the virtual space.
  • the second virtual object may be invoked in response to the action, i.e., prior to being in the visible section.
  • the action may comprise an action that is used to reduce the likelihood of a false positive detection.
  • navigating virtual tools may comprise transitioning between reference frames. For example, a first frame of reference may move with the user’s head and a second frame of reference may move with the user’s body.
  • a transition may comprise changing an appearance of the content or the navigation tool.
  • in response to the user directing his gaze in a predetermined direction, or a predetermined distance or angle away from the content or the navigation tool, that content or navigation tool may be deactivated.
  • looking away from a user-selectable virtual object may cause that object to be deactivated, for example, by changing its appearance by dimming the object or by changing its motion relative to other virtual or real objects.
  • looking away from a group of selectable objects may cause the entire group of selectable objects to be deactivated, for example, depending on the distance of the eye motion or the fixation point of the eyes being located well beyond or well in front of the virtual content.
  • FIG. 24 illustrates an exemplary partitioning structure in which zones are identified to organize where virtual objects (including virtual tools, virtual peek windows, and virtual detailed windows) are displayed within the virtual scene. It is important to note that this description of the partitioning of a virtual environment into zones is provided to explain certain concepts relevant to the invention and is not needed in many implementations of the invention. In this particular description, the zones are in a head-fixed coordinate system since they relate to eye movements with respect to the head. FIG. 24 shows a virtual environment that is partitioned into three zones.
  • a first zone 2401 is located proximate to the center of the virtual environment, a second zone 2402 is located beyond the first zone 2401, and a third zone 2403 is located at the periphery of the user’s field of view.
  • the third zone 2403 defines a virtual area proximate to the periphery of a user’s field of view where it is difficult for the user to maintain an eye-focused position for an extended period of time.
  • Activation of the user interface within the virtual environment may be initiated by a user’s eye-movement to a particular location(s) within this third zone 2403 for a particular period of time.
  • a user is less likely to inadvertently activate the interface or virtual tools during use of the EMD 102.
  • intent to activate the interface and/or one or more virtual tools is more accurately identified by positioning the activation mechanism within an area of the virtual environment that is not commonly looked at during normal operation of the EMD and/or less comfortable for a user to maintain focus.
  • the first zone 2401 is much easier for a user to focus on for an extended period of time relative to the third zone 2403.
  • This first zone 2401 provides a space in the virtual environment where a user can comfortably review large amounts of information or otherwise interact with virtual objects for a longer period of time. Control of virtual objects within this first zone 2401 by a user’s eye-movement may also be more sensitive due to the user’s ability to control eye-movement more precisely within this zone. Thus, more nuanced virtual objects may be placed within this first zone 2401 that allow user interaction with smaller eye movements.
  • examples of such virtual objects include detailed text windows, dynamic control of electronics within the user’s geographical space, detailed calendars, books, web browsing, and other virtual objects known to one of skill in the art.
  • the second zone 2402 may function as a transition zone where summary information or basic control related to an activated tool is provided to the user.
  • Virtual objects within this second zone 2402 may provide a user a summary of content or simple controller that bridges an activated virtual tool to a detailed virtual window subsequently displayed in the first zone 2401.
  • a peek window, described in more detail below, may appear in this second zone 2402 after a virtual tool is selected.
  • This peek window may provide summary information or basic control of content associated with the virtual tool. If a user wants more information or control of this content, then the user may initiate a transition to a detailed window within the first zone 2401 that corresponds to this peek window.
  • FIG. 25 illustrates one example of the virtualized eye-movement control framework according to various embodiments of the disclosure.
  • An activation threshold 2510 is provided near the edge of the user’s eye range of motion 2505 and allows a user to activate a virtual user interface to interact with various virtual objects.
  • the shape of this threshold 2510 may vary across various embodiments and may be symmetrical, asymmetrical, jointed, disjointed or take any shape that fits within the virtual environment.
  • the shape of the threshold 2510 may be bean shaped so as to correlate with the user’s natural eye range of motion.
  • FIG. 25 illustrates this threshold 2510 as a ring proximate to the edge of the user’s eye range of motion 2505.
  • the threshold 2510 may be placed within zone three 2403, at the border between zone three 2403 and zone two 2402, or a combination thereof.
  • the threshold 2510 is positioned at a preferred distance from the edge of the eye range of motion 2505 which is defined after a calibration procedure is performed that identifies the user’s specific field of view.
  • the EMD 102 will have a user look in a variety of directions to his/her farthest extent and track these eye movements to construct a model.
  • the activation threshold 2510 is placed within a virtual scene at a point sufficiently close to the edge of the range of motion such that the user’s intent to activate the interface may be reliably predicted.
  • the activation threshold 2510 is positioned sufficiently close to the edge of the range of motion 2505, and away from where the user’s eye position is commonly located, to minimize erroneous activation of the interface.
  • the threshold 2510 functions purely as a threshold and is not shown within the virtual environment when in a deactivated/dismissed state.
  • the threshold 2510 is visible when in a deactivated/dismissed state to visually guide a user to the activation threshold.
  • An activation movement is defined as a user’s eye movement that crosses the activation threshold. This activation movement may constitute one or multiple eye movements. The system may also consider saccadic eye movement to the periphery in determining whether an activation threshold is crossed.
  • a time threshold is applied after the user’s eye movement crosses the activation threshold such that the user must maintain an eye position beyond the activation threshold for a predetermined period of time.
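  • The activation movement and time threshold described above could be checked, for example, as in the following sketch; the threshold fraction and dwell time are illustrative assumptions:

```python
# Hypothetical check of the activation movement: the eye must cross the activation
# threshold (modeled here as a fraction of the calibrated range of motion in the current
# direction) and stay beyond it for a predetermined dwell time.

class ActivationDetector:
    def __init__(self, threshold_fraction=0.85, dwell_s=0.5):
        self.threshold_fraction = threshold_fraction   # e.g., 85% of the way to the edge
        self.dwell_s = dwell_s
        self._beyond_since = None

    def update(self, eye_eccentricity_fraction: float, t_s: float) -> bool:
        """eye_eccentricity_fraction: current eye angle divided by the calibrated
        maximum angle in that direction (from the range-of-motion model).
        Returns True once the activation condition is met."""
        if eye_eccentricity_fraction >= self.threshold_fraction:
            if self._beyond_since is None:
                self._beyond_since = t_s
            return (t_s - self._beyond_since) >= self.dwell_s
        self._beyond_since = None
        return False

det = ActivationDetector()
print(det.update(0.90, t_s=0.0))  # False: threshold just crossed, dwell not yet satisfied
print(det.update(0.92, t_s=0.6))  # True: held beyond the threshold long enough
```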
  • Systems may track eye movement towards the periphery in various ways dependent on the AR/VR system being used.
  • eye motion is tracked by inertial sensors mounted in a contact lens as described in more detail within the specification.
  • the inertial sensors may operate independently and exclusively to track eye motion to the periphery or may leverage an auxiliary device, such as a headband, that tracks head movement. If this head-mounted auxiliary device is employed, then the system may track eye movement to the periphery by combining the measured eye movement using the inertial sensors and the measured head movement using the auxiliary device.
  • Eye motion is tracked in a world-fixed (reference vectors: magnetic north, gravity down) frame of reference. Eye motion toward the periphery of the eye socket may be difficult to track directly, but it can be inferred by leveraging a user’s eye movement characteristics that suggest such movement. For example, it can be inferred by keeping track of the history of eye angle over short periods of time. Most of the time people keep their eyes roughly centered in their eye sockets.
  • Activation and tool selection instructions to a user might be leveraged to define user patterns and movement ranges. For example, the system may instruct a user to “look straight ahead” and then to “look toward the periphery” while the system monitors these movements. This specific motion can be stored and used to identify when eye movement is towards the periphery.
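  • A minimal sketch of the inference described above, assuming the eyes stay roughly centered so that a short rolling average of the world-fixed eye angle approximates the head direction; the window length and sample rate are assumptions:

```python
# Hypothetical inference of eye-in-socket angle from a world-fixed eye direction:
# because the eyes are roughly centered in their sockets most of the time, a short
# rolling average of the world-fixed eye angle approximates the head direction, and the
# instantaneous deviation from that average approximates the eye's rotation toward the
# periphery of the socket.

from collections import deque

class PeripheryEstimator:
    def __init__(self, window=200):                    # ~2 s of samples at 100 Hz
        self.samples = deque(maxlen=window)

    def update(self, eye_angle_world_deg: float) -> float:
        """Return the estimated eye-in-socket angle (deg) for the latest sample."""
        self.samples.append(eye_angle_world_deg)
        head_estimate = sum(self.samples) / len(self.samples)
        return eye_angle_world_deg - head_estimate

est = PeripheryEstimator()
for angle in [0.0] * 50:           # user looking straight ahead for a while
    est.update(angle)
print(round(est.update(30.0), 1))  # a sudden 30-degree world-fixed change is attributed
                                   # mostly to eye motion toward the periphery
```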
  • head motion is tracked directly by sensors in the body of the head-mounted display.
  • the eye motion is tracked by small cameras in the head-mounted display that are aimed at the eyeballs. These cameras may be located with virtual reality or augmented reality goggles or glasses.
  • eye motion is tracked in a head- fixed frame of reference such that eye motion toward the periphery of the eye socket is tracked directly.
  • these types of head-mounted and camera-based systems do not need to infer anything.
  • a set of virtual tools are displayed along a path that creates an outline of an inner area 310 and outer area within the virtual scene. This activation of the set of virtual tools may occur immediately after the interface is activated. As such, the activation of the interface and the display of the virtual tools may appear to a user as being simultaneous events.
  • the inner area 310 may relate to a closed area or an open area depending on the shape of the threshold.
  • the virtual tools 2550A - 2550D are positioned within the activation threshold 2510.
  • this outline on which virtual tools 2550A - 2550D are positioned may vary across embodiments and may be symmetrical, asymmetrical, jointed, disjointed or take any shape that fits within the virtual environment.
  • the shape of the outline 2530 may also be bean shaped, circular, oval, rectangular, an arc, a line of tools, etc.
  • FIG. 25 illustrates this outline 2530 as a ring within the interior of the activation threshold 2510.
  • the ring 2530 is not shown within the virtual scene and the virtual tools 2550A - 2550D appear as discrete icons.
  • the ring 2530 is visible and connects the virtual tools 2550A - 2550D along its outline.
  • the line may convey to the user that there are additional tools that fall outside of the span of eccentricity of the visible projection.
  • the line may also guide the user’s glance toward other available tools.
  • the line will aid the user in understanding where certain tools are located within the virtual environment or further organize the tools within the virtual scene.
  • the virtual tools 2550A - 2550D represent content, functionality, controls, menus or other things that may be viewed or manipulated within the virtual scene. Examples of these tools may include textual icons, time/date symbols, device controls, menu symbols, or other icons representative of virtual content.
  • an eye-tracker monitors the user’s eyes within the virtual scene to determine when the user glances at or proximate to the tools in order to select a particular tool.
  • This glance may take into account a variety of factors in determining whether to select the tool including the period of time the glance focuses at or proximate to the tool, head movement (or lack thereof) associated with the glance, saccadic characteristics of the eye movement of the glance, the eye distance traveled by the glance and other eye movements that may indicate an intent to activate a particular tool.
  • content or functionality is provided to the user within the interior area 310 in an organized manner such that the user may interact with the system to access content and/or control at a variety of granularities.
  • selected tools and/or windows are locked in the virtual space relative to the user’s head, body or physical environment to allow the user to interact with it more efficiently.
  • This organization of content is aligned to the way in which an individual visually interacts with his/her environment.
  • Detailed content or nuanced virtual control is positioned near the center of the user’s field of view, while summary information is located at a greater distance from the center. This organization provides a preferred virtual interface that is more comfortable and which reduces errors when this interaction is controlled by tracking the eye movements of the user.
  • FIG. 26A illustrates an example of a peek window being displayed in response to a virtual tool being selected according to various embodiments.
  • a virtual tool 2550C positioned on the tool ring 2530 is selected by a user.
  • a peek window 2620 is displayed within the inner area 310 and provides general information related to the selected tool 2550C.
  • this peek window 2620 may provide basic summary information related to the tool 2550C or may provide a basic control mechanism that allows the user to interact with another virtual device or device external to the virtual environment.
  • the peek window 2620 may be positioned within the virtual scene at a location closer to the center point relative to the virtual tool 2550C.
  • the peek window 2620 may be positioned in a location related to zone two 2502 in the previous discussion. This positioning allows the user to interact more comfortably, using eye movement, with the peek window 2620 than with the virtual tool 2550C, while still not being in the ideal center zone (e.g., zone one 2501).
  • the user may dismiss the peek window 2620 by looking away from the window for a predetermined period of time.
  • the system may identify when a user looks away from the window by measuring the angle of a glance relative to the window. If this glance angle goes beyond a threshold for a predetermined amount of time, then the user’s intent to dismiss the window may reasonably be inferred.
  • the dismissal of the peek window 2620 results in the window disappearing from the virtual scene.
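  • A minimal sketch of such glance-angle-based dismissal, with illustrative threshold and hold values, is given below:

```python
# Hypothetical dismissal logic: if the angle between the user's gaze and the peek window
# exceeds a threshold continuously for a predetermined time, infer an intent to dismiss
# and remove the window from the virtual scene.

class DismissalTimer:
    def __init__(self, angle_threshold_deg=12.0, hold_s=1.5):
        self.angle_threshold_deg = angle_threshold_deg
        self.hold_s = hold_s
        self._away_since = None

    def update(self, glance_angle_from_window_deg: float, t_s: float) -> bool:
        """Feed the current glance angle (deg) relative to the window; returns True
        when the window should be dismissed."""
        if glance_angle_from_window_deg > self.angle_threshold_deg:
            if self._away_since is None:
                self._away_since = t_s
            return (t_s - self._away_since) >= self.hold_s
        self._away_since = None
        return False

timer = DismissalTimer()
print(timer.update(20.0, t_s=0.0))  # False: user just looked away
print(timer.update(25.0, t_s=2.0))  # True: looked away long enough -> dismiss the window
```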
  • Another method of dismissal applies to tools that are immersive, meaning they cover all or a substantial portion of the sphere around the user’s head and body. In these cases, there may not be a clear or easy-to-reach place to look away from the displayed virtual content.
  • An alternate approach to dismissing such content is to repeat the activation gesture (e.g., hold the head steady and look to the periphery of the range of motion again).
  • one or more activation symbols 2630 are provided proximate to the peek window 2620 that allow a user to select and initiate a second window that displays more detailed information or provides more complex control related to the peek window 2620.
  • while this activation symbol 2630 is shown as a triangle adjacent to the peek window 2620, one skilled in the art will recognize that this symbol 2630 may be of any form and located in any position proximate to the window 2620.
  • Figure 26B illustrates an activated virtual window in accordance with various embodiments of the disclosure.
  • the activated virtual window 2640 may provide more detailed information, a larger amount of text, more complex control, or any other content related to the peek window 2620. Further, the virtual window 2640 is positioned nearer to the center of the virtual environment (i.e., within zone one 2501) to allow the user the most comfortable eye and head position to interact with the virtual window 2640 in certain embodiments.
  • the user may dismiss the virtual window 2640 by looking away from the window for a predetermined period of time. The dismissal of the virtual window 2640 results in the window disappearing from the virtual scene and possibly the system entering a standby or sleep state.
  • FIGS 27A and 27B illustrate an example in which time, date and calendar information are provided to a user in a virtual environment according to various embodiments of the invention.
  • a clock icon 2703C is shown as a virtual tool on the ring 2530 after activation.
  • a peek window 2720 is displayed within the inner area 310 of the virtual scene.
  • the peek window 2720 displays the current time and date and summary information about the user’s calendar.
  • a virtual window 2740 is displayed that provides detailed information from the user’s personal calendar. This virtual window 2740 is displayed at or near the center of the virtual scene.
  • FIGS 28A and 28B illustrate an example in which music controllers are provided to a user in a virtual environment according to various embodiments of the invention.
  • a music icon 2803C is shown as a virtual tool on the ring 2530 after activation.
  • a peek window 2820 is displayed within the inner area 310 of the virtual scene.
  • the peek window 2820 displays a basic music controller that provides basic control to allow a user to play, pause or skip songs being played on a musical device.
  • a virtual window 2840 is displayed that provides a more dynamic music controller, giving the user the ability to control a variety of functions of a musical device.
  • This virtual window 2840 is displayed at or near the center of the virtual scene.
  • FIGS 29A and 29B illustrate an example in which text is provided to a user in a summary format or a complete format according to various embodiments of the invention.
  • a text icon 2903C is shown as a virtual tool on the ring 2530 after activation.
  • a peek window 2920 is displayed within the inner area 310 of the virtual scene.
  • the peek window 2920 displays a list of books, texts, or summary of texts that allows a user to select a topic.
  • a virtual window 2940 is displayed in which more detailed text is provided to the user.
  • the user’s eye position is monitored as the text is read so that text is scrolled within the window 2940.
  • Other control features may be included in the virtual window 2940 to allow the user to skip through the text.
  • This virtual window 2940 is displayed at or near the center of the virtual scene.
  • FIG. 30 illustrates an eye-tracking user interface manager in accordance with various embodiments of the disclosure.
  • the manager 3000 may be implemented as hardware, software, firmware or a combination thereof.
  • the manager 3000 may be implemented in an EMD, an auxiliary device that interfaces with the EMD, a cloud device that interfaces with the EMD, or any other device that controls various eye-tracking features that enable a user to activate, select and dismiss virtual objects within a virtual scene.
  • the manager 3000 comprises a processing unit 3020 that interfaces with various sensors and components.
  • a ring calibration and initial setup module 3002 initializes the ring and defines a location of the ring within the virtual scene. This module 3002 may define this location of the ring by identifying a user’s field of view via a series of eye monitoring tests that define the edges of the field of view.
  • a tool placement module 3004 places the plurality of virtual tools along the ring. This placement of tools may depend on the frequency with which a user selects one or more tools, placing those tools in locations that most accurately identify a user’s intent to select tools.
  • An eye motion and glance detector 3012 receives eye-movement data from sensors and translates this data into reference frames correlated to the virtual scene. Using this functionality, the system can track eye movement relative to virtual objects within the virtual scene.
  • a tool-to-peek window control 3006 manages the transition from a selected virtual tool to a peek window in accordance with the above-described methods.
  • a peek-to-virtual window control 3008 controls the transition from a selected activation symbol to a virtual window in accordance with the above-described methods.
  • the manager 3000 may also contain a user history and system optimizer 3010 to adjust various parameters or characteristics of the user interface based on an analysis of how the user interacts with the virtual objects. This optimizer 3010 may record errors generated during the use of the user interface and adjust parameters to improve the accuracy of a user activating, selecting, or dismissing virtual objects within the virtual scene.
  • a viewport may be defined as a section of an associated virtual content and be visible when the user’s gaze intersects it.
  • Each viewport may comprise one or more scrolling zones to enable scrolling of the virtual content within the viewport.
  • a viewport may be akin to a window in a desktop graphical user interface (GUI).
  • a GUI interface may comprise one or more windows with each window rendering separate content.
  • a scene visible to the user may comprise one or more viewports. Contents within each of the one or more viewports may be related or independent from each other.
  • a viewport may be typically rectangular, but may be any shape.
  • a full content, e.g., a complete image or text, may be larger than the viewport. Scrolling, also known as panning, of the full content within the viewport may be needed to allow a user to access parts of the content that are outside the viewport.
  • FIG. 31 illustrates a viewport with scrolling zones for scrolling contents within the viewport according to embodiments of the present disclosure.
  • the viewport 3104 has a dimension which allows a part of a virtual scene 3102, e.g., a complete image or video, a magazine layout with multiple images and text, etc., to be rendered within the viewport.
  • the viewport 3104 may have one or more designated scrolling zones to enable scrolling the virtual scene.
  • the virtual scene 3102 is scrolled in one or more directions such that previously concealed contents of the virtual scene 3102 are rendered in the viewport 3104 in a desired manner.
  • the terms “scroll,” “scrolling,” and “scrolled” refer to shifting of the virtual scene, such as the complete image or video as shown in FIG. 31, relative to the viewport.
  • scrolling of the virtual scene 3102 starts when the gaze point or viewpoint 3120 of the user is within a scrolling zone, or when the gaze point or viewpoint of the user remains within a scrolling zone for a time longer than a threshold, e.g., 0.1 second.
  • one or more scrolling zones within the viewport are established to make scrolling possible.
  • Scrolling zones may typically be at the four edges of the viewport, and they may overlap.
  • scrolling zones may comprise a top vertical scrolling zone 3105, a bottom vertical scrolling zone 3106, a left horizontal scrolling zone 3107, and a right horizontal scrolling zone 3108.
  • Each scrolling zone may have its own associated scrolling action upon the gaze point or viewpoint of the user being within the scrolling zone. For example, a user’s action of looking into the top vertical scrolling zone 3105 has the effect of pushing the complete image 3102 to opposite direction (down) within the viewport.
  • Such a scrolling may be known as scrolling down, or panning down.
  • the complete image / video may be scrolled / panned up by a user’s action of looking at the bottom vertical scrolling zone 3106.
  • Horizontal scrolling may be achieved similarly, by looking to the left horizontal scroll zone 3107 for right scrolling and to the right horizontal scroll zone 3108 for left scrolling.
  • the viewport 3104 may also comprise a no-scroll zone. The complete image 3102 does not scroll when a user is gazing at a point within the no-scroll zone or when a scrolling limit for the complete image 3102 is reached.
  • horizontal and vertical scrolling zones may overlap to form multiple overlap or corner scrolling areas 3112. Gazing at a point in the overlapping areas may cause simultaneous horizontal and vertical scrolling. For example, looking at the bottom right area 3112 where the right and bottom scrolling zones overlap may simultaneously scroll the complete image 3102 to the left and upwards.
  • scrolling speed of the complete image 3102 may be constant or variable, depending on a location of the gaze point in one of the one or more scrolling zones.
  • the scrolling speed may be regulated, such that the scrolling becomes faster when a user is gazing within a scrolling zone farther towards an outer edge of the scrolling zone and becomes slower when the user is gazing in the scrolling zone farther inwards towards an inner edge of the scrolling zone. In this manner, the user may smoothly follow a point within the complete image while it scrolls into view.
  • the scrolling speed may be regulated to depend on the dwell time of the gaze point within a scrolling zone.
  • scrolling speed of the complete image 3102 in one direction may be the same or different from scrolling speed of the complete image 3102 in another direction.
  • Horizontal scrolling speed may be designated to be proportional to vertical scrolling speed according to the aspect ratio of the complete image 3102. For example, when the complete image 3102 is a panorama photo whose width is much larger than its height, it would be desirable to set the horizontal scrolling speed at a higher value than the vertical scrolling speed (an illustrative sketch of this speed regulation is provided after this list).
  • FIG. 32 illustrates a viewport with scrolling zones for text scrolling within the viewport according to embodiments of the present disclosure. Similar to the viewport 3104 shown in FIG. 31, the viewport 3204 has a dimension which allows only part of a larger virtual scene 3202, e.g., a complete text message or document, to be rendered within the viewport.
  • the viewport 3204 may have a top scrolling zone 3205 and a bottom scrolling zone 3206 to enable scrolling the text message down or up accordingly.
  • when eye tracking data identify that a gaze point or viewpoint of the user is within the top or bottom scrolling zone, the virtual scene 3202 is scrolled down or up to present additional content for user review.
  • scrolling of the text message 3202 starts when the gaze point or viewpoint 3220 of the user enters the top or bottom scrolling zone, or when the gaze point or viewpoint of the user remains within the top or bottom scrolling zone for a time longer than a threshold, e.g., 0.1 second.
  • the viewport 3204 may also have a no-scroll zone 3210.
  • the text message 3202 does not scroll when a user is gazing at a point within the no-scroll zone 3210.
  • Although FIG. 32 only shows plain text in the viewport, one skilled in the art shall understand that various formats, including text, images, video, icons, emoji, etc., may be displayed in the viewport individually or in combination.
  • the viewport 3204 may be larger than a visible section 3212 which is defined by the SoE 304.
  • the visible section 3212, with the gaze point 3220 at its center, displays the portions of the viewport 3204 that fall within the visible section 3212.
  • some scrolling zones, e.g., the top scrolling zone 3205, may fall at least partly outside the visible section 3212; in that case, the user needs to look up to move the visible section upward and reveal at least some part of the top scrolling zone 3205 to enable downward scrolling.
  • FIG. 33 illustrates a process for virtual scene scrolling in a viewport according to embodiments of the present disclosure.
  • Process 3300 begins, at step 3302, when at least part of a visible section of a virtual scene is projected from a contact lens onto a user’s retina.
  • the visible section comprises one or more viewports.
  • Each viewport may be defined as a section of an associated virtual content with a full dimension larger than the viewport and be visible when the user’s gaze intersects it.
  • Each viewport may comprise one or more scrolling zones to enable scrolling of the virtual content within the viewport.
  • At step 3304, at least one of a position, an orientation, or a motion of the user’s eye is tracked to generate tracking data indicative of an intent of the user.
  • Eye tracking may be performed by one or more sensors disposed within the contact lens that projects the virtual scene onto a retina of the user.
  • At step 3306, in response to tracking data representing a gaze point or a viewpoint of the user within a scrolling zone of a viewport, the virtual content is scrolled within the viewport in a predetermined direction and at a scrolling speed.
  • the virtual content may be an image, text, video, etc.
  • Embodiments of scrolling described in association with FIG. 31 and FIG. 32 may also be applicable in step 3306.
  • scrolling is stopped when one or more scrolling stop conditions are met.
  • the one or more scrolling stop conditions may be a gaze point within a no-scroll zone, the virtual content being scrolled to an end position (e.g., the topmost, the bottommost, the leftmost, or the rightmost), etc. (a per-frame sketch of this scrolling process is provided after this list).
  • a virtual content, such as an image, rendered within a viewport may be zoomable and unzoomable using gaze, such that a user may change the scale of the viewed area of the virtual scene in order to view more or less detail.
  • FIG. 34A and FIG. 34B illustrate a viewport for zooming and unzooming within the viewport according to embodiments of the present disclosure.
  • the viewport 3404 displays at least a part of a virtual scene 3402, e.g., a complete image or video.
  • the viewport 3404 may have a designated zooming area 3410 and a designated unzooming area 3406.
  • when eye tracking data, generated by one or more sensors disposed within a contact lens, identify that a gaze point or viewpoint 3408 of the user stays within a predetermined range, e.g., a 2° circle, inside the zooming area 3410 for at least a threshold of time, e.g., 0.2 second, the virtual scene 3402 is zoomed such that the scale of the virtual scene 3402 is enlarged to reveal more details around the gaze point.
  • zooming may be done in conjunction with scrolling, to center the content (and therefore the gaze point) as much as possible within the viewport.
  • the zooming area 3410 and the unzooming area 3406 may correspond to the no-scroll zone 3110 and the scrolling zones (3105-3108) respectively.
  • the gaze point may be referred to as an intersection point or area between an eye orientation, as indicated by eye tracking data, and the visible section of the virtual scene.
  • the threshold of time for zooming initiation may be a time interval close to zero, or even zero, such that zooming starts immediately, or almost immediately, when the gaze point is within the predetermined range inside a zooming area, providing instant, or nearly instant, feedback.
  • as shown in FIG. 34A, when the user gazes at the gaze point 3408 (the saxophone) for a time longer than a threshold, the virtual scene inside the viewport starts zooming and simultaneously shifts up and to the left to center around the gaze point 3408 (the saxophone), as shown in FIG. 34B (an illustrative zoom-and-recenter sketch is provided after this list).
  • zooming speed of the complete image 3402 may be constant or variable.
  • zooming speed may be regulated in proportion to where the gaze point lies between the edge of the viewport and the border of the zooming area: the closer the gaze point is to the zooming area border, the slower the zooming speed becomes.
  • zooming speed may be set to depend on the dwell time of the gaze point within the zooming area, such that the longer the user gazes at a spot within the zooming area, the faster the zooming becomes.
  • the zooming speed may have a predetermined maximum value.
  • unzooming may use the unzooming area 3406, which is around the edges of the viewport.
  • the unzooming area 3406 may be the same or different from the scrolling zones described earlier. Gazing in an unzooming area starts to zoom out towards the direction of that unzooming area — in other words, in such a way that reveals more of the complete image in the direction the user is looking towards.
  • unzooming speed of the complete image 3402 within the viewport 3404 may be constant or variable.
  • unzooming speed may be set as a function of the distance between the gaze point and an edge of the viewport: the closer the gaze point is to an edge of the viewport, the faster the unzooming (an illustrative sketch of this edge-distance-dependent rate is provided after this list).
  • FIG. 35 illustrates a process for virtual scene zooming according to embodiments of the present disclosure.
  • Process 3500 begins, at step 3502, when at least part of a virtual scene is projected from a contact lens onto a user’s retina.
  • the virtual scene may comprise one or more viewports.
  • Each viewport may be defined as a section of an associated virtual content with a full dimension larger than the viewport and be visible when the user’s gaze intersects it.
  • Each viewport comprises a zooming area to enable zooming of the virtual content within the viewport.
  • At step 3504, at least one of a position, an orientation, or a motion of a user’s eye is tracked within a viewport to generate tracking data indicative of an intent of the user.
  • Eye tracking may be performed by one or more sensors disposed within the contact lens that projects the virtual scene onto a retina of the user.
  • At step 3506, in response to tracking data representing that a gaze point or a viewpoint of the user is within a predetermined range inside the zooming area of a viewport for at least a threshold of zooming time, the virtual content is zoomed within the viewport at a predetermined zooming speed.
  • the virtual content may be an image, text, video, etc.
  • the predetermined zooming speed may be constant or dependent on the distance between the gaze point and an edge of the viewport.
  • zooming is stopped when one or more zooming stop conditions are met.
  • the one or more zooming stop conditions may be a gaze point out of the zooming area around which the zooming starts, a gaze point within an unzooming area, the virtual content being zoomed to an end position (e.g., the maximum scale supported by the contact lens), etc.
  • FIG. 36 illustrates a process 3600 for virtual scene unzooming according to embodiments of the present disclosure.
  • Process 3600 begins, at step 3602, when at least part of a virtual scene is projected from a contact lens onto a user’s retina.
  • the virtual scene may comprise one or more viewports.
  • Each viewport may be defined as a section of an associated virtual content with a full dimension larger than the viewport and be visible when the user’s gaze intersects it.
  • Each viewport comprises an unzooming area to enable unzooming of the virtual content within the viewport.
  • Step 3604 may be similar to step 3504 shown in FIG. 35.
  • At step 3606, in response to tracking data representing that a gaze point or a viewpoint of the user is inside an unzooming area of a viewport for a time longer than a threshold, the virtual content is unzoomed within the viewport at a predetermined unzooming speed to reveal more content in the viewport.
  • the predetermined unzooming speed may be constant or dependent on the distance between the gaze point and an edge of the viewport.
  • unzooming may happen in parallel with scrolling / panning.
  • virtual content displayed in the viewport starts unzooming and may simultaneously be shifted horizontally, vertically, or both, in such a way as to move the point the user is gazing at toward the center of the viewport during unzooming.
  • unzooming is stopped when one or more unzooming stop conditions are met.
  • the one or more unzooming stop conditions may be a gaze point within a zooming zone, the virtual content being unzoomed to an end position (e.g., the broadest view supported by the contact lens), etc.
  • processes shown in FIG. 35 and FIG. 36 for zooming/unzooming may be implemented in combination with the process shown in FIG. 33 for scrolling.
  • a zooming area and an unzooming area of a viewport may also be configured as a no-scroll zone and a scrolling zone respectively.
  • the zooming area (no-scroll zone) and the unzooming area (scrolling zone) may be designated for zooming and unzooming by default.
  • the zooming area (no-scroll zone) and the unzooming area (scrolling zone) may then function for scrolling control.
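
The following is a minimal Python sketch, not part of the disclosed system, of the zone-based scroll triggering described above: it maps a gaze point and its dwell time to a scroll direction, with overlapping corner zones combining both axes. The Viewport fields, the coordinate convention (origin at the top-left corner, y increasing downward), the zone thickness, the units, and the 0.1 second default dwell threshold are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Viewport:
    width: float           # viewport width (illustrative units, e.g., degrees of visual angle)
    height: float          # viewport height
    zone_thickness: float  # thickness of each edge scrolling zone

def resolve_scroll_direction(vp: Viewport, gaze_x: float, gaze_y: float,
                             dwell_s: float, dwell_threshold_s: float = 0.1):
    """Map a gaze point inside the viewport to a scroll direction (dx, dy).

    Gazing in the top zone pushes the content down, the bottom zone pushes it up,
    the left zone shifts it right, and the right zone shifts it left; overlapping
    corner areas combine both axes. (0, 0) means the gaze is in the no-scroll zone
    or has not dwelt in a scrolling zone for long enough.
    """
    if dwell_s < dwell_threshold_s:
        return 0.0, 0.0
    t = vp.zone_thickness
    dx = dy = 0.0
    if gaze_y < t:                      # top vertical scrolling zone
        dy = 1.0                        # content shifts down ("scrolling down")
    elif gaze_y > vp.height - t:        # bottom vertical scrolling zone
        dy = -1.0                       # content shifts up
    if gaze_x < t:                      # left horizontal scrolling zone
        dx = 1.0                        # content shifts right
    elif gaze_x > vp.width - t:         # right horizontal scrolling zone
        dx = -1.0                       # content shifts left
    return dx, dy
```

A gaze in the bottom-right corner area, for example, returns (-1.0, -1.0), i.e., the content shifts left and up, matching the corner behavior described for FIG. 31.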
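
A similarly hedged sketch of the speed regulation described above, assuming a linear ramp from the inner edge of a scrolling zone to the outer edge of the viewport; the v_min / v_max values are illustrative, and the aspect-ratio scaling mirrors the panorama example.

```python
def scroll_speed(depth_into_zone: float, zone_thickness: float,
                 v_min: float = 2.0, v_max: float = 30.0) -> float:
    """Speed ramps up linearly as the gaze moves from the inner edge of a
    scrolling zone (depth 0) toward the outer edge of the viewport (depth ==
    zone_thickness), so the user can smoothly follow a point as it scrolls
    into view. v_min and v_max are illustrative values in viewport units/s."""
    frac = max(0.0, min(1.0, depth_into_zone / zone_thickness))
    return v_min + frac * (v_max - v_min)

def axis_speeds(base_speed: float, content_width: float, content_height: float):
    """Scale horizontal speed relative to vertical speed by the content's
    aspect ratio, e.g., so a wide panorama pans faster horizontally."""
    return base_speed * (content_width / content_height), base_speed
```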
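
The next sketch ties the pieces together as one per-frame update in the spirit of the scrolling process of FIG. 33, reusing the Viewport and resolve_scroll_direction helpers from the first sketch. The content offset representation, the clamping, and the frame-time parameter are assumptions; clamping to the content bounds stands in for the end-position stop condition.

```python
def scroll_step(offset_x: float, offset_y: float,
                content_w: float, content_h: float,
                vp: Viewport, gaze_x: float, gaze_y: float,
                dwell_s: float, speed: float, dt: float):
    """One frame of gaze-driven scrolling: update the content offset (the
    content coordinate shown at the viewport's top-left corner) according to
    the gaze, then clamp to the content bounds so scrolling stops at the
    topmost / bottommost / leftmost / rightmost positions."""
    dx, dy = resolve_scroll_direction(vp, gaze_x, gaze_y, dwell_s)
    # If the content shifts right/down within the viewport, the window into
    # the content moves left/up, i.e., the offset decreases.
    offset_x -= dx * speed * dt
    offset_y -= dy * speed * dt
    offset_x = max(0.0, min(content_w - vp.width, offset_x))
    offset_y = max(0.0, min(content_h - vp.height, offset_y))
    return offset_x, offset_y
```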
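
A sketch of the zoom-and-recenter behavior of FIG. 34A and FIG. 34B: each frame, the content is enlarged about the gazed point while that point drifts toward the viewport center. The content-to-viewport mapping, the zoom and recenter rates, and the frame time are illustrative assumptions, not the disclosed implementation.

```python
def zoom_step(offset_x: float, offset_y: float, scale: float,
              gaze_x: float, gaze_y: float,
              viewport_w: float, viewport_h: float,
              zoom_rate: float = 0.8, recenter_rate: float = 1.5,
              dt: float = 1.0 / 60.0):
    """One frame of gaze-driven zooming with recentering.

    A content point p maps to viewport coordinate (p - offset) * scale. The
    content is enlarged about the gazed point, and that point is simultaneously
    drifted toward the viewport center. A negative zoom_rate produces the
    corresponding unzoom-with-recentering behavior described for FIG. 36.
    """
    # Content point currently under the gaze.
    px = offset_x + gaze_x / scale
    py = offset_y + gaze_y / scale

    new_scale = scale * (1.0 + zoom_rate * dt)

    # Let the gazed point drift toward the viewport center while zooming.
    cx, cy = viewport_w / 2.0, viewport_h / 2.0
    blend = min(1.0, recenter_rate * dt)
    tx = gaze_x + (cx - gaze_x) * blend
    ty = gaze_y + (cy - gaze_y) * blend

    # Solve for the offset that places content point (px, py) at viewport (tx, ty).
    new_offset_x = px - tx / new_scale
    new_offset_y = py - ty / new_scale
    return new_offset_x, new_offset_y, new_scale
```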
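
Finally, a sketch of the edge-distance-dependent unzooming rate: the rate grows as the gaze point approaches the viewport edge. The minimum and maximum rates are illustrative relative-scale changes per second.

```python
def unzoom_rate(gaze_x: float, gaze_y: float,
                viewport_w: float, viewport_h: float,
                rate_min: float = 0.1, rate_max: float = 1.2) -> float:
    """Unzooming rate grows as the gaze point approaches the viewport edge:
    near the center the rate is close to rate_min, at an edge it reaches
    rate_max."""
    edge_dist = min(gaze_x, gaze_y, viewport_w - gaze_x, viewport_h - gaze_y)
    half_min = min(viewport_w, viewport_h) / 2.0
    frac = max(0.0, min(1.0, edge_dist / half_min))   # 0 at an edge, 1 at the center
    return rate_max - frac * (rate_max - rate_min)
```

The returned rate could, for instance, be passed as a negative zoom_rate to the zoom_step sketch above to combine unzooming with the shifting toward the gazed point described for FIG. 36.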

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Optics & Photonics (AREA)
  • Ophthalmology & Optometry (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Acoustics & Sound (AREA)
  • Otolaryngology (AREA)
  • User Interface Of Digital Computer (AREA)
EP20878692.1A 2019-10-24 2020-10-19 Augenbasierte aktivierungs- und werkzeugauswahlsysteme und verfahren Pending EP4049118A4 (de)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US16/662,842 US10901505B1 (en) 2019-10-24 2019-10-24 Eye-based activation and tool selection systems and methods
US202062957734P 2020-01-06 2020-01-06
US16/940,152 US11662807B2 (en) 2020-01-06 2020-07-27 Eye-tracking user interface for virtual tool control
PCT/US2020/056376 WO2021080926A1 (en) 2019-10-24 2020-10-19 Eye-based activation and tool selection systems and methods

Publications (2)

Publication Number Publication Date
EP4049118A1 true EP4049118A1 (de) 2022-08-31
EP4049118A4 EP4049118A4 (de) 2024-02-28

Family

ID=75620082

Family Applications (1)

Application Number Title Priority Date Filing Date
EP20878692.1A Pending EP4049118A4 (de) 2019-10-24 2020-10-19 Augenbasierte aktivierungs- und werkzeugauswahlsysteme und verfahren

Country Status (3)

Country Link
EP (1) EP4049118A4 (de)
CN (1) CN115004129A (de)
WO (1) WO2021080926A1 (de)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11333902B2 (en) * 2017-12-12 2022-05-17 RaayonNova LLC Smart contact lens with embedded display and image focusing system

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102271817B1 (ko) * 2014-09-26 2021-07-01 삼성전자주식회사 증강현실을 위한 스마트 콘택렌즈와 그 제조 및 동작방법
WO2016138178A1 (en) * 2015-02-25 2016-09-01 Brian Mullins Visual gestures for a head mounted device
US20170371184A1 (en) * 2015-07-17 2017-12-28 RaayonNova LLC Smart Contact Lens With Orientation Sensor
US20170115742A1 (en) * 2015-08-01 2017-04-27 Zhou Tian Xing Wearable augmented reality eyeglass communication device including mobile phone and mobile computing via virtual touch screen gesture control and neuron command
US10649233B2 (en) * 2016-11-28 2020-05-12 Tectus Corporation Unobtrusive eye mounted display
CN108227239A (zh) * 2016-12-15 2018-06-29 索尼移动通讯有限公司 智能隐形眼镜以及包括该智能隐形眼镜的多媒体系统
US20180173011A1 (en) * 2016-12-21 2018-06-21 Johnson & Johnson Vision Care, Inc. Capacitive sensing circuits and methods for determining eyelid position using the same
US10642352B2 (en) * 2017-05-18 2020-05-05 Tectus Corporation Gaze calibration via motion detection for eye-mounted displays

Also Published As

Publication number Publication date
EP4049118A4 (de) 2024-02-28
CN115004129A (zh) 2022-09-02
WO2021080926A1 (en) 2021-04-29

Similar Documents

Publication Publication Date Title
KR102196975B1 (ko) 실제 객체 및 가상 객체와 상호작용하기 위한 생체기계적 기반의 안구 신호를 위한 시스템 및 방법
US20210124415A1 (en) Eye-based activation and tool selection systems and methods
US11662807B2 (en) Eye-tracking user interface for virtual tool control
US10564714B2 (en) Systems and methods for biomechanically-based eye signals for interacting with real and virtual objects
US9035878B1 (en) Input system
US11907417B2 (en) Glance and reveal within a virtual environment
US8643951B1 (en) Graphical menu and interaction therewith through a viewing window
US11995285B2 (en) Methods for adjusting and/or controlling immersion associated with user interfaces
US20160025980A1 (en) External user interface for head worn computing
US20190385372A1 (en) Positioning a virtual reality passthrough region at a known distance
US20210303107A1 (en) Devices, methods, and graphical user interfaces for gaze-based navigation
EP3807745B1 (de) Pinning von durchgangsregionen von virtueller realität zu standorten der realen welt
US11720171B2 (en) Methods for navigating user interfaces
US20230336865A1 (en) Device, methods, and graphical user interfaces for capturing and displaying media
EP4049118A1 (de) Augenbasierte aktivierungs- und werkzeugauswahlsysteme und verfahren
US20230092874A1 (en) Devices, Methods, and Graphical User Interfaces for Interacting with Three-Dimensional Environments
US20240103803A1 (en) Methods for interacting with user interfaces based on attention
US20240103681A1 (en) Devices, Methods, and Graphical User Interfaces for Interacting with Window Controls in Three-Dimensional Environments
US20240152245A1 (en) Devices, Methods, and Graphical User Interfaces for Interacting with Window Controls in Three-Dimensional Environments
US20240152256A1 (en) Devices, Methods, and Graphical User Interfaces for Tabbed Browsing in Three-Dimensional Environments
US20240103712A1 (en) Devices, Methods, and Graphical User Interfaces For Interacting with Three-Dimensional Environments
US20240103678A1 (en) Devices, methods, and graphical user interfaces for interacting with extended reality experiences
US20240094819A1 (en) Devices, methods, and user interfaces for gesture-based interactions
US20240103685A1 (en) Methods for controlling and interacting with a three-dimensional environment
US20240103636A1 (en) Methods for manipulating a virtual object

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20220524

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
RIC1 Information provided on ipc code assigned before grant

Ipc: G02C 11/00 20060101ALI20231019BHEP

Ipc: G02B 27/01 20060101ALI20231019BHEP

Ipc: G06F 1/16 20060101ALI20231019BHEP

Ipc: G02C 7/04 20060101ALI20231019BHEP

Ipc: G06F 3/0485 20220101ALI20231019BHEP

Ipc: G06F 3/04842 20220101ALI20231019BHEP

Ipc: G06F 3/0482 20130101ALI20231019BHEP

Ipc: G06F 3/01 20060101AFI20231019BHEP

A4 Supplementary search report drawn up and despatched

Effective date: 20240125

RIC1 Information provided on ipc code assigned before grant

Ipc: G02C 11/00 20060101ALI20240119BHEP

Ipc: G02B 27/01 20060101ALI20240119BHEP

Ipc: G06F 1/16 20060101ALI20240119BHEP

Ipc: G02C 7/04 20060101ALI20240119BHEP

Ipc: G06F 3/0485 20220101ALI20240119BHEP

Ipc: G06F 3/04842 20220101ALI20240119BHEP

Ipc: G06F 3/0482 20130101ALI20240119BHEP

Ipc: G06F 3/01 20060101AFI20240119BHEP