WO2018201150A1 - Control system for a three dimensional environment - Google Patents

Control system for a three dimensional environment

Info

Publication number
WO2018201150A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
control
control feature
virtual
reference frame
Application number
PCT/US2018/030279
Other languages
French (fr)
Inventor
Montana REED
Original Assignee
Mira Labs, Inc.
Application filed by Mira Labs, Inc. filed Critical Mira Labs, Inc.
Publication of WO2018201150A1 publication Critical patent/WO2018201150A1/en


Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0093Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0179Display position adjusting means not related to the information to be displayed
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0138Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0179Display position adjusting means not related to the information to be displayed
    • G02B2027/0187Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye

Definitions

  • Head Mounted Displays produce images intended to be viewed by a single person in a fixed position related to the display.
  • HMDs may be used for Virtual Reality (VR) or Augmented Reality (AR) experiences.
  • VR Virtual Reality
  • AR Augmented Reality
  • the HMD of a virtual reality experience immerses the user's entire field of vision and provides no image of the outside world.
  • the HMD of an augmented reality experience renders virtual, or pre-recorded images superimposed on top of the outside world.
  • Mac Macintosh
  • the virtual objects can overlap real world objects, obstruct the user's field of view, and interfere with the immersive nature of the system and environment. Therefore, an adaptation or integration of the conventional 2D menu bar or system feels out of place in the virtual, immersive environment, and its look detracts from the immersive feel of the experience. Accordingly, a new control system and user interface are desired for use in systems that operate in an immersive environment.
  • Exemplary systems described herein include control systems for use in a three dimensional display or environment.
  • Exemplary embodiments include control features pinned to a user's head, pinned to location in the three dimensional world (virtual world or physical world), pinned to the system or a component thereof, pinned to a physical object within the physical world, and combinations thereof.
  • Exemplary embodiments include transitional methods to transfer the system from one control system to another control system.
  • Exemplary embodiments described herein include a number of unique features and components. No single feature or component is considered essential to the invention, and the features may be used in any combination or incorporated into any other device or system.
  • exemplary embodiments described herein are generally in terms of an augmented reality system, but features and components described herein may be equally applicable to virtual reality systems or other head mounted systems. Accordingly, headset system is intended to encompass any head mounted system including, but not limited to, augmented reality and virtual reality systems.
  • FIGS. 1A-1B illustrate an exemplary headset that may incorporate the control system described herein.
  • FIGS. 2A-2B illustrate exemplary control features according to embodiments described herein.
  • FIG. 3 illustrates an exemplary physical environment defining different viewing areas for overlay with virtual objects.
  • FIGS. 4A-4Q illustrate exemplary user experiences according to embodiments described herein.
  • FIG. 5 illustrates an exemplary display of control features within a viewing area of a user superimposed over a physical environment.
  • FIGS. 6A-6B illustrate exemplary embodiments of changing a reference frame of a control feature.
  • FIGS. 7-8 illustrate exemplary associations between a physical target object and the corresponding control feature.
  • FIGS. 9-10 illustrate an exemplary tracked object, and a plurality of control features fixed relative to the tracked object.
  • FIGS. 11A-11B illustrate an exemplary application in which a single tracked object may have different control features associated with it depending on the orientation of the tracked object relative to the headset system.
  • the headset system has a frame with a compartment configured to support a mobile device, and an optical element coupled to the frame configured to reflect an image displayed on the mobile device.
  • the headset may include an attachment mechanism between the frame and the optical element for removable and/or pivotable attachment of the optical element to the frame.
  • the headset may include alignment mechanism to position the optical element relative to the frame when engaged.
  • the headset may include a retention feature to position the inserted mobile device in a predefined, relative location to the frame.
  • the headset may include a head restraint system to couple the headset to a user's head.
  • exemplary embodiments may also be used with virtual reality, immersive, or other three dimensional display systems.
  • the applications described herein are not limited to any particular headset configuration or design.
  • Exemplary three dimensional viewing systems include a camera for receiving images in a user's field of view or approximate thereto, and a display system for superimposing a virtual image into the user's field of view.
  • FIGS. 1A-1B illustrate an exemplary headset that may incorporate the control system described herein.
  • FIG. 1A illustrates an exemplary front perspective view of an exemplary headset system 10 according to embodiments described herein.
  • FIG. 1B illustrates an exemplary rear perspective view of an exemplary headset system 10 according to embodiments described herein.
  • the headset system 10 includes a frame 12, optical element 14, and mounting system 16.
  • the headset system 10 is configured to position an inserted mobile device 18 relative to the optical element 14 and the user.
  • Exemplary embodiments of the frame 12 may include a compartment for securing the mobile device to the headset system.
  • the compartment may include a back plane, lateral edges, and a lower edge to retain the mobile device 18 within the compartment.
  • the lower edge may include a lip to limit forward movement of the mobile device relative to the compartment when the mobile device abuts the lip.
  • the compartment may include an elastic cover that deforms when the mobile device is inserted therein.
  • the frame may also include access features to permit the user to control the mobile device from outside of the frame, without having to remove the mobile device from the frame.
  • the optical element 14 is configured to reflect the displayed virtual image from a screen of the mobile device to the user to superimpose the image in the field of view of the user.
  • the optical element may include first and second portions that are mirrored.
  • the first and second portions may define a spherical lens of a single radius.
  • the first and second portions may include a layer or coating for anti-reflection, reflection, hardness, scratch resistance, durability, smudge resistance, dirt resistance, hydrophobicity, and combinations thereof.
  • the concave surface of the first and second portions includes a reflective coating.
  • the convex surface of the first and second portions includes an anti-reflective coating.
  • Exemplary embodiments of the frame 12 are configured to mate and/or retain the optical element 14.
  • the headset system may include an attachment mechanism to couple the frame to the optical element.
  • the attachment mechanism may include a first plurality of magnets in a first attachment mechanism of the frame and a second plurality of magnets in a second attachment mechanism of the optical element wherein adjacent ones of the first plurality of magnets alternate orientations such that the first plurality of magnets alternate polarity in a forward facing direction.
  • the second plurality of magnets may be positioned and oriented such that each of the second plurality of magnets aligns and mates with one of the first plurality of magnets, and the second plurality of magnets have an opposing polarity directed toward a corresponding one of the first plurality of magnets.
  • the attachment mechanism may also or alternatively include an alignment mechanism that can orient and position the optical element in a desired position relative to the frame.
  • the alignment mechanism may include mated surfaces, such that a first surface on the frame is the mated match to a second surface on the optical element.
  • the mated surfaces may, for example, include an indent and detent.
  • the head restraint system 16 may include a pair of straps extending from the frame. At least one of the pair of straps may have a taper such that a first end of the at least one of the pair of straps is thinner than a second end of at least one of the pair of straps. Each of the pair of straps may include an indentation on an inner surface of each of the pair of straps, the indentation defining an ovoid shape. At least one of the pair of straps may include a buckle such that the other of the pair of straps may be threaded through the buckle and secure the headset system to a user's head.
  • the headset system is configured to position a screen of the mobile device at an angle, away from the frame and toward the optical element, the frame configured to not obstruct light coming into a forward facing camera of the mobile device.
  • Augmented Reality (AR) and Virtual Reality (VR) systems may track objects in an environment. Tracking an object means that the object's position and orientation are defined relative to a coordinate or reference frame. Rotationally tracked systems position virtual objects in a virtual or augmented reality world through an inertial tracking system such that objects are oriented relative to the rotational position of the observer.
  • Positionally tracked systems allow for six-degree of freedom movement of a headset or virtual object by setting a reference plane to either a tracker object or the world outside of the headset as seen by one or more sensors (camera, depth, radar, paramount, etc.). Positionally tracked systems may also use an external camera or sensor to track the position of the headset relative to that camera.
  • Exemplary embodiments comprise incorporating a control system into the virtual environment tagged to the user or a body part of the user, a specific field of view of the user, the headset, a physical object within the user's environment, or combinations thereof.
  • the system includes software and hardware configured to display and project virtual control features and bring the virtual control features into a user's field of view based on the motion of the user's head, motion of the user's eyes, direction of the user's head, direction of the user's eyes, detection of a physical object within the user's environment, or combinations thereof.
  • the system may be configured to position the virtual control features outside of a user's normal field of view, and bring one or more virtual control features into a user's field of view when the user looks or turns in a predefined direction or when an object is detected or moved in a predefined configuration within the user's detectable field of view.
  • the system may also receive additional user inputs for selecting, manipulating, engaging, or otherwise activating the control features.
  • the system may also be configured to transition from one control system to another control system.
  • An exemplary embodiment of an augmented reality headset system is configured to define and/or display to a user one or more control features in a static location relative to a user's head.
  • a plurality of control features include icons or representative objects displayed to a user in a static location relative to a user's normal head orientation.
  • a normal head orientation may be considered facing directly forward in a relaxed state.
  • a normal head orientation may be any initial or first head orientation.
  • the one or more control options may include icons in a static location, such as at eye level (generally in front of the relaxed user's position in the user's field of view), above eye level (generally above the normal viewing area outside of or at a periphery of the user's field of view), below eye level (generally below the normal viewing area, positioned outside of or at a periphery of the user's field of view), or along a lateral side of eye level (generally outside of the normal viewing area).
  • Exemplary embodiments may therefore provide one or more icons, objects, text, colors, menus, lists, displays, etc. (described herein as control features) as virtual objects used for navigation, control, or other features that are at an edge of, adjacent to, or outside of a normal field of view.
  • FIGS. 2A-2B illustrate different perspective views of a model user 20, a viewing region, and a plurality of control features 22 fixed relative to the user's head.
  • the plurality of control features 22 are above the normal field of view.
  • the viewing region is represented by a planar representation of the focal plane of a normal field of view.
  • the field of view is the area or volume in which a user can see, and a normal field of view is a specific field of view providing for the user's position, head alignment, etc.
  • an exemplary normal field of view is with the user's head in a generally relaxed position, facing forward.
  • a viewing region is a space in the field of view where the virtual objects are overlaid, and is defined by the headset and display system.
  • the viewing region is illustrated as a square in space, but the perimeter may not be so geometric or discontinuous and is determined by the headset hardware, including the optical element(s) and/or display.
  • the viewing region may be within the user's field of view.
  • the plurality of control features may be anywhere relative to the head location, headset, normal viewing region, and combinations thereof.
  • the plurality of control features 22 may be positioned adjacent the normal viewing region, above the normal viewing region, below the normal viewing region, along an edge of the normal viewing region, within the normal viewing region, outside of the normal viewing region, on the transitional edge of the normal viewing region to normal non-viewing region, and combinations thereof.
  • the plurality of control features are displayed relative to a user's head above the normal viewing region.
  • the plurality of control features may resemble a complete or partial halo around the top of the user's head.
  • the plurality of control features may also be positioned around a user's lower body portion, below the head.
  • the plurality of control features may form a portion or all of a belt or hoop around a user's midsection near a user's waist.
  • the plurality of control features may not be seen by a user.
  • the control features are above the viewing area.
  • the virtual objects are not displayed to the user in the normal field of view.
  • the system may track a position of the plurality of control features, such that the plurality of control features may be displayed to the user when the viewing area encompasses one or more of the plurality of control features.
  • the system is configured to detect a user's head movement.
  • the user may access the control features by rotating the head toward the control features.
  • the system may determine motion in different ways.
  • the inserted mobile phone may include inertial sensors, GPS, geomagnetic field sensors, accelerometer, among others, and combinations thereof (referred to collectively as sensors).
  • the system may be configured to receive input signals from the one or more sensors, and use a processor configured to execute non-transitory machine readable instructions stored in memory to perform the functions described herein.
  • the system may be configured to determine a device orientation from the received sensor input.
  • the system may be configured to determine a device motion, a change in orientation, and combinations thereof to determine a transitional direction.
  • the system may be configured to use the orientation, motion, change in orientation, and combinations thereof to determine which control features to display to a user.
  • the system may use a reference frame or initial starting location in which to then relate other positions and orientations thereto.
  • the user may be instructed to put the headset on and initiate the headset with the user head position at a normal field of view.
  • the normal field of view, or the initial field of view may be used as a reference field of view to determine a frame of reference for any control features.
  • the user may initially put on the headset and look forward.
  • the system may be configured to set that orientation and position as the reference frame.
  • the system may thereafter set the control features to be positioned above or outside of the reference frame.
  • the system may then be configured to track a position, change in position, change in orientation, change in direction, and combinations thereof to determine whether control features should be brought into the field of view of the user and displayed or projected to the user.
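The bullets above describe capturing a reference orientation when the headset is initialized and then using sensor-derived changes in head orientation to decide when control features pinned relative to that reference frame should be drawn into the viewing area. The following is a minimal Python sketch of that decision; the feature names, angular placements, and viewing-region half-angles are illustrative assumptions rather than values from this application, and the sketch shows the variant in which the control features are fixed relative to the reference orientation in both pitch and yaw.

    from dataclasses import dataclass

    @dataclass
    class ControlFeature:
        name: str
        pitch_deg: float   # angle above (+) or below (-) the reference gaze direction
        yaw_deg: float     # angle right (+) or left (-) of the reference gaze direction

    # Head-pinned features: a partial "halo" placed above the normal field of view.
    FEATURES = [
        ControlFeature("open_document", pitch_deg=35.0, yaw_deg=0.0),
        ControlFeature("settings",      pitch_deg=35.0, yaw_deg=-30.0),
        ControlFeature("jump_to_page",  pitch_deg=35.0, yaw_deg=30.0),
    ]

    # Assumed half-angles of the viewing region produced by the headset optics.
    HALF_FOV_PITCH = 20.0
    HALF_FOV_YAW = 30.0

    def visible_features(head_pitch_deg, head_yaw_deg, reference_yaw_deg=0.0):
        """Return the head-pinned features that fall inside the current viewing region.

        head_pitch_deg / head_yaw_deg would come from the inserted phone's inertial
        sensors, measured against the reference orientation captured when the user
        first put on the headset and looked straight ahead."""
        visible = []
        for f in FEATURES:
            dpitch = f.pitch_deg - head_pitch_deg
            dyaw = (f.yaw_deg + reference_yaw_deg) - head_yaw_deg
            dyaw = (dyaw + 180.0) % 360.0 - 180.0          # wrap into [-180, 180)
            if abs(dpitch) <= HALF_FOV_PITCH and abs(dyaw) <= HALF_FOV_YAW:
                visible.append(f)
        return visible

    # Looking straight ahead: nothing is shown.  Looking up about 35 degrees:
    # the centered feature and its neighbors come into view.
    print([f.name for f in visible_features(0.0, 0.0)])    # []
    print([f.name for f in visible_features(35.0, 0.0)])   # all three features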
  • the system may use image recognition to identify an object in the user's field of view.
  • the system may be configured to track the object to determine a position of the control features relative to the object. For example, the system may recognize a chair in a room. When the user looks up, the chair moves toward the bottom of the user's field of view.
  • the system may be configured to display the virtual control features based on the position of the recognized object within the user's field of view.
  • the reference field of view may be determined based on recognized features in the field of view.
  • the static viewing area may be relative to a first/previous field of view, such that one or more objects may be identified within a first viewing area, and motion of the objects tracked to determine relative direction/distance to bring the control features into the field of view.
  • a user engages or views a plurality of control features by rotating in a predefined direction.
  • the predefined direction may be relative to the normal viewing area. For example, the user looks up, down, to a side, or combinations thereof.
  • the predefined direction may be relative to a reference viewing area.
  • the direction may be determined by one or more sensors to detect a position of the device, orientation of the device, direction of the device, change in position of the device, change of orientation of the device, change in direction of the device, an object in a field of view of the device, the relative position of the object within a field of view of the device, and combinations thereof.
  • the user may view the plurality of control features by simply looking up.
  • the plurality of features may become visible once the user has looked up by a certain rotational amount.
  • the rotational amount may be detected by inertial tracking and/or visual tracking.
  • the rotational amount may be detected by motion of the augmented reality headset (i.e. inertial tracking from accelerometer, GPS, gyroscope, compass, etc.), detected by comparison of movement of recognized objects in a current field of view compared to a previous field of view, and combinations thereof.
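As noted above, the rotational amount that triggers display of the control features may be detected inertially, visually (by comparing the movement of recognized objects between fields of view), or by a combination of both. Below is a small illustrative sketch of such a combination; the small-angle pixel-to-degree conversion, the blending weight, and the 20 degree threshold are assumptions made only for the example.

    def rotation_from_object_shift(pixel_shift_down, image_height_px, vertical_fov_deg):
        """Estimate upward head rotation from how far a recognized object moved down
        in the camera image between two frames (small-angle approximation): when the
        user looks up, tracked objects drift toward the bottom of the view."""
        return pixel_shift_down * (vertical_fov_deg / image_height_px)

    def fused_rotation(inertial_deg, visual_deg, visual_weight=0.3):
        """Blend the inertial (gyroscope) estimate with the visual estimate."""
        return (1.0 - visual_weight) * inertial_deg + visual_weight * visual_deg

    # A recognized object moved 300 px toward the bottom of a 1080 px tall image
    # with a 60 degree vertical field of view: roughly a 16.7 degree upward look.
    visual = rotation_from_object_shift(300, 1080, 60.0)
    looked_up_enough = fused_rotation(22.0, visual) >= 20.0   # illustrative threshold
    print(round(visual, 1), looked_up_enough)                 # 16.7 True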
  • the plurality of control features is pinned to a user's head.
  • the user may simply look up and see the plurality of user features as centered on the user and centered in the elevated field of view. The user may then access additional control features by turning side to side.
  • Pinned to the user's head means that the plurality of control features are positioned relative to the user's head. Therefore, no matter the initial direction of the user, the user can look up (or any predefined direction) relative to a starting position and see the same perspective (such as the centered view) of the plurality of control features. Additional rotation in the same or different directions can bring in additional control features.
  • Exemplary embodiments may be used with other tracking systems, such that even as the headset translates through a physical world, such as when the user walks through a room, or whether additional virtual objects are tracked to the physical environment or other reference frame, the control features may be available relative to the user by simply looking in the predefined direction.
  • control features may circumscribe the user's head.
  • the plurality of control features may be continuous or discrete virtual objects virtually positioned 360 degrees around the user's head.
  • control features may be contained in a plane.
  • the plane may be horizontal.
  • the plurality of control features may be contained in parallel planes.
  • Control features in a plane may define a partial or full circular or ovoid position about the user's head.
  • Control features may be positioned at discrete locations or continuously in a dome shape over the user's head.
  • Exemplary embodiments may position control features relative to the user by a priority ranking.
  • the most accessed, most common, most useful, most desired, or other ranking condition may be positioned in a center position, such that a single rotation out of the normal viewing area is needed to access it.
  • a high priority control feature may be positioned directly above the normal viewing area.
  • Lesser control features may be positioned such that additional directional movement is required, such as further in the same or different directions.
  • lower priority control features may be positioned further above or to the side of the high priority control feature. Therefore, the control features may be positioned in a three dimensional position based on a hierarchy relative to the user, present field of view, normal viewing area, and combinations thereof.
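The priority-based placement described above can be reduced to a simple layout rule: the highest-ranked control feature sits directly above the normal viewing area, and lower-ranked features require additional movement in the same or other directions. A sketch of one such rule follows; the ordering input, the base pitch, and the yaw step are illustrative assumptions, not values taken from this application.

    def place_by_priority(feature_names, base_pitch_deg=35.0, yaw_step_deg=25.0):
        """Assign head-relative angular positions by priority order.

        feature_names is assumed to be ordered highest priority first.  The highest
        priority item sits directly above the normal viewing area; lower priority
        items fan out alternately to the right and left of it."""
        placements = {}
        for rank, name in enumerate(feature_names):
            if rank == 0:
                yaw = 0.0
            else:
                side = 1 if rank % 2 == 1 else -1            # alternate right/left
                yaw = side * yaw_step_deg * ((rank + 1) // 2)
            placements[name] = {"pitch_deg": base_pitch_deg, "yaw_deg": yaw}
        return placements

    print(place_by_priority(["open_document", "settings", "wifi", "brightness"]))
    # highest priority at yaw 0, then +25, -25, +50 degrees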
  • Exemplary embodiments described herein generally refer to being positioned or pinned to a user's head, however, the disclosure is not so limited. Exemplary embodiments may include a relational position based on a user's body part, such as the head, waist, mid-section, a system component such as the headset or system controller, environmental component such as a detected object, and combinations thereof.
  • Exemplary control features include, for example, icons, menus, text, lists, objects, pictures, applications, etc.
  • Exemplary control features may be used to create or define menus such as system menus (wifi, airplane mode, brightness, universal music controls), application-specific menus (for example, the file/edit/view/help top menu in a traditional Windows/Mac program), or contextual menus (specifically relating to an object a user may be interacting with, whether physical or virtual).
  • Exemplary control features may also be applications and the system described herein is used as an application launcher. Therefore, instead of selecting or expanding a menu item, an application can be selected and launched.
  • Exemplary embodiments may also track a user's direction of eyes relative to the head, such that a user may see one or more control features by keeping their head still, but looking into the peripheral field of view around a normal viewing area in the predefined direction.
  • the user's head or eye direction may be used to control the display of control features for the user, but not necessarily tag a control feature to the user, user's head, and/or headset.
  • the system may not have static control features tagged to a user's head, such as the configurations illustrated herein with respect to FIGS. 2A-2B. Instead, the system may simply display control features in any display pattern when the user looks in a predefined direction (either by moving their eyes or their head). Therefore, a user may turn their head upwards by raising their chin, which activates the control function display.
  • the control function display may bring down control features into a field of view, or may simply have control features appear or translate into the field of view from any direction or position to any final location within the field of view.
  • An exemplary system comprises a virtual reality or augmented reality headset configured to display virtual objects and control features to a user.
  • the virtual objects and control features appear in a user's field of view.
  • the system may include software (a non-transitory machine readable medium) stored in memory that, when executed by a processor, performs the functions described herein, including displaying one or more control features to a user when the user changes a field of view in a predetermined direction.
  • the predetermined direction is upwards.
  • the system displays the plurality of control functions oriented relative to a user's head or the headset.
  • the system displays the plurality of control functions oriented relative to a user's normal field of view.
  • the user may navigate through a plurality of control features by rotating the user's head.
  • the user may activate one or more of the plurality of control features in different ways.
  • the user may bring the desired control feature into a predefined viewing area.
  • a static target area may be displayed to the user.
  • the static target area may be static to the viewing space and not relative to any specific field of view.
  • a target area may be displayed at the center of the user's viewing area.
  • the user may activate an item by bringing the control feature into the target viewing area.
  • the user may activate the item by hovering or bringing the control feature into the target viewing area for a predetermined amount of time.
  • the target viewing area may be identified to a user with a virtual overlay displayed to the user.
  • the target viewing area may not be visually identified to a user.
  • the target viewing area may be identified to a user with a virtual overlay displayed when a control feature is in proximity to the target viewing area, but otherwise may not be visually identified, for example when no control feature is within the user's viewing area or within a predefined proximate distance of the target viewing area.
  • a user in order to select, modify, interact, control, manipulate, adjust, launch, or otherwise engage (hereinafter referred to collectively as activate) a control feature, a user can make a selection through the movement of the user's head or eyes.
  • the selection through user visual direction may be used in various combinations as described herein.
  • a selection to activate a control feature may include first bringing the control feature to a predefined portion of the user's field of view. For example, a user may identify a plurality of available control features. The user may then move such that a specific control feature is brought into a specific location of the viewing area. For example, the control feature may be brought into the center of the user's field of view or the center of the viewing area. Once at the predefined portion of the viewing area, the system may activate the control feature based on the movement of the user.
  • the user may activate the control feature (such as to launch an application associated with the control feature) by then looking or moving the head in a first predefined direction or may cancel or not activate the control feature by looking or moving the head in a second predefined direction different from the first direction.
  • the selection of the control feature based on movement may occur within the lapse of a predetermined amount of time.
  • the user may look further upward to activate the control feature, thereby bringing the object out of the target viewing area in a defined direction. If the user looks up, the control feature would leave the lower portion of the target viewing area.
  • if the control feature leaves the lower portion of the target viewing area, or if the system detects head or eye movement in an upward direction, the system may then activate the control feature. Conversely, if the user looks downward and the control feature leaves the target viewing area on an upward perimeter of the area, then the control feature may not be activated. Any predefined combination of first and second directions may be used to indicate the selection or non-selection of the control feature.
  • different methods may be used to distinguish an intention to activate or cancel a control feature versus an intention to move the position of the displayed virtual objects.
  • the speed of the user's motion of either the head or the eye(s) may be used to distinguish an intention to activate a control feature versus an intention to move the positional display of the virtual objects.
  • a specific speed may be used to identify a purposeful selection and separate the movement from natural movement or a desire to not make a selection.
  • the position of the virtual object within the target viewing area may also be used to distinguish an intention to activate a control feature versus an intention to move a position of the virtual object within the display. For example, once a control feature is within a target viewing area, the control feature is selected to either activate or cancel.
  • Any further action of the user is for either activating or cancelling the control feature.
  • the specific direction of motion after a control feature is within the target viewing area may also identify intention. For example, once the control feature is in the target viewing area, further motion in a specific direction may activate the control feature, while all other directions would continue to simply control the positional display of the virtual objects overlaid on the perception of the physical environment.
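The bullets above describe several cues for deciding whether motion is meant to activate a control feature that has entered the target viewing area: how long the feature dwells there, the speed of the head or eye movement, and the specific direction in which the feature leaves the area. The sketch below combines those cues in one small state machine; the class name, the dwell time, the speed threshold, and the choice of an upward movement as the activating direction are assumptions made for illustration.

    import time

    class TargetAreaSelector:
        """Dwell-and-direction activation for a control feature in the target area.

        A feature that stays inside the target viewing area for dwell_s seconds is
        treated as selected; a subsequent deliberate (fast enough) upward head
        movement activates it, while leaving in any other direction cancels."""

        def __init__(self, dwell_s=1.0, intent_speed_deg_s=40.0):
            self.dwell_s = dwell_s
            self.intent_speed_deg_s = intent_speed_deg_s
            self.entered_at = None

        def update(self, feature_in_target, exit_direction=None, head_speed_deg_s=0.0):
            """Call once per frame; returns 'activate', 'cancel', or None."""
            now = time.monotonic()
            if feature_in_target:
                if self.entered_at is None:
                    self.entered_at = now          # start the dwell timer
                return None
            if self.entered_at is None:
                return None                        # feature was never in the target area
            dwelled = (now - self.entered_at) >= self.dwell_s
            self.entered_at = None
            if not dwelled:
                return None                        # passed through too quickly to count
            deliberate = head_speed_deg_s >= self.intent_speed_deg_s
            if exit_direction == "up" and deliberate:
                return "activate"                  # user looked up; feature left through the lower edge
            return "cancel"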
  • the target viewing area may be displayed to a user.
  • the target viewing area may be displayed before, after, and/or while a control feature is within the target viewing area, may be displayed after or while a control feature is within a viewing area, at all times, or never.
  • a circle may be displayed to a user to indicate the target viewing area.
  • the predefined direction may also be visually indicated.
  • a perimeter or direction of the target viewing area may indicate the direction to activate the control feature.
  • a portion of the perimeter of the circle may change colors and/or be accompanied by words, colors, arrows, or other indicators to inform a user of a direction to activate the control feature.
  • the system may display any number of selection options, each associated with a predefined direction, that a user may select simply by looking or turning in that direction.
  • additional control features are presented to the user.
  • the additional control feature(s) may be positioned around the control feature such that each additional control feature corresponds to a unique direction relative to the control feature. Therefore, the additional control features may be directionally arranged around the original control feature. A user may thereafter select an additional control feature by moving the head or eyes toward (essentially looking at) the desired additional control feature.
  • the additional control feature may be indications of intent to activate and not to activate an associated option/application of the control feature.
  • the user may use another input device to activate a control feature.
  • a controller such as a remote, button, joystick, keyboard, mouse, etc. may be used to provide an input to the system.
  • the system may recognize a selection when a predefined combination of inputs is detected.
  • the system may detect one control feature in the field of view and a control input, such as a push of a control button, to determine a selection was made for the one control feature in the field of view.
  • the system may detect one control feature in a target location within the field of view and a control input, such as a push of a button, to determine a selection was made for the control feature in the target location.
  • other methods of activating control features may also be used, such as recognition of a pointer, object, hand, etc. in the field of view aligned with the control feature.
  • a user may physically select a control feature by pointing to its virtual location represented in physical space.
  • the system may receive location information of the object, such as by using a camera, detect the object as a control selection object, and determine that the control selection object is pointing at a specific control feature.
  • Control selection objects may be specifically defined and recognized objects, or may be any object detected as proximate the physical location of the virtually represented control feature.
  • Other gestures of the object may also be used, such as swiping across, circling, or otherwise indicating a selection.
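A physical pointer, hand, or other control selection object can be matched to a control feature by comparing the tracked object's position with the points in space where the virtual control features appear to sit, as described above. Below is a minimal proximity-based sketch; the coordinate values, the distance threshold, and the function name are illustrative assumptions.

    import math

    def find_pointed_feature(pointer_xyz, feature_positions, max_distance_m=0.05):
        """Return the control feature whose perceived physical location is closest
        to the tracked pointer/hand/object, if it is within max_distance_m.

        pointer_xyz would come from camera-based object tracking; feature_positions
        maps feature names to the 3D points (in the same headset coordinate frame)
        where their virtual representations appear to sit."""
        best_name, best_dist = None, float("inf")
        for name, (fx, fy, fz) in feature_positions.items():
            d = math.dist(pointer_xyz, (fx, fy, fz))
            if d < best_dist:
                best_name, best_dist = name, d
        return best_name if best_dist <= max_distance_m else None

    features = {"open_document": (0.0, 0.4, 0.6), "close_document": (0.2, 0.4, 0.6)}
    print(find_pointed_feature((0.01, 0.41, 0.60), features))   # 'open_document'
    print(find_pointed_feature((1.0, 1.0, 1.0), features))      # None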
  • the input device may be used to bring control features into view.
  • virtual control features may be tagged to a user's head or the headset.
  • the control features may translate, rotate, or otherwise appear in the field of the user.
  • the virtual objects are displayed to the user based on an initial user input from a user.
  • the control features are dropped or otherwise moved from their resting position tagged to the user's head into an active position for display and use by the user.
  • the user can select or define viewing areas in which to display virtual objects.
  • the system may be configured to launch or display virtual objects in relation to a position of a control feature that launched or is associated with a virtual object.
  • a control feature may be pinned to a user's head, such that a control feature is displayed as a user looks up.
  • the user may therefore define a rotation position in which to activate a control feature.
  • the associated virtual object related to the control feature can thereafter be displayed in relation to the activated control feature.
  • the associated virtual object may therefore be displayed to a user below the control feature that launched the virtual object.
  • the user may therefore rotationally position the virtual object by first rotating to a desired position before activating a control feature.
  • Exemplary embodiments may therefore be used to display reference material to a user in a field of view outside of a working field of view of a user.
  • the user may be working on an object in a working field of view.
  • the user may wish to display reference material, such as building instructions, user guide, or other information in another field of view outside of the working field of view.
  • the user may simply look to the left or right to retrieve the desired information without obstructing the field of view in which the user is working.
  • the user may therefore look back into the working field of view to remove the virtual objects from their viewing area.
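The preceding bullets describe launching a virtual object in relation to the control feature that launched it, so that a user can rotationally position reference material outside the working field of view before activating the feature. A one-function sketch of that placement follows; the angular values are assumptions for the example.

    def spawn_below_feature(feature_pitch_deg, feature_yaw_deg, drop_deg=35.0):
        """Place a launched virtual object below the control feature that launched it.

        The object keeps the yaw at which the user activated the feature, so a user
        who first rotates to the right and then looks up to activate the feature
        gets the document displayed to the right of the reference viewing area."""
        return {"pitch_deg": feature_pitch_deg - drop_deg, "yaw_deg": feature_yaw_deg}

    # Activated while the user was rotated 40 degrees right of the reference view:
    print(spawn_below_feature(35.0, 40.0))   # {'pitch_deg': 0.0, 'yaw_deg': 40.0}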
  • FIGS. 4A-4Q illustrate an exemplary stepwise view of using an exemplary control system according to embodiments described herein.
  • FIG. 3 illustrates an exemplary background that a user can perceive in a physical environment across multiple fields of view, creating a panoramic view of a physical environment.
  • the dotted lines are representative of a physical environment and illustrate various straight and curved lines to orient the different views of a user.
  • the boxes illustrate exemplary viewing areas of a user based on different fields of view. For example, box A may be taken as the normal viewing area or reference viewing area. Box B would then represent a viewing area available when a user looks up from the field of view encompassing the reference viewing area.
  • Box C would be a viewing area when a user turns to the side or the right from the normal viewing area
  • Box D would be a viewing area when a user looks up from the Box C viewing area.
  • FIG. 3 is provided simply to illustrate different background environmental areas associated with different viewing directions that may be used to identify and control the control features described in FIGS. 4A-4Q.
  • FIG. 4A illustrates an exemplary physical environment as seen through a viewing area of an augmented reality headset according to embodiments described herein.
  • FIG. 4A represents the physical environment in dashed lines; the box is an exemplary viewing area in which virtual objects may be superimposed over the physical environment. As illustrated in FIG. 4A, there are no virtual objects superimposed in the viewing area.
  • FIG. 4A may represent a reference viewing area.
  • the reference viewing area may be associated with a normal viewing position of the user.
  • the normal viewing position may be defined by the user looking forward with a level head in a first direction.
  • FIG. 4B illustrates a transition as the user looks upward from the reference viewing area of FIG. 4A.
  • a control feature 22A appears in the viewing area and is brought down into the user's viewing area corresponding with the direction the user is looking.
  • the control feature 22A is brought down into the viewing area.
  • the system may also display other control features.
  • a target viewing area may be displayed, or instructions on how to navigate or launch a control feature may be displayed.
  • FIG. 4C illustrates the viewing area B of FIG. 3 with a control feature displayed.
  • the control feature has moved to the center of the viewing area based on the movement, direction, orientation, or other input of the system or user as described herein.
  • a target viewing area may be displayed to indicate that a control feature is in an area to launch or select the control feature.
  • the illustrated target viewing area 42 may be animated, change, or otherwise provide an indication that a selection is being made. For example, when the control feature is brought within the target viewing area for a preselected amount of time, the control feature may be launched or selected.
  • a representation of the target viewing area may therefore change size, shape, color, move, or otherwise indicate that the preselected amount of time is running to make a selection. The user may then cancel the selection by moving the control feature out of the target viewing area before the conclusion of the preselected amount of time.
  • the representation of the target viewing area may be any representation including a shape, area, dot, etc.
  • FIG. 4D illustrates the transition from FIG. 4C to FIG. 4E.
  • the user has selected the control feature of FIG. 4C and launched the associated application.
  • the control feature is a launch document feature that, when selected, opens a document.
  • the control feature may change.
  • the control feature changes from the open document control feature to a close document control feature. Therefore, once a control feature is activated, a corresponding counter control feature may replace the activated control feature to undo the action of the activated control feature.
  • the opened document may be displayed or brought into the viewing area of the system.
  • FIG. 4E illustrates the viewing area of FIG. 4A with a virtual object 44A overlaid in the physical field of view of the user.
  • the virtual object 44A may include one or more control features 22C, 22D as well.
  • the virtual object 44A is a document opened by an open document control feature.
  • the additional control feature may therefore be a move to sequential pages control feature 22C or jump to a selected or specific page control feature 22D.
  • the user may select one or more of the control features by moving the control feature into the target viewing area.
  • the target viewing area is the center of the viewing area.
  • FIG. 4F illustrates an example of the effect of the virtual object within the viewing area as the user looks in a different direction.
  • FIG. 4F illustrates a transition from viewing area A to viewing area C.
  • FIG. 4F also illustrates an exemplary target viewing area 42.
  • the target viewing area may always be displayed or may only be displayed as a control feature is brought within a predefined range of the viewing area.
  • the predefined range may be that the control feature overlaps or is fully within the viewing area.
  • the representation of the viewing area may change based on the proximity of a control feature.
  • the target viewing area may be represented by a smaller, more transparent, less obtrusive color, or combinations thereof when the control feature is outside of or away from the target viewing area.
  • the control area may be represented by a larger, more visible, different color, or combinations thereof when the control feature is within or proximate to the target viewing area.
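The representation of the target viewing area described above can be driven directly by the proximity of the nearest control feature: small and faint when nothing is nearby, larger and more visible as a feature approaches. A short sketch of one way to do that; all numeric values and the dictionary-style return are assumptions for illustration.

    def target_area_style(distance_deg, near_deg=5.0, far_deg=15.0):
        """Choose how the target viewing area is drawn based on the angular distance
        of the nearest control feature.  Far away: hidden; approaching: drawn small
        and transparent; close: larger and more opaque."""
        if distance_deg >= far_deg:
            return {"visible": False}
        # Interpolate between the 'far' and 'near' appearance.
        t = max(0.0, min(1.0, (far_deg - distance_deg) / (far_deg - near_deg)))
        return {
            "visible": True,
            "radius_deg": 1.0 + 2.0 * t,     # grows as the feature approaches
            "opacity": 0.2 + 0.6 * t,        # becomes more visible
        }

    print(target_area_style(20.0))   # {'visible': False}
    print(target_area_style(4.0))    # fully grown and most opaque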
  • FIG. 4G illustrates the transition after FIG. 4F once a user has selected the sequential page control feature of FIG. 4F. As illustrated the virtual object has changed to the next page of the document.
  • FIG. 4H illustrates the virtual object overlaid on the physical object as the user looks in a different direction such as toward the jump to a selected or specific page control.
  • FIG. 4H illustrates an exemplary target viewing area 42.
  • the representation of the target viewing area may depend on the control feature within or proximate to the target viewing area.
  • the jump to a selected or specific page control includes a plurality of control features in close proximity.
  • the displayed target viewing area may therefore be represented by a smaller area in order to delineate and accurately select one control feature from an adjacent control feature.
  • the target viewing area may be larger or less accurately defined.
  • FIG. 4I illustrates the control feature brought into the viewing area when a user looks up from FIG. 4H.
  • the control feature may be a control feature to undo the activated control feature.
  • the control feature is that to close a document.
  • FIG. 4K illustrates the control feature as displayed to a user once the user has selected the control feature from FIG. 4J.
  • the control feature is static and does not change.
  • the control feature may disappear once activated, where the control feature cannot be activated more than once.
  • FIG. 4L illustrates the field of view of the user after selecting the control feature of FIG. 4J.
  • the selected control feature of FIG. 4J closed the document and therefore takes the virtual object out of the viewing area of the user.
  • the FIG. 4L viewing area is that to the right of the reference viewing area, or down and to the right of the control feature viewing area of FIG. 4J. From the position of FIG. 4L, the user can then look up and see the same control feature orientation. Therefore, even if rotated from the original reference viewing area, the control features may be displayed relative to the user's head. In this way, the user can look up from any orientation and launch the control feature system. If more than one control feature is available, the user may then look to one side or the other, or further up or down, to bring additional control features into view or into a target viewing area according to embodiments described herein.
  • FIG. 4N illustrates an exemplary virtual overlay once the user has selected the control feature of FIG. 4M.
  • the control feature of FIG. 4M includes the open document control feature, which launches a document.
  • the launched document is opened below the control feature that launched the document. Therefore, the document appears in the target viewing area rotated from the reference viewing area, since the control feature launching the document was activated when the user was rotationally oriented with respect to the reference viewing area.
  • the user may define viewing areas to display virtual objects in relation to the user and/or the user's environment.
  • FIG. 4O illustrates an exemplary representation of the reference viewing area without a displayed or overlaid virtual object, as the virtual object was launched rotationally out of the field of view of the reference viewing area.
  • FIG. 4P illustrates the display of the control feature from any viewing area. Therefore, FIG. 4P illustrates, again, the control feature displayed in relation to a user's head once the user looks up from the FIG. 4O position. The user can then select the control feature of FIG. 4P, and close the document.
  • FIG. 4Q therefore illustrates the viewing area of FIG. 4N without the displayed virtual object.
  • Exemplary systems described herein include display and control systems for use in a three dimensional display or virtual environment.
  • Exemplary embodiments include control features rotationally and positionally tracked to create unique and engaging interfaces.
  • a control feature is rotationally tracked, such as pinned to a virtual reality or augmented reality headset, and then selected, modified, interacted with, controlled, manipulated, adjusted, engaged (collectively referred to herein as activated) through use of a positionally tracked object.
  • Exemplary embodiments therefore include a transition between different tracked reference frames for the same virtual control feature.
  • a first reference frame is relative to the headset, and a second reference frame is relative to a physically tracked object.
  • any combination of different reference frames may be used, such as pinned to a physical environment, pinned to a physical object, pinned to the headset, etc.
  • Exemplary embodiments include selection, orientation, viewing, and presentation of one or more control features controlled by relative movement of the target object relative to the field of view, headset, another tracked object, a tracked world/reference system, or other visual or non-visual tracking systems.
  • a reference frame is any reference system for orienting, positioning, or otherwise locating an object.
  • the reference frame does not necessitate a full six degrees of freedom, but may include any combination of position and orientation, such as linear distance in one, two, or three dimensions, and/or rotational orientation about one, two, or three axes.
  • a reference frame includes a point object in which a two dimensional translational distance can be measured.
  • a reference frame also includes more complex relationships in one, two, or three dimensional reference system including both translational and rotational positioning and orientation.
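A reference frame as defined above can be anything from a tracked point to a full six-degree-of-freedom pose. The sketch below shows a deliberately reduced form, a planar translation plus a rotation about the vertical axis, to illustrate the idea of expressing a control feature's position relative to a frame; the class name and the two-dimensional simplification are assumptions, not the application's definition.

    from dataclasses import dataclass
    import math

    @dataclass
    class ReferenceFrame:
        """A reduced reference frame: a 2D translation plus a rotation about the
        vertical axis, used here only to illustrate expressing a point relative
        to a frame."""
        x: float = 0.0
        z: float = 0.0
        yaw_deg: float = 0.0

        def to_world(self, local_x, local_z):
            """Map a point expressed in this frame into world coordinates."""
            a = math.radians(self.yaw_deg)
            wx = self.x + local_x * math.cos(a) - local_z * math.sin(a)
            wz = self.z + local_x * math.sin(a) + local_z * math.cos(a)
            return wx, wz

    frame = ReferenceFrame(x=1.0, z=2.0, yaw_deg=90.0)
    print(frame.to_world(1.0, 0.0))   # approximately (1.0, 3.0)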
  • An exemplary embodiment is configured to define and/or display to a user one or more control features in a location relative to a given reference frame.
  • An exemplary reference frame may be one, two, or three dimensional.
  • the control features may or may not depend on the rotational orientation of the user from the normal viewing area. For example, as illustrated in FIGS. 4J and 4M, the same control features are displayed regardless of whether the user looks up from the normal viewing area A or from a rotated viewing area C. Therefore, the control features are present in a static location relative to a one dimensional reference frame (i.e. up and down).
  • exemplary embodiments include the display of the control features dependent on the rotational orientation of the user relative to the normal viewing area.
  • the control features may be in a static location relative to a two dimensional reference frame.
  • Exemplary embodiments also include transitioning a virtual object from the first reference frame to a second reference frame to activate the control feature. Any combination of tracking control features to a given reference frame, including the first and second reference frames, may be used with embodiments of the present disclosure.
  • a virtual control feature is provided to a user in a virtual or augmented reality environment, where the virtual control feature is tracked relative to a first reference frame.
  • the first reference frame may be to the headset creating the virtual world.
  • the control feature(s) and reference frame may be those as described herein, such as those of FIGS. 2-4. Therefore, in one embodiment, a menu of control features may appear in a 360 degree or partial 360 degree arrangement at any height or distance around the user.
  • other arrangements of control features and their location, orientation, presentation, etc. are also contemplated herein.
  • a shelf or grid arrangement may be provided to one or more lateral sides of the viewer's field of view. Therefore, a plurality of control features may appear in a shelf-like arrangement.
  • control features are pinned to a first reference frame.
  • control features are in reference to the headset, such that their position is rotationally tracked as the user moves their head rotationally.
  • the control features are fixed in reference to the viewing area, such that their position stays in the same place as viewed by the user regardless of the movement of the user's head or environment.
  • Exemplary embodiments include a combination of reference systems for virtual objects, such that control features described herein may be in reference to the first reference frame, while other virtual objects and/or control features creating the virtual world may be in relation to another reference frame, such as the physical environment.
  • an immersive environment may be created with respect to the augmented or virtual world, while providing a set of control features such as for controlling the system that are in reference to another reference frame, such as the headset or the field of view of the user for easy identification, viewing, retrieval, etc.
  • a menu system may easily be "found" simply by looking in a predefined direction, such as up, to activate the virtual settings or environment of the AR or VR system.
  • in order to activate a control feature, a user can change the reference frame from a first reference frame to a second reference frame.
  • the second reference frame may be to a positionally tracked real-world object, such as a target object. Exemplary embodiments of what can be done with a control feature once it is tracked to a target object are described herein.
  • the user changes the control feature's reference frame by bringing the target object near the perceived physical location of the displayed virtual control feature and attaching the virtual control feature to the physical target object. The relative position and/or orientation of the control feature may thereafter be maintained relative to the physical target object. For example, a globe may be the control feature.
  • a target physical object may be positioned, as perceived by a user, under the globe.
  • the association of the target physical object to the virtual globe object may be made.
  • the globe may then be brought closer to the user and rotated to bring the different sides of the globe into the field of view of the user by moving the physical target object closer to the user and rotating the physical target object.
  • the transfer or attachment of the control feature to the target physical object may present a different virtual object/display to the user.
  • the control feature may be displayed as an icon for a map when tracked to the first reference frame.
  • when attached to the target physical object, the map application is launched and the virtual object attached to the target physical object is not the icon but the launched application, including a representation of a map of an area.
  • the control feature therefore need not maintain a common or the same appearance to a user when transitioning between, or when in, a first reference frame or a second reference frame.
  • the association of the target physical object to the virtual control feature or the change of the first reference frame to the second reference frame may occur through different control mechanisms.
  • a user may move the physical target object to a predefined relative location to the virtual control feature and maintain the relative position thereto for a predetermined amount of time. After the predetermined amount of time has lapsed, then the virtual object attaches to the physical target object, and the reference frame handoff occurs.
  • an indication of the time association may be provided to a user. Therefore, before an association is made, a visual or audio indicator may be provided to the user. For example, a visual signal may be shown to the user indicating the amount of time before the transition between reference frames, or the association to the physical target object is made. For example, an audio signal such as a beep or music may sound before or when an association is made.
  • a target viewing area may be associated with each control feature. Instead of using the direction of the head and/or eye to position the control feature within the target viewing area, a physical target object may be brought to or within the target viewing area to indicate a transfer between reference frames for the selected control feature.
  • the target viewing area may be displayed to a user or not.
  • the target viewing area may provide an indication of the time elapse.
  • the target viewing area may be presented at different times or in different ways depending on the proximity of the physical target object to the target viewing area.
  • the virtual environment may change or provide an indicator that a physical target object is within a target viewing area associated with a control object.
  • the association of the physical target object to the control feature may be through a time elapse of the physical target object within the target viewing area for a predetermined amount of time.
  • the association of the physical target object to the control feature may be by a directional navigation of the physical target object with respect to the control feature and/or in or out of the target viewing area associated with the control feature.
  • a user may move the physical target object adjacent or near a virtual control feature and provide an external input to the system such as a spoken command, a control input, or other indicator that an association should be made.
  • a control input may be a push of a button, swipe, or other gesture on an input device, or any other input.
  • the control input may be associated with or located on the physical target object or may be separate therefrom.
  • the physical target object may include a button or physical space that when pushed (or obstructed) as perceived by the VR or AR system, a selection is made. Separate controlling devices may also be used.
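A minimal sketch of the association mechanisms described in the preceding bullets, assuming illustrative names and a 1.5 second dwell period: when the tracked target object stays within a control feature's target viewing area for the predetermined time, or an explicit control input arrives while it is there, the feature's reference frame switches from the headset (first reference frame) to the target object (second reference frame). The returned progress value could drive the visual or audio indicator mentioned above.

```python
import time
from typing import Optional

DWELL_SECONDS = 1.5  # assumed "predetermined amount of time"

class ControlFeature:
    """Tracks which reference frame a virtual control feature is pinned to."""

    def __init__(self, name: str):
        self.name = name
        self.reference_frame = "headset"             # first reference frame
        self._dwell_start: Optional[float] = None

    def update(self, target_in_viewing_area: bool, external_select: bool = False,
               now: Optional[float] = None) -> float:
        """Advance the association logic; returns dwell progress in [0, 1]."""
        now = time.monotonic() if now is None else now
        if external_select and target_in_viewing_area:
            self._attach()                           # explicit input: button, voice, gesture
            return 1.0
        if not target_in_viewing_area:
            self._dwell_start = None                 # target left the area: reset the indicator
            return 0.0
        if self._dwell_start is None:
            self._dwell_start = now
        progress = min((now - self._dwell_start) / DWELL_SECONDS, 1.0)
        if progress >= 1.0 and self.reference_frame == "headset":
            self._attach()                           # dwell elapsed: hand off reference frames
        return progress                              # can drive a visual or audio countdown

    def _attach(self) -> None:
        self.reference_frame = "target_object"       # second reference frame
```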
  • FIG. 5 illustrates an exemplary display of control features within a viewing area of a user superimposed over a physical environment.
  • the dashed lines indicate features of the physical environment, while the solid lines indicate virtual objects superimposed in the perception of the viewer over the physical environment.
  • the control features 22 may be positioned in reference to a first reference frame. For example, the icons arranged about a user's head may be used.
  • a reference system may be to the target viewing area such that the control features are displayed regardless of the viewing area and/or direction of the user relative to the physical environment.
  • a reference system may also be with respect to the user and/or headset, such that the control features are brought into and out of the viewing area by rotational or directional orientation of the headset or the user's body.
  • FIG. 6A illustrates an exemplary embodiment in which a control feature is selected from among the plurality of control features and pinned to a target physical object.
  • the selected control feature 22E transitions from a first reference frame to a second reference frame.
  • the second reference frame is that of the physical target object 62.
  • the transition from the first reference frame to the second reference frame may be through any mechanism described herein, such as through an elapse of a predetermined amount of time with the physical target object within a target viewing area associated with the control feature, the presence of the physical target object in the viewing area while a control feature is within the viewing area, an input into the system to indicate the transition, the directional, orientational, rotational, or other intentional manipulation of the target physical object or other object within the detection of the headset system, and combinations thereof.
  • the target physical object is brought in proximity to the control feature and within the target viewing area.
  • the control feature to be selected 22E may change appearance when the physical target object is brought within the associated target viewing area.
  • the shading, color, size, or other feature of the control feature may change when the physical target object is within the target viewing area of the control feature.
  • Other indications of the association or potential association may also be provided to the user.
  • a virtual object or indicator may appear to show that the physical target object is within the target viewing area.
  • an indication of the target viewing area, a time elapse, or other indicator may show that the transition from the first reference frame to a second reference frame is in progress, etc.
  • FIG. 6B illustrates the control feature 22E pinned to the physical target object 62.
  • the manipulation of the physical target object may activate the control feature.
  • translation and/or rotation of the physical target object may also translate and/or rotate the selected control feature.
  • the associated control feature is moved from its original position with reference to the first reference frame and is moved within the viewing area by its association to the physical target object.
  • FIG. 6B illustrates the association of one of the control features with the target object, such that the control feature has been removed from the first reference frame (rotationally tracked to the headset) and pinned to a second reference frame (the positionally tracked target object).
  • FIGS. 7-8 illustrate exemplary associations between the physical target object and the corresponding control feature.
  • the rotation of the physical control object may be used to enlarge or reduce a size of the control feature.
  • the rotation of the physical control object may rotate the selected control feature.
  • Exemplary embodiments of a physical target object include any physical object that may be recognized by the system.
  • the target object defines a non-symmetric, identifiable, recognizable image. Therefore, the system may recognize the object as a target physical object and may also recognize and/or detect the orientation, rotation, translation, and other manipulation of the target object in the physical space.
  • Exemplary embodiments include methods of controlling a virtual reality or augmented reality system.
  • the method may include providing one or more virtual control features within a field of view of a user, where the one or more virtual control features are pinned to a first reference frame.
  • the first reference frame may be a rotational reference frame relative to the headset.
  • a user may thereafter move a physical target object to a predefined position relative to one of the virtual control features.
  • a virtual representation of the physical target object or a virtual target location may be displayed in the user's field of view. This virtual representation may be used to select one of the virtual control features.
  • the user may thereafter move the physical target object in proximity to one of the virtual control features.
  • the virtual control feature may be removed from the first reference frame and pinned to a second reference frame. The user may thereafter move the physical target object to move the control feature.
  • when the physical target object is in a predefined relative location to the control feature, the virtual control feature may be removed from the first reference frame and pinned to a second reference frame.
  • the virtual representation of the physical target object or the virtual target location may indicate when the physical target object is in proximity to the virtual control feature.
  • the virtual representation or virtual target location may change colors, shapes, size, etc.
  • An audible indication may also or alternatively be provided.
  • the virtual representation is a first color when the physical target object is recognized by the system.
  • the virtual representation is a second color when the transition from the first reference frame to the second reference frame is about to take place.
  • the system may gradually transition from the first color to the second color, such that the user may identify when the transition may be complete.
  • the system may also stepwise transition between the first color and second color, such as to act as a count down (for example by transitioning from yellow, to orange, to red in a predefined amount of time).
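The countdown indication described above could be rendered, for example, by blending between a first and second color as the dwell progress advances, or by stepping through yellow, orange, and red. This is only an illustrative sketch; the particular colors and the source of the progress value are assumptions.

```python
def lerp_color(c0, c1, t):
    """Blend two RGB tuples; t=0 gives the first color, t=1 the second."""
    t = max(0.0, min(1.0, t))
    return tuple(round(a + (b - a) * t) for a, b in zip(c0, c1))

def stepwise_color(progress):
    """Discrete countdown colors over the predefined dwell period."""
    steps = [(255, 255, 0), (255, 165, 0), (255, 0, 0)]  # yellow, orange, red
    return steps[min(int(progress * len(steps)), len(steps) - 1)]

recognized = (0, 255, 0)       # first color: target object recognized
about_to_switch = (255, 0, 0)  # second color: reference-frame transition imminent
print(lerp_color(recognized, about_to_switch, 0.5))  # halfway through the dwell period
print(stepwise_color(0.8))                           # red: transition about to occur
```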
  • the remaining control features may reposition relative to each other to close the space left by the removed control feature. For example, additional control features can appear on the side and virtually appear to push the existing control features to fill the gap. In an exemplary embodiment, another control feature not previously present may take the place of the removed control feature.
  • a user can manipulate the control feature by moving the target object relative to the headset.
  • additional control features may be viewed or the control feature reoriented by movement of the target object.
  • a first set of control features may be presented when the target object is detected in a first orientation
  • a second set of control features may be presented when the target object is detected in a second orientation different from a first orientation.
  • the first set and second set of control features may be the same, may be different, may be entirely different, or may have a partial overlapping of objects or features.
  • the first orientation and second orientation may be different.
  • a static control set of control features may be displayed to a user when a target object is detected.
  • control features may be orientated relative to the target object, such that control features may be moved relative to the user by moving the target object. For example, rotating the target object may rotate the set of control features so that different perspectives or groups of control features are more easily identified by a user or brought into view or moved out of view of the user.
  • the attached control feature may be removed, replaced, or repositioned relative to the control features not attached to the target object (i.e. the remaining objects).
  • the attached control feature may be moved by moving the target object, while the remaining control features stay pinned to their original reference frame and location.
  • the target object may be moved such that it appears the attached control feature is moved away from the remaining control features.
  • the remaining control features may move or reposition to fill the void or space previously occupied by the selected control feature in the reference frame of the physical target object.
  • the user may move the selected control feature near the remaining control features.
  • the remaining control features may move or reposition to create a void or space to accommodate the selected control feature.
  • the system may receive an input such as an elapse of a predetermined amount of time, a control input such as an electronic input from a button push (for example), or other signal to the system to change the reference frame.
  • the reference frames may change to or from the rotational reference frame to the target object reference frame.
  • the attached control feature may change from the target object by detaching the control feature from the target object and thereafter attaching to the rotational reference frame. The control feature may therefore transition from the second reference frame back to the first reference frame.
  • the user may activate a plurality of control features by rotating, translating, or otherwise physically moving the physical target object within the field of view of the headset or otherwise within a space detectable by the headset or in communication with the headset.
  • An exemplary embodiment is configured to define and/or display to a user one or more control features in a static location relative to a target object (also referred to herein as a tracked object).
  • the tracked object is a recognized object such that the dimensions, orientation, or other spatial characteristics of the object are known to the system.
  • a tracked object may include a cardboard sheet having a predefined, non-symmetric image printed thereon.
  • Exemplary embodiments may therefore provide any combination of control features, including, for example, one or more icons, objects, text, colors, menus, lists, displays, controls, commands, etc.
  • FIG. 9 illustrates an exemplary tracked object 92, and a plurality of control features 22F-22G fixed relative to the tracked object. As shown, the plurality of control features are above the target object in a static or predefined orientation, location, and distance from the tracked object. The plurality of control features may be anywhere relative to the tracked object.
  • FIG. 10 illustrates the static association of the plurality of control features to the tracked object, such that the rotation of the tracked object similarly rotates the control features.
  • a control feature may be produced such that its relative location to the user may be changed.
  • the control feature may be presented to have a depth variation that may change as the user interacts with the virtual object.
  • depth may be determined based on a physically tracked object detected.
  • depth may be changed based on an input from an outside controller, such as a remote control.
  • Exemplary embodiments described herein include associating one or more control features to a tracked object.
  • the one or more control features may be displayed to the user relative to the tracked object once the tracked object is identified by the system.
  • the tracked object may be identified by the system through one or more input sensors, such as visual detection from a camera of the headset system viewing the target object, or signal (such as infrared, Bluetooth, short wave, radio frequency, acoustic frequency, etc.) detection, through a sensor in communication with the headset system, of an output signal from the target object.
  • the headset system may include a database or other look up system for associating an input to a tracked object, such that one or more attributes (such as dimension, size, shape, color, associated control features, etc.) of the tracked object may be known to the system.
  • Exemplary embodiments described herein include displaying or presenting one or more control features relative to a tracked object based on the position, orientation, identification, and combinations thereof of a tracked object.
  • a first tracked object different from a second tracked object may have a first set of control features associated with it and displayed to a user when the first tracked object is detected by the headset system and within the viewing area of a user.
  • the second tracked object may have a different set of control features associated with it and displayed to the user when the second tracked object is detected by the headset system and within the viewing area.
  • the difference of control features associated with a tracked object may include overlap of control features.
  • a first tracked object may have associated with it a plurality of control features.
  • the display of one or more of the plurality of control features may depend on the orientation of the tracked object detected by the headset system. For example, a first set of control features may be presented to a user with respect to a first tracked object in a first orientation, and a second set of control features different from the first set of control features may be presented to a user with respect to the same first tracked object in a second orientation different from the first orientation. The second orientation may be rotated from the first orientation.
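One way to sketch the orientation-dependent lookup just described is a library keyed by a recognized object identifier and a coarse orientation bucket, as below. The identifiers, bucket names, and thresholds are illustrative assumptions rather than the patent's data.

```python
CONTROL_FEATURE_LIBRARY = {
    ("record_album_01", "face_up"):    ["track_list"],
    ("record_album_01", "rotated_90"): ["related_songs", "same_artist", "same_genre"],
    ("tracker_card_01", "face_up"):    ["system_menu", "volume"],
}

def orientation_bucket(roll_deg: float) -> str:
    """Quantize a detected roll angle into the coarse buckets used by the library."""
    return "rotated_90" if 45.0 <= (roll_deg % 180.0) < 135.0 else "face_up"

def control_features_for(tracked_object_id: str, roll_deg: float):
    """Return the set of control features to display for this object and orientation."""
    return CONTROL_FEATURE_LIBRARY.get((tracked_object_id, orientation_bucket(roll_deg)), [])

print(control_features_for("record_album_01", 0.0))   # ['track_list']
print(control_features_for("record_album_01", 92.0))  # ['related_songs', 'same_artist', 'same_genre']
```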
  • FIGS. 11A-11B illustrate an exemplary application in which a single tracked object may have different control features associated with it that are displayed depending on the orientation of the tracked object relative to the headset system.
  • FIG. 11A illustrates an exemplary tracked object 92A and associated control feature 22H.
  • the displayed control feature 22H is displayed based on the detected and recognized tracked object 92A and the orientation of the tracked object 92A.
  • a specific record album is defined as the tracked object 92A.
  • the orientation of the tracked item on its side relative to the headset system indicates a first control feature to display to the user.
  • the track list associated with the album is displayed to the user.
  • FIG. 11B illustrates an exemplary tracked object 92B in another orientation and thereby displaying a different associated control feature.
  • the displayed control feature 22I is displayed based on the detected and recognized tracked object 92A and the orientation of the tracked object 92A.
  • the album of FIG. 11A is rotated.
  • the associated control feature 22I is therefore different than that of FIG. 11A as the detected orientation of the tracked object is different.
  • the same album is displayed but rotated 90 degrees from that of FIG. 11A.
  • the control features associated with the tracked object this time illustrate songs related to the album.
  • the related songs may be displayed as different albums or songs by the same artist, in the same genre, etc.
  • a target viewing area 42 may be displayed to the user.
  • the target viewing area 42 may be represented in any way as described herein.
  • the target viewing area may be displayed when a control feature or target object is brought in proximity to the target viewing area.
  • the representation or other virtual display of the system may change or be displayed based on the proximity to a target viewing area or elapsed time within or proximate to a target viewing area.
  • the target object may be manipulated or moved to bring an associated control feature 22I within a target viewing area 42 to activate an individual control feature.
  • FIGS. 11A-11B illustrate exemplary views of using a physical object to generate and activate a virtual object.
  • the physical object may be used to generate a specific virtual image, select, move, rotate, alter, modify, interact, control, manipulate, adjust, engage or otherwise activate the virtual image, or combinations thereof.
  • a plurality of different sets of virtual objects is associated with a physical object.
  • the system selects which set of the plurality of different sets of virtual objects to display to a user based on the physical object, including its position, orientation, direction, and combinations thereof in three dimensional space in a specific reference frame.
  • in FIGS. 11A and 11B, different virtual control features are displayed depending on the orientation (rotation) of the target object.
  • the generated virtual object may depend on movement of the target object.
  • a first virtual object may be displayed to a user once a physical obj ect is recognized.
  • a user may thereafter cycle through or view different virtual control features by rotating, translating, or otherwise moving the physical target object.
  • the virtual control feature first displayed to a user may be the album's track list.
  • the system may thereafter recognize that the album is rotated, and other virtual control features, such as related songs, etc. are sequentially displayed to the user.
  • a user engages or views a plurality of control features by bringing a predefined and/or recognizable physical object into the field of detection (view) of the headset.
  • Exemplary embodiments permit a user to manipulate the plurality of control features by moving the target object relative to the headset.
  • additional control features may be viewed or the control feature reoriented by movement of the target object.
  • a first set of control features may be presented when the target object is detected in a first orientation
  • a second set of control features may be presented when the target object is detected in a second orientation different from a first orientation.
  • the first set and second set of control features may be the same, may be different, may be entirely different, or may have a partial overlapping of objects or features.
  • the first orientation and second orientation may be different.
  • a control feature may be positioned in a three dimensional position based on a hierarchy relative to the target object and/or user's field of view.
  • a control feature may be viewed or displayed by looking in a predefined direction relative to the target object.
  • the control features may be viewed by keeping the headset stationary and moving the physical target object, by keeping the physical target object stationary and moving the headset, or combinations thereof, such that the relative motion between the headset and the physical target object is in the predefined direction. Therefore, a physical object may be manipulated while the user's physical field of view is maintained in order to bring control features into a field of view.
  • the system is configured to receive images or input corresponding to one or more objects, and recognizes target objects from the received images or input.
  • the system may, for example, receive an image through the camera, and detect objects within the image.
  • the system may comprise a library of target objects.
  • the objects are not symmetric such that an orientation of the object may be determined.
  • symmetric objects may also be used.
  • the library may associate one or more control features to the target object.
  • the library may associate one or more orientations of the target object with one or more control features.
  • the library may associate different orientations of the same target object with one or more control features that may be the same or different.
  • the library may associate a presentation of the one or more control features relative to an orientation of the target object, such that a configuration of the control feature is virtually displayed as static relative to the physical target object.
  • the library may contain dimensional information of the target object, such that a relative distance may be determined and the size, orientation, or selection may be determined based on the relative distance of the target object.
  • the library may be a database in communication with the headset.
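One way the stored dimensional information could support a relative-distance determination is a simple pinhole-camera relation: distance ≈ focal length (in pixels) × known width / apparent width in the image. The library entries, focal length, and object names below are assumptions for illustration only, not the patent's data.

```python
TARGET_LIBRARY = {
    "tracker_card": {"width_m": 0.20, "control_features": ["menu", "volume"]},
    "record_album": {"width_m": 0.31, "control_features": ["track_list"]},
}

FOCAL_LENGTH_PX = 1400.0  # assumed value from camera calibration

def estimate_distance_m(object_id: str, apparent_width_px: float) -> float:
    """Estimate distance to a recognized target object from its apparent width in pixels."""
    known_width = TARGET_LIBRARY[object_id]["width_m"]
    return FOCAL_LENGTH_PX * known_width / apparent_width_px

# A tracker card spanning 350 pixels in the camera image is roughly 0.8 m away.
print(round(estimate_distance_m("tracker_card", 350.0), 2))
```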
  • a target object may be used to determine which plurality of control features to be displayed to a user.
  • a system may be configured with an input device, such as an optical device for receiving an image.
  • the system may be configured to recognize a target object within the received image.
  • the system may be configured to look up a set of one or more control features to display to the user based on the target object.
  • the system may also store, and be configured to look up, a scheme for selecting from a plurality of different control features associated with the target object.
  • the system may store and be configured to select a specific control feature based on the orientation of the target object or the relative movements of the target object.
  • the control feature may be oriented or displayed in a given relative position to or from the target object.
  • the system may be configured to track the target object in order to activate the control feature.
  • the control features may be virtually displayed such that the illusion is created that the control feature is at a static orientation or configuration relative to the target object.
  • the system may be configured to detect and track motion of the target object.
  • a selection or interaction with the control feature may occur by rotation, translation, or other manipulation of the target object.
  • the control feature is the available tracks of an album
  • the user may select a given track from the list by translating the target object up or down or by rotating the target object about an axis in plane with the album.
  • a user may make a selection by moving the target object in a predefined direction.
  • the control feature may be information of a track, a song, or a display interface representing a playback system, where the user may translate the album (target object) in any predefined direction to control the play features of the track list associated with the album.
  • a user may lift the album upward to pause the playback, may translate to one side to skip forward, may translate to another side to skip backward, etc.
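The playback controls described in the preceding bullets might reduce to a small mapping from the target object's translation to commands, as in the sketch below; the movement threshold, axis conventions, and command names are assumptions.

```python
MOVE_THRESHOLD_M = 0.05  # ignore small jitter in the tracked position

def playback_command(dx: float, dy: float) -> str:
    """Map a translation of the target object (meters) to a playback action."""
    if dy > MOVE_THRESHOLD_M:
        return "pause"          # lift the album upward to pause playback
    if dx > MOVE_THRESHOLD_M:
        return "skip_forward"   # translate to one side to skip forward
    if dx < -MOVE_THRESHOLD_M:
        return "skip_backward"  # translate to the other side to skip backward
    return "none"

print(playback_command(dx=0.0, dy=0.08))   # pause
print(playback_command(dx=-0.07, dy=0.0))  # skip_backward
```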
  • different orientations of the target object may present independent and separate control features.
  • a target object in a first configuration may present a first set of control features.
  • the target object may be rotated 90 or 180 degrees, such that an orthogonal orientation is recognized by the system.
  • This orientation of the target object may indicate a second set of control features to be displayed to the user.
  • the first and second set of control features may be independent, such as relating to different control functions, and/or relating to different information presented to the user.
  • the movement of the target object, such as the rotation or translation of the physical target object, may advance a counter.
  • the counter may be any incremented or decremented value stored in code or displayed to a user.
  • the rotation of the physical target object may increment or decrement a counter such that each turn increases or decreases a value.
  • Exemplary values may include, but are not limited to, those associated with volume, zoom, opacity, brightness, game levels, etc.
  • the increment or decrement may be based on the relative rotation, such as one rotational direction increments and the opposite relative rotational direction decrements the counter.
  • a counter may be incremented/decremented based on partial rotation of the target physical object.
  • a counter may be incremented/decremented past a full 360 degree rotation of the target physical object such that it continuously counts past sequential revolutions. Exemplary embodiments may reset a counter at a full rotation of the target physical object.
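A sketch of the rotation-driven counter described above, under assumed names: successive detected angles of the target object are unwrapped so the counter can continue smoothly past a full 360 degree revolution, and the sign of the accumulated rotation determines whether the value increments or decrements.

```python
class RotationCounter:
    """Accumulates unwrapped rotation of the target object into an integer counter."""

    def __init__(self, degrees_per_step: float = 30.0):
        self.degrees_per_step = degrees_per_step
        self._last_angle = None
        self._accumulated = 0.0

    def update(self, angle_deg: float) -> int:
        """Feed the latest detected rotation of the target object; return the counter value."""
        if self._last_angle is not None:
            delta = angle_deg - self._last_angle
            if delta > 180.0:        # unwrap across the 0/360 boundary
                delta -= 360.0
            elif delta < -180.0:
                delta += 360.0
            self._accumulated += delta   # one direction increments, the other decrements
        self._last_angle = angle_deg
        return int(self._accumulated / self.degrees_per_step)

counter = RotationCounter()
for angle in (350.0, 10.0, 40.0):    # rotating through the 0/360 boundary keeps counting
    value = counter.update(angle)
print(value)  # 1: fifty degrees of net rotation, one 30-degree step
```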
  • Exemplary embodiments may include one or a plurality of physical targets that may work alone or in tandem to activate a control feature, code, a display, or other virtual, electronic, digital, or physical object of the virtual reality or augmented reality environment or internet of things devices.
  • two trackers may be used in relation to each other to increment or decrement a counter, similar to the rotational control described above with respect to a single counter. Therefore, a greater relative separation between two tracker objects may increase the counter, while a closer relative separation distance may decrement the counter.
  • the relative position and orientation of two physical tracker objects may be used. The relationship may, for example, permit a user to adjust a first feature of the system by the relative distance between the target objects, and a second feature of the system by the relative orientation between the target objects. Any combination of position, display, orientation, etc. may be used to select, modify, interact, control, manipulate, adjust or otherwise engage features of the system.
  • the relative distance between two target objects may act to zoom a virtual object
  • the relative position may act to orient or rotate a virtual object
  • the relative rotational orientation may act to select a virtual object.
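A hedged sketch of the two-tracker relationships described above: the separation distance between the trackers drives a zoom factor, while their relative rotational orientation is quantized into a selection index. The reference distance and item count are illustrative assumptions.

```python
def zoom_from_separation(distance_m: float, reference_m: float = 0.30) -> float:
    """Greater separation increases the zoom factor; closer separation decreases it."""
    return max(0.1, distance_m / reference_m)

def selection_from_relative_yaw(yaw_a_deg: float, yaw_b_deg: float,
                                num_items: int) -> int:
    """Quantize the relative rotation between two trackers into a selection index."""
    relative = (yaw_b_deg - yaw_a_deg) % 360.0
    return int(relative / 360.0 * num_items) % num_items

print(zoom_from_separation(0.60))                   # 2.0x zoom at twice the reference distance
print(selection_from_relative_yaw(10.0, 130.0, 6))  # item 2 of 6
```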
  • Exemplary embodiments also include a single target object, such as a tracker, used in relation to another virtual or physical object.
  • the other object can be pinned to a different reference frame and not to the tracker object. Therefore, for example, a virtual object may be pinned to the physical environment, and a relative distance, position, orientation, or combinations thereof may be determined between the target object, such as the tracker, and the virtual object pinned to the physical environment.
  • the one or more target objects may be virtual objects, physical objects, or combinations thereof.
  • the target object may be virtual objects displayed on a physical screen and identified by the system.
  • the one or more target objects are displayed and manipulated through a touch interface of a control feature, such as a phone or tablet.
  • the user may bring the desired control feature into a predefined viewing area.
  • a static target area may also be displayed to the user.
  • the static target area may be static to the viewing space and not relative to any specific field of view.
  • a target area may be displayed at the center of the user's viewing area.
  • the user may select an item by bringing the control feature into the target viewing area.
  • the user may select the item by hovering or bringing the control feature and/or target object into the target viewing area for a predetermined amount of time.
  • the target viewing area may be identified to a user with a virtual overlay displayed to the user.
  • the target viewing area may not be visually identified to a user.
  • An indication may be given to a user that a control feature is within the target viewing area, such as by changing the appearance of the control feature, such as its size, shape, color, etc.
  • Other indicators may also be provided such as sound or other virtually displayed indicators that a control feature is in the target viewing area.
  • a counter may be virtually displayed to indicate a time until a control feature is selected if it remains in the target area for the remaining counter time.
  • a user may orient the physical object to control or select the control feature.
  • the physical object may be rotated such that the desired control feature is perceived to be the closest object relative to the user.
  • the user may use other gestures or inputs to indicate a selection of this object.
  • the user may move the target object closer or farther away from the user such that the object may be selected by a virtual push or pull of the object.
  • Other input devices may also be used to make a selection as described herein.
  • the user may use another input device to select a control feature.
  • a controller such as a remote, button, joystick, keyboard, mouse, etc. may be used to provide an input to the system.
  • the system may recognize a selection when a predefined combination of inputs is detected. For example, the system may detect one control feature in the field of view, or determine a control feature perceived to be the closest relative to the user, and a control input, such as a push of a control button, to determine a selection was made for the one control feature or perceived nearest control feature in the field of view. For example, the system may detect one control feature in a target location within the field of view and a control input, such as a push of a button, to determine a selection was made for the control feature in the target location. Other control features may be used, such as recognition of a pointer, object, hand, etc. in the field of view.
  • a user may physically select a control feature by pointing to its virtual location represented in physical space with a control object.
  • the system may receive location information of the control object, such as by using a camera, detect the control object as a control selection object, and determine that the control selection object is pointing at a specific control feature as perceived by the user in the augmented reality environment.
  • Control selection objects may be specifically defined and recognized objects or may be any object detected as proximate the physical location of the virtually represented control feature. Other gestures of the object may also be used, such as swiping across, circling, or otherwise indicating a selection with the target object or with another object relative to the control feature and/or perceived location of the control feature.
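One possible reading of the combined-input selection described in the preceding bullets is sketched below (all names are assumptions): a selection registers only when a control input arrives, and it resolves to the single control feature in the target location if there is one, otherwise to the control feature perceived nearest to the user.

```python
from typing import List, Optional

class FeatureState:
    """Assumed per-frame state of one displayed control feature."""
    def __init__(self, name: str, in_target_location: bool, distance_m: float):
        self.name = name
        self.in_target_location = in_target_location
        self.distance_m = distance_m  # perceived distance from the user

def resolve_selection(features: List[FeatureState],
                      control_input_pressed: bool) -> Optional[str]:
    """Return the selected control feature name, or None if no selection is made."""
    if not control_input_pressed:
        return None
    targeted = [f for f in features if f.in_target_location]
    if len(targeted) == 1:
        return targeted[0].name
    if features:                      # fall back to the perceived-nearest control feature
        return min(features, key=lambda f: f.distance_m).name
    return None

features = [FeatureState("map", True, 1.2), FeatureState("music", False, 0.9)]
print(resolve_selection(features, control_input_pressed=True))  # "map"
```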
  • Exemplary embodiments and description herein with respect to the tracked object may be applied to the transition between reference frames described herein and vice versa.
  • different target objects may be associated with different control features.
  • once a target object is brought into a viewing area, the associated control objects may be displayed.
  • One or more of the control features of the target object may be positioned in a target viewing area to transition the reference frame of the selected control feature from the target object to the headset system.
  • exemplary embodiments described herein may be used to configure the augmented reality headset system and the associated control feature available to a user.
  • control features according to any embodiment described herein may be associated with a target object according to embodiments described herein. Accordingly, a user may make their own library of control features that can store and display information to a user by selecting and bringing a specific target object within the field of detection of the headset system.
  • the reference frame may be relative to a user's head in a given position, relative to the headset, relative to a viewing area, in one, two, or three dimensions.
  • the reference frame may be relative to a target object.
  • Exemplary embodiments may transition from different reference frames for controlling a virtual control feature.
  • a virtual control feature may be positioned in a first reference frame and then transitioned into a second reference frame.
  • the first reference frame may provide three dimensional tracking, and thereby permit the virtual control feature to be adjusted based on a perceived depth.
  • the second reference frame may provide two dimensional tracking, and thereby not permit the virtual control feature to be adjusted based on a perceived depth.
  • the virtual control feature may be moved in the two dimensional space in front of the user, but maintain a same size/dimension to represent the same apparent depth.
  • the first and second reference frames may be switched, such that the system transitions from the second reference frame to the first reference frame.
  • the perceived dimension in the second reference frame of the virtual control feature may remain static or determined based on the last perceived dimension of the virtual control feature when attached to the first reference frame.
  • a virtual control feature may be positioned relative to a recognized physical target object.
  • if the recognized physical object is removed from the detected field of view, the virtual control feature may remain in the user's field of view and transition to a static position relative to the user's field of view. Therefore, the representation of the virtual control feature may transition from a reference frame pinned to the recognized physical object to a new reference frame pinned to the headset or physical environment.
  • the new reference frame may also be pinned to another physical object in the recognized field of view.
  • the virtual control feature's relative position may reattach to the recognized physical object.
  • the virtual control feature may have a reassigned relative position/orientation relative to the physical object, or the virtual control feature may move to the pinned position/orientation relative to the physical object before the physical object was removed from the detected field of view.
  • the virtual control feature may therefore appear to jump or teleport to a new location from the new reference frame back to the original reference frame.
  • a virtual control feature may be positioned relative to a headset or head orientation of the user.
  • a physical target object may be used to transition the virtual control feature's reference frame to that controlled by the physical object.
  • the physical object may be a recognized physical tracker or a remote controller.
  • a hierarchy or multiplicity of virtual control feature representations may be used in relation to different reference frames.
  • a first set of one or more virtual control features may be positioned relative to a user's viewing position (such as the system described with respect to FIGS. 2-4 herein)
  • a second set of one or more virtual control features may be paired or moved based on a physical object (such as a physical tracked object described with respect to FIGS. 9-10 or remote control)
  • a third set of one or more virtual control features may be paired or moved based on the headset (such as a static position within the user's field of view).
  • Virtual control features may be moved from the first, second, third, and any combination thereof sets by inputs to the system.
  • Exemplary control features include, for example, virtually displayed objects, icons, menus, text, lists, objects, pictures, applications, short cuts, etc.
  • Exemplary control features may be used to create or define menus such as system menus (wifi, airplane mode, brightness, universal music controls), application-specific menus (similar to the
  • Control features may be any virtual object provided to the user in response to a system input and/or any virtual object used to change (such as control or manipulate) the virtual experience.
  • a set of control features are displayed to a user based on the target object and/or an orientation of the target object.
  • a set of control features are displayed to a user based on a user configuration, an identified user, an available set of applications, a launched application, etc.
  • the software is configured to retrieve images from the forward facing camera.
  • the received images may be adjacent, proximate to or overlap a physical field of view of the user.
  • the software may be configured to use the received images to determine placement of virtual objects, determine which virtual objects to use, determine a size, location, or orientation of the virtual objects, recognize objects within a field of view of a user, determine movement of the headset, tracking of objects and corresponding position placement of virtual objects, and combinations thereof.
  • Exemplary embodiments may therefore include a smartphone having a front display screen and a front facing camera.
  • the smartphone may include a processor for executing software and memory for storing software in a non-transitory, machine readable medium.
  • Exemplary embodiments include software that when executed by the processor performs any combination of features described herein. Exemplary embodiments are described in terms of smartphones for the display and processing power of the exemplary embodiments.
  • any mobile electronic device may be used.
  • dedicated electronic devices, tablets, phablets, gaming consoles, miniature televisions, smart displays, or other electronic displays may be used.
  • Exemplary embodiments also encompass displays having remote processing power such that the execution of the methods described herein may be performed at the electronic device, remote from the electronic device and communicated to the electronic device, or combinations thereof.
  • the electronic device includes a front facing camera for receiving images to perform functions described herein.
  • Exemplary aspects of the software supporting the augmented reality system may include a computer vision component, relational positioning between the position determined by the computer vision component and the software rendering cameras, stereoscopic rendering in stereoscopic embodiments, counter distortion shaders, and combinations thereof.
  • Exemplary embodiments of the computer vision component may process a realtime video stream from the front-facing camera to determine the headset's position in the physical world.
  • the computer vision component may allow for "six degree of freedom" positional tracking.
  • the computer vision component tracks preprogrammed markers that may include two-dimensional images or three-dimensional objects.
  • the computer vision component may be able to identify a single marker individually, multiple markers independently, or multiple markers simultaneously.
  • the computer vision component tracks environmental features without global mapping.
  • the computer vision component uses Simultaneous Locating and Mapping (SLAM) techniques to build and reference a closed-loop global map from environmental features.
  • the computer vision component is implemented by plugging-in a pre-existing computer vision, augmented reality tracking, or SLAM library.
  • the front-facing camera feed is pre-undistorted before being fed into the computer vision component in order to improve the quality or mapping of the tracking.
  • the computer vision component may produce an x, y, z, pitch, yaw, roll coordinate in a coordinate system pre-defined by the implementation.
  • the computer vision component may be configured such that references are from the origin point and the component outputs a displacement vector of any identified markers from the origin point.
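As an illustration (with assumed field names), the computer vision component's output can be modeled as an (x, y, z, pitch, yaw, roll) pose in the pre-defined coordinate system, with an identified marker reported as a displacement vector from the origin point:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Pose:
    """Six-degree-of-freedom coordinate produced by the computer vision component."""
    x: float
    y: float
    z: float
    pitch: float
    yaw: float
    roll: float

def displacement_from_origin(marker: Pose, origin: Pose) -> Tuple[float, float, float]:
    """Displacement vector of an identified marker from the pre-defined origin point."""
    return (marker.x - origin.x, marker.y - origin.y, marker.z - origin.z)

origin = Pose(0.0, 0.0, 0.0, 0.0, 0.0, 0.0)
marker = Pose(x=0.12, y=-0.05, z=0.80, pitch=2.0, yaw=15.0, roll=0.0)
print(displacement_from_origin(marker, origin))  # (0.12, -0.05, 0.8)
```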
  • Implementations of embodiments described herein may have a pre-calculated positional relationship between the front-facing camera and the virtual camera or cameras that produce output such that virtual objects appear in the correct position when displayed on the smartphone screen and combined through the optical element with the physical world.
  • the method for calculating this positional relationship may depend on the design of the optical element.
  • the pre-calculated positional relationship may provide positions for two software cameras to produce separate imagery for left and right eyes.
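A simplified sketch of how such a pre-calculated relationship could place two software cameras for left- and right-eye imagery. The camera-to-eye offset and interpupillary distance are assumed values, and for brevity the headset is taken to be level; a full implementation would rotate the offsets by the tracked orientation.

```python
from typing import Tuple

Vec3 = Tuple[float, float, float]

IPD_M = 0.063                                    # assumed interpupillary distance (meters)
CAMERA_TO_EYE_MIDPOINT_M = (0.0, -0.08, -0.02)   # assumed front-camera -> eye-midpoint offset

def virtual_camera_positions(camera_pos: Vec3) -> Tuple[Vec3, Vec3]:
    """Return (left_eye, right_eye) software-camera positions in tracking coordinates."""
    cx, cy, cz = camera_pos
    ox, oy, oz = CAMERA_TO_EYE_MIDPOINT_M
    mid = (cx + ox, cy + oy, cz + oz)
    half = IPD_M / 2.0
    left = (mid[0] - half, mid[1], mid[2])
    right = (mid[0] + half, mid[1], mid[2])
    return left, right

left, right = virtual_camera_positions((0.0, 0.0, 0.0))
print(left)   # (-0.0315, -0.08, -0.02)
print(right)  # (0.0315, -0.08, -0.02)
```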
  • the software methods are implemented in a game engine.
  • these methods may be distributed as part of a software development kit (SDK) to allow for developers to create apps integrating these methods without having to implement these methods themselves.
  • Exemplary embodiments of a physical object such as the target object described herein may be a remote controller having a track pad, buttons, push button, throttle, paddle button, touch pad, or other input, and any combination thereof.
  • the remote controller may also include motion detection such as accelerometer, GPS, and other sensors for detecting and determining motion.
  • the remote controller may be used to activate a virtual object by entering an input through one or more of the inputs, such as pushing a zoom button to bring a virtual object closer or a retreat button to send a virtual object further away. Other control configurations may also be used, such as moving the controller to create corresponding motions in the virtual object.

Abstract

Exemplary systems described herein include control systems for use in a three dimensional display or environment. Exemplary control features may include virtual objects positioned relative to one or more references frames. Exemplary embodiments include activation of the control features. Exemplary embodiments include transitioning control features between reference frames.

Description

U.S. PATENT AND TRADEMARK OFFICE
CONTROL SYSTEM FOR A THREE DIMENSIONAL
ENVIRONMENT
PRIORITY
[0001] This application claims priority to U.S. Application No. 62/491,942, filed April 28, 2017; U.S. Application No. 62/491,997, filed April 28, 2017; U.S. Application No.
62/492,033, filed April 28, 2017; U.S. Application No. 62/492,051, filed April 28, 2017; and U.S. Application No. 62/533,614, filed July 17, 2017; each of which is incorporated by reference in its entirety into this application.
BACKGROUND
[0002] Head Mounted Displays (HMDs) produce images intended to be viewed by a single person in a fixed position related to the display. HMDs may be used for Virtual Reality (VR) or Augmented Reality (AR) experiences. The HMD of a virtual reality experience immerses the user's entire field of vision and provides no image of the outside world. The HMD of an augmented reality experience renders virtual, or pre-recorded images superimposed on top of the outside world.
[0003] Conventional computer systems use menus to navigate different programs executed on the system. For example, a conventional Macintosh (Mac) based system has a menu bar at a top of a screen that identifies options for controlling functions of the Mac and/or programs running thereon.
[0004] These menu systems are ideal for two dimensional display systems where a majority of the display space is used for the active program, and a small, localized part of the display is used for a navigation system. In this case, the display space is static, defined by the size of the screen. Therefore, the trade-offs between display space for active programs and menus are similarly static. Given the static environment of the two-dimensional display, it is easy to select the most appropriate space for displaying menus that does not interfere or minimally interferes with the usable space of the display. [0005] Virtual reality and augmented reality systems provide images in a three dimensional, immersive space. Because of the immersive environment, there is not a convenient, dedicated location for a menu or control system to be positioned. For example, if a dedicated portion of the field of view is used for a control system, the virtual objects can overlap real world objects, obstruct the user's field of view, and interfere with the immersive nature of the system and environment. Therefore, an adaptation or integration of the conventional 2D menu bar or system feels out of place in the virtual, immersive environment and the look detracts from the immersive feel of the experience. Accordingly, a new control system, and user interface is desired for use in systems that operate in an immersive environment.
SUMMARY
[0006] Exemplary systems described herein include control systems for use in a three dimensional display or environment. Exemplary embodiments include control features pinned to a user's head, pinned to location in the three dimensional world (virtual world or physical world), pinned to the system or a component thereof, pinned to a physical object within the physical world, and combinations thereof. Exemplary embodiments include transitional methods to transfer the system from one control system to another control system.
[0007] Exemplary embodiments described herein include a number of unique features and components. No one feature or component is considered essential to the invention and may be used in any combination or incorporated on any other device or system. For example, exemplary embodiments described herein are generally in terms of an augmented reality system, but features and components described herein may be equally applicable to virtual reality systems or other head mounted systems. Accordingly, headset system is intended to encompass any head mounted system including, but not limited to, augmented reality and virtual reality systems.
DRAWINGS
[0008] FIGS. 1A-1B illustrate an exemplary headset that may incorporate the control system described herein.
[0009] FIGS. 2A-2B illustrate exemplary control features according to embodiments described herein. [0010] FIG. 3 illustrates an exemplary physical environment defining different viewing areas for overlay with virtual objects.
[0011] FIGS. 4A-4Q illustrates exemplary user experiences according to embodiments described herein.
[0012] FIG. 5 illustrates an exemplary display of control features within a viewing area of a user superimposed over a physical environment.
[0013] FIGS. 6A-6B illustrates exemplary embodiments of changing a reference frame of a control feature.
[0014] FIGS. 7-8 illustrate exemplary associations between a physical target object and the corresponding control feature.
[0015] FIGS. 9-10 illustrate an exemplary tracked object, and a plurality of control features fixed relative to the tracked object.
[0016] FIGS. 11A-11B illustrate an exemplary application in which a single tracked object may have different control features associated with it depending on the orientation of the tracked object relative to the headset system.
DESCRIPTION
[0017] The following detailed description illustrates by way of example, not by way of limitation, the principles of the invention. This description will clearly enable one skilled in the art to make and use the invention, and describes several embodiments, adaptations, variations, alternatives and uses of the invention, including what is presently believed to be the best mode of carrying out the invention. It should be understood that the drawings are diagrammatic and schematic representations of exemplary embodiments of the invention, and are not limiting of the present invention nor are they necessarily drawn to scale.
[0018] In an exemplary embodiment, the user interface and transitions between control systems described herein may be used with an augmented or virtual reality headset. For example, in the augmented reality systems described by Applicant's co-pending headset applications (US App. No. 15/944,711, filed April 3, 2018, and incorporated by reference in its entirety herein), the headset system has a frame with a compartment configured to support a mobile device, and an optical element coupled to the frame configured to reflect an image displayed on the mobile device. The headset may include an attachment mechanism between the frame and the optical element for removable and/or pivotable attachment of the optical element to the frame. The headset may include an alignment mechanism to position the optical element relative to the frame when engaged. The headset may include a retention feature to position the inserted mobile device in a predefined, relative location to the frame. The headset may include a head restraint system to couple the headset to a user's head. Although generally described herein as applicable to augmented reality, exemplary embodiments may also be used with virtual reality, immersive, or other three dimensional display systems. The applications described herein are not limited to any particular headset configuration or design. Exemplary three dimensional viewing systems include a camera for receiving images in a user's field of view or approximate thereto, and a display system for superimposing a virtual image into the user's field of view.
[0019] FIGS. 1A-1B illustrate an exemplary headset that may incorporate the control system described herein. FIG. 1A illustrates an exemplary front perspective view of an exemplary headset system 10 according to embodiments described herein. FIG. 1B illustrates an exemplary rear perspective view of an exemplary headset system 10 according to embodiments described herein. The headset system 10 includes a frame 12, optical element 14, and mounting system 16. The headset system 10 is configured to position an inserted mobile device 18 relative to the optical element 14 and the user.
[0020] Exemplary embodiments of the frame 12 may include a compartment for securing the mobile device to the headset system. The compartment may include a back plane, lateral edges, and a lower edge to retain the mobile device 18 within the compartment. The lower edge may include a lip to limit forward movement of the mobile device relative to the compartment when the mobile device abuts the lip. The compartment may include an elastic cover that deforms when the mobile device is inserted therein. The frame may also include access features to permit the user to control the mobile device from outside of the frame, without having to remove the mobile device from the frame.
[0021] The optical element 14 is configured to reflect the displayed virtual image from a screen of the mobile device to the user to superimpose the image in the field of view of the user. The optical element may include first and second portions that are mirrored
configurations about the vertical axis between the first and second portion. The first and second portions may define spherical lens of a single radius. The first and second portions may include a layer or coating for anti-reflection, reflection, hardness, scratch resistance, durability, smudge resistance, dirt resistance, hydrophobic, and combinations thereof. In an exemplary embodiment, the concave surface of the first and second portions includes a reflective coating. In an exemplary embodiment, the convex surface of the first and second portions includes an anti-reflective coating.
[0022] Exemplary embodiments of the frame 12 are configured to mate and/or retain the optical element 14. The headset system may include an attachment mechanism to couple the frame to the optical element. The attachment mechanism may include a first plurality of magnets in a first attachment mechanism of the frame and a second plurality of magnets in a second attachment mechanism of the optical element wherein adjacent ones of the first plurality of magnets alternate orientations such that the first plurality of magnets alternate polarity in a forward facing direction. The second plurality of magnets may be positioned and oriented such that each of the second plurality of magnets aligns and mates with one of the first plurality of magnets, and the second plurality of magnets have an opposing polarity directed toward a corresponding one of the first plurality of magnets. The attachment mechanism may also or alternatively include an alignment mechanism that can orient and position the optical element in a desired position relative to the frame. The alignment mechanism may include mated surfaces, such that a first surface on the frame is the mated match to a second surface on the optical element. The mated surfaces may, for example, include an indent and detent.
[0023] In an exemplary embodiment, the head restraint system 16 may include a pair of straps extending from the frame. At least one of the pair of straps may have a taper such that a first end of the at least one of the pair of straps is thinner than a second end of at least one of the pair of straps. Each of the pair of straps may include an indentation on an inner surface of each of the pair of straps, the indentation defining an ovoid shape. At least one of the pair of straps may include a buckle such that the other of the at least one of the pair of straps may be threaded through the buckle and secure the headset system to a user's head.
[0024] In an exemplary embodiment, the headset system is configured to position a screen of the mobile device at an angle, away from the frame and toward the optical element, the frame configured to not obstruct light coming into a forward facing camera of the mobile device. [0025] Augmented Reality (AR) and Virtual Reality (VR) systems may track objects in an environment. Tracking an object means that an object's position and orientation are defined relative to a coordinate or reference frame. Rotationally tracked systems position virtual objects in a virtual or augmented reality world through an inertial tracking system such that objects are oriented relative to the rotational position of the observer. Positionally tracked systems allow for six-degree-of-freedom movement of a headset or virtual object by setting a reference plane to either a tracker object or the world outside of the headset as seen by one or more sensors (camera, depth, radar, lidar, etc.). Positionally tracked systems may also use an external camera or sensor to track the position of the headset relative to that camera.
[0026] Exemplary embodiments comprise incorporating a control system into the virtual environment tagged to the user or a body part of the user, a specific field of view of the user, the headset, a physical object within the user's environment, or combinations thereof. The system includes software and hardware configured to display and project virtual control features and bring the virtual control features into a user's field of view based on the motion of the user's head, motion of the user's eyes, direction of the user's head, direction of the user's eyes, detection of a physical object within the user's environment, or combinations thereof. The system may be configured to position the virtual control features outside of a user's normal field of view, and bring one or more virtual control features into a user's field of view when the user looks or turns in a predefined direction or when an object is detected or moved in a predefined configuration within the user's detectable field of view. The system may also receive additional user inputs for selecting, manipulating, engaging, or otherwise activating the control features. The system may also be configured to transition from one control system to another control system.
[0027] An exemplary embodiment of an augmented reality headset system is configured to define and/or display to a user one or more control features in a static location relative to a user's head. In an exemplary embodiment, a plurality of control features include icons or representative objects displayed to a user in a static location relative to a user's normal head orientation. For example, a normal head orientation may be considered facing directly forward in a relaxed state. A normal head orientation may be any initial or first head orientation. The one or more control options may include icons in a static location, such as at eye level (generally in front of the relaxed user's position in the user's field of view), above eye level (generally above the normal viewing area outside of or at a periphery of the user's field of view), below eye level (generally below the normal viewing area, positioned outside of or at a periphery of the user's field of view), or along a lateral side of eye level (generally outside of the normal viewing area). Exemplary embodiments may therefore provide one or more icons, objects, text, colors, menus, lists, displays, etc. (described herein as control features) as virtual objects used for navigation, control, or other features that are at an edge of, adjacent to, or outside of a normal field of view.
[0028] FIGS. 2A-2B illustrate different perspective views of a model user 20, a viewing region, and a plurality of control features 22 fixed relative to a user's head. As shown, the plurality of control features 22 are above the normal field of view. The viewing region is represented by a planar representation of the focal plane of a normal field of view. The field of view is the area or volume in which a user can see, and a normal field of view is a specific field of view providing for the user's position, head alignment, etc. As shown, an exemplary normal field of view is with the user's head in a generally relaxed position, facing forward. A viewing region is a space in the field of view where the virtual objects are overlaid, and is defined by the headset and display system. The viewing region is illustrated as a square in space, but the perimeter may not be so geometric or discontinuous and is determined by the headset hardware, including the optical element(s) and/or display. The viewing region may be within the user's field of view.
[0029] The plurality of control features may be anywhere relative to the head location, headset, normal viewing region, and combinations thereof. For example, the plurality of control features 22 may be positioned adjacent the normal viewing region, above the normal viewing region, below the normal viewing region, along an edge of the normal viewing region, within the normal viewing region, outside of the normal viewing region, on the transitional edge of the normal viewing region to normal non-viewing region, and combinations thereof. As shown, the plurality of control features are displayed relative to a user's head above the normal viewing region. In an exemplary embodiment, the plurality of control features may resemble a complete or partial halo around the top of the user's head. The plurality of control features may also be positioned around a user's lower body portion, below the head. For example, the plurality of control features may form a portion or all of a belt or hoop around a user's midsection near a user's waist.
[0030] When the user is oriented with their head to see a normal field of view (e.g. the user's head in a generally relaxed position, facing forward, generally level), the plurality of control features may not be seen by a user. As seen in FIGS. 2A-2B, the control features are above the viewing area. In other words, the virtual objects are not displayed to the user in the normal field of view. The system may track a position of the plurality of control features, such that the plurality of control features may be displayed to the user when the viewing area encompasses one or more of the plurality of control features.
[0031] In exemplary embodiments, the system is configured to detect a user's head movement. The user may access the control features by rotating the head toward the control features. The system may determine motion in different ways. For example, the inserted mobile phone may include inertial sensors, GPS, geomagnetic field sensors, accelerometer, among others, and combinations thereof (referred to collectively as sensors). The system may be configured to receive input signals from the one or more sensors, and use a processor configured to execute non-transitory machine readable instructions stored in memory to perform the functions described herein. The system may be configured to determine a device orientation from the received sensor input. The device orientation may be used to
approximate a direction of a user's head. The system may be configured to determine a device motion, a change in orientation, and combinations thereof to determine a transitional direction. The system may be configured to use the orientation, motion, change in orientation, and combinations thereof to determine which control features to display to a user. In an exemplary embodiment, the system may use a reference frame or initial starting location in which to then relate other positions and orientations thereto. For example, the user may be instructed to put the headset on and initiate the headset with the user head position at a normal field of view. The normal field of view, or the initial field of view, may be used as a reference field of view to determine a frame of reference for any control features. For example, the user may initially put on the headset and look forward. The system may be configured to set that orientation and position as the reference frame. The system may thereafter set the control features to be positioned above or outside of the reference frame. The system may then be configured to track a position, change in position, change in orientation, change in direction, and combinations thereof to determine whether control features should be brought into the field of view of the user and displayed or projected to the user.
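The following is a minimal sketch, not taken from the disclosed system, of how a reference orientation captured at startup can be compared against the current sensor-derived head orientation to decide which head-pinned control features fall within the viewing area. The class name, feature names, field-of-view half-angles, and pitch/yaw offsets are illustrative assumptions.

```python
# Sketch: decide which head-pinned control features are inside the viewing area,
# given a reference orientation captured when the headset is first put on.
# Angles are in degrees; all placement values are illustrative.
from dataclasses import dataclass

@dataclass
class ControlFeature:
    name: str
    yaw_offset: float    # degrees left/right of the reference frame
    pitch_offset: float  # degrees above (+) or below (-) the reference frame

class HeadPinnedControls:
    def __init__(self, half_fov_yaw=20.0, half_fov_pitch=15.0):
        self.reference_yaw = 0.0
        self.reference_pitch = 0.0
        self.half_fov_yaw = half_fov_yaw
        self.half_fov_pitch = half_fov_pitch
        self.features = []

    def set_reference_frame(self, sensor_yaw, sensor_pitch):
        """Record the user's initial, relaxed head orientation as the reference."""
        self.reference_yaw = sensor_yaw
        self.reference_pitch = sensor_pitch

    def add_feature(self, feature):
        self.features.append(feature)

    def visible_features(self, sensor_yaw, sensor_pitch):
        """Return features whose pinned offsets fall within the current viewing area."""
        d_yaw = sensor_yaw - self.reference_yaw
        d_pitch = sensor_pitch - self.reference_pitch
        visible = []
        for f in self.features:
            if (abs(f.yaw_offset - d_yaw) <= self.half_fov_yaw and
                    abs(f.pitch_offset - d_pitch) <= self.half_fov_pitch):
                visible.append(f)
        return visible

controls = HeadPinnedControls()
controls.set_reference_frame(sensor_yaw=0.0, sensor_pitch=0.0)
controls.add_feature(ControlFeature("open_document", yaw_offset=0.0, pitch_offset=30.0))
controls.add_feature(ControlFeature("settings", yaw_offset=25.0, pitch_offset=30.0))

print([f.name for f in controls.visible_features(0.0, 0.0)])    # [] -- nothing in the normal view
print([f.name for f in controls.visible_features(0.0, 30.0)])   # ['open_document'] -- user looked up
```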
[0032] In an exemplary embodiment, the system may use image recognition to identify an object in the user's field of view. The system may be configured to track the object to determine a position of the control features relative to the object. For example, the system may recognize a chair in a room. When the user looks up, the chair moves toward the bottom of the user's field of view. The system may be configured to display the virtual control features based on the position of the recognized object within the user's field of view. The reference field of view may be determined based on recognized features in the field of view. In this case, the static viewing area may be relative to a first/previous field of view, such that one or more objects may be identified within a first viewing area, and motion of the objects tracked to determine relative direction/distance to bring the control features into the field of view.
[0033] In an exemplary embodiment, a user engages or views a plurality of control features by rotating in a predefined direction. The predefined direction may be relative to the normal viewing area. For example, the user looks up, down, to a side, or combinations thereof. The predefined direction may be relative to a reference viewing area. The direction may be determined by one or more sensors to detect a position of the device, orientation of the device, direction of the device, change in position of the device, change of orientation of the device, change in direction of the device, an object in a field of view of the device, the relative position of the object within a field of view of the device, and combinations thereof.
[0034] For example, the user may view the plurality of control features by simply looking up. The plurality of features may become visible once the user has looked up for a certain rotational amount. The rotational amount may be detected by inertial tracking and/or visual tracking. For example, the rotational amount may be detected by motion of the augmented reality headset (i.e. inertial tracking from accelerometer, GPS, gyroscope, compass, etc.), detected by comparison of movement of recognized objects in a current field of view compared to a previous field of view, and combinations thereof.
[0035] In an exemplary embodiment, the plurality of control features is pinned to a user's head. For example, the user may simply look up and see the plurality of control features as centered on the user and centered in the elevated field of view. The user may then access additional control features by turning side to side. Pinned to the user's head means that the plurality of control features are positioned relative to the user's head. Therefore, no matter the initial direction of the user, the user can look up (or any predefined direction) relative to a starting position and see the same perspective (such as the centered view) of the plurality of control features. Additional rotation in the same or different directions can bring in additional control features. Exemplary embodiments may be used with other tracking systems, such that even as the headset translates through a physical world, such as when the user walks through a room, or whether additional virtual objects are tracked to the physical environment or other reference frame, the control features may be available relative to the user by simply looking in the predefined direction.
[0036] In an exemplary embodiment, control features may circumscribe the user's head. For example, the plurality of control features may be continuous or discrete virtual objects virtually positioned 360 degrees around the user's head. In an exemplary embodiment, control features may be contained in a plane. The plane may be horizontal. The plurality of control features may be contained in parallel planes. Control features in a plane may define a partial or full circular or ovoid position about the user's head. Control features may be positioned at discrete locations or continuously in a dome shape over the user's head.
[0037] Exemplary embodiments may position control features relative to the user by a priority ranking. For example, the most accessed, most common, most useful, most desired, or other ranking condition may be positioned in a center position, such that a single rotation out of the normal viewing area is needed to access it. For example, a high priority control feature may be positioned directly above the normal viewing area. Lesser control features may be positioned such that additional directional movement is required, such as further in the same or different directions. For example, lower priority control features may be positioned further above or to the side of the high priority control feature. Therefore, the control features may be positioned in a three dimensional position based on a hierarchy relative to the user, present field of view, normal viewing area, and combinations thereof. Exemplary embodiments described herein generally refer to being positioned or pinned to a user's head, however, the disclosure is not so limited. Exemplary embodiments may include a relational position based on a user's body part, such as the head, waist, mid-section, a system component such as the headset or system controller, environmental component such as a detected object, and combinations thereof.
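As a sketch of this priority-based placement (not the patent's layout algorithm; the function name, angular spacing, radius, and elevation below are illustrative assumptions), the highest-ranked control feature can be centered directly above the normal viewing area while lower-ranked features alternate outward to either side:

```python
# Sketch: place control features around the user's head by priority so the
# highest-priority feature sits directly above the normal viewing area and
# lower-priority features fan out to alternating sides.
import math

def place_by_priority(feature_names, radius=1.0, elevation=0.6, spacing_deg=25.0):
    """Return {name: (x, y, z)} positions relative to the head.

    feature_names is assumed to be ordered highest priority first. The first
    feature is centered (yaw 0, straight ahead and above); subsequent features
    alternate right/left at increasing yaw offsets.
    """
    positions = {}
    for rank, name in enumerate(feature_names):
        side = 1 if rank % 2 == 1 else -1
        steps = (rank + 1) // 2
        yaw = math.radians(side * steps * spacing_deg)
        x = radius * math.sin(yaw)   # lateral offset
        z = radius * math.cos(yaw)   # forward offset
        y = elevation                # above eye level
        positions[name] = (round(x, 3), round(y, 3), round(z, 3))
    return positions

print(place_by_priority(["open_document", "settings", "wifi", "brightness"]))
```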
[0038] Exemplary control features include, for example, icons, menus, text, lists, objects, pictures, applications, etc. Exemplary control features may be used to create or define menus such as system menus (wifi, airplane mode, brightness, universal music controls), application-specific menus (for example, the file/edit/view/help top menu in a traditional Windows/Mac program), or contextual menus (specifically relating to an object a user may be interacting with, whether physical or virtual). Exemplary control features may also be applications and the system described herein is used as an application launcher. Therefore, instead of selecting or expanding a menu item, an application can be selected and launched.
[0039] Exemplary embodiments may also track a user's direction of eyes relative to the head, such that a user may see one or more control features by keeping their head still, but looking into the peripheral field of view around a normal viewing area in the predefined direction.
[0040] In an exemplary embodiment, the user's head or eye direction may be used to control the display of control features for the user, but not necessarily tag a control feature to the user, user's head, and/or headset. For example, the system may not have static control features tagged to a user's head, such as the configurations illustrated herein with respect to FIGS. 2A-2B. Instead, the system may simply display control features in any display pattern when the user looks in a predefined direction (either by moving their eyes or their head). Therefore, a user may turn their head upwards by raising their chin, which activates the control function display. In an exemplary embodiment, the control function display may bring down control features into a field of view, or may simply have control features appear or translate into the field of view from any direction or position to any final location within the field of view.
[0041] An exemplary system comprises a virtual reality or augmented reality headset configured to display virtual objects and control features to a user. The virtual objects and control features appear in a user's field of view. The system may include software (a non-transitory machine readable medium) stored in memory that, when executed by a processor, performs functions described herein, including displaying one or more control features to a user when a user changes a field of view in a predetermined direction. In an exemplary embodiment, the predetermined direction is upwards. In an exemplary embodiment, the system displays the plurality of control functions oriented relative to a user's head or the headset. In an exemplary embodiment, the system displays the plurality of control functions oriented relative to a user's normal field of view.
[0042] In an exemplary embodiment, the user may navigate through a plurality of control features by rotating the user's head. The user may activate one or more of the plurality of control features in different ways. [0043] In an exemplary embodiment, the user may bring the desired control feature into a predefined viewing area. In this case, when a control feature is displayed to a user, a static target area may be displayed to the user. The static target area may be static to the viewing space and not relative to any specific field of view. For example, a target area may be displayed at the center of the user's viewing area. The user may activate an item by bringing the control feature into the target viewing area. The user may activate the item by hovering or bringing the control feature into the target viewing area for a predetermined amount of time. The target viewing area may be identified to a user with a virtual overlay displayed to the user. The target viewing area may not be visually identified to a user. The target viewing area may be identified to a user with a virtual overlay displayed to the user when a control feature is in proximity to a target viewing area, but otherwise the target viewing area may not be visually identified to a user when a control feature is not within a viewing area of the user or when a control feature is not within a predefined proximate distance to the target viewing area.
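A minimal sketch of the dwell-based activation just described follows; it is not the patent's implementation, and the class name and the 1.5 second dwell threshold are illustrative assumptions.

```python
# Sketch: activate a control feature after it stays inside the target viewing
# area for a predetermined amount of time. Times are in seconds.
class DwellActivator:
    def __init__(self, dwell_seconds=1.5):
        self.dwell_seconds = dwell_seconds
        self.entered_at = None

    def update(self, feature_in_target_area, now):
        """Call once per frame; returns True on the frame the dwell completes."""
        if not feature_in_target_area:
            self.entered_at = None          # leaving the area cancels the selection
            return False
        if self.entered_at is None:
            self.entered_at = now           # feature just entered the target area
            return False
        if now - self.entered_at >= self.dwell_seconds:
            self.entered_at = None          # fire once, then reset
            return True
        return False

activator = DwellActivator()
timeline = [(0.0, True), (0.5, True), (1.0, True), (1.6, True)]
for t, inside in timeline:
    if activator.update(inside, t):
        print(f"activated at t={t}s")
```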
[0044] In an exemplary embodiment, in order to select, modify, interact, control, manipulate, adjust, launch, or otherwise engage (hereinafter referred to collectively as activate) a control feature, a user can make a selection through the movement of the user's head or eyes. The selection through user visual direction (through either eye direction or head direction) may be used in various combinations as described herein.
[0045] In an exemplary embodiment, a selection to activate a control feature may include first bringing the control feature to a predefined portion of the user's field of view. For example, a user may identify a plurality of available control features. The user may then move such that a specific control feature is brought into a specific location of the viewing area. For example, the control feature may be brought into the center of the user's field of view or the center of the viewing area. Once at the predefined portion of the viewing area, the system may activate the control feature based on the movement of the user. For example, once a specific control feature is within the predefined portion of the viewing area, the user may activate the control feature (such as to launch an application associated with the control feature) by then looking or moving the head in a first predefined direction or may cancel or not activate the control feature by looking or moving the head in a second predefined direction different from the first direction. The selection of the control feature based on movement may occur within the lapse of a predetermined amount of time. [0046] In an exemplary embodiment, the user may look further upward to activate the control feature, thereby bringing the object out of the target viewing area in a defined direction. If the user looks up, the control feature would leave the lower portion of the target viewing area. If the control feature leaves the lower portion of the target viewing area, or if the system detects the head or eye movement in an upward direction, the system may then activate the control feature. Conversely, if the user looks downward and the control feature leaves the target viewing area on an upward perimeter of the area, then the control feature may not be activated. Any predefined combination of first and second directions may be used to indicate the selection or non-selection of the control feature.
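The sketch below illustrates one way the directional activate/cancel decision could be expressed, under the assumed convention described above (looking further upward activates, looking downward cancels); the function name and the 10 degree threshold are illustrative, not from the source.

```python
# Sketch: once a control feature is centered in the target viewing area, the
# next head movement past a small angular threshold decides whether the
# feature is activated or dismissed.
def resolve_selection(pitch_delta_deg, threshold_deg=10.0):
    """pitch_delta_deg is head pitch change since the feature entered the target area."""
    if pitch_delta_deg >= threshold_deg:
        return "activate"   # user looked further upward; feature exits bottom of target area
    if pitch_delta_deg <= -threshold_deg:
        return "cancel"     # user looked downward; feature exits top of target area
    return "pending"        # no decisive movement yet

print(resolve_selection(12.0))   # activate
print(resolve_selection(-12.0))  # cancel
print(resolve_selection(3.0))    # pending
```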
[0047] In an exemplary embodiment, different methods may be used to distinguish an intention to activate or cancel a control feature versus an intention to move the position of the displayed virtual objects. For example, the speed of the user's motion of either the head or the eye(s) may be used to distinguish an intention to activate a control feature versus an intention to move the positional display of the virtual objects. A specific speed may be used to identify a purposeful selection and separate the movement from natural movement or a desire to not make a selection. The position of the virtual object within the target viewing area may also be used to distinguish an intention to activate a control feature versus the intention to move a position of the virtual object within the display. For example, once a control feature is within a target viewing area, the control feature is selected to either activate or cancel. Any further action of the user is for either activating or cancelling the control feature. The specific direction of motion after a control feature is within the target viewing area may also identify intention. For example, once the control feature is in the target viewing area, further motion in a specific direction may activate the control feature, while all other directions would continue to simply control the positional display of the virtual objects overlaid on the perception of the physical environment.
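As a sketch of the speed-based discrimination (not the disclosed algorithm; the 60 degrees-per-second threshold is an illustrative assumption), angular speed over a frame interval can separate a deliberate selection gesture from natural head motion:

```python
# Sketch: classify head motion as a purposeful selection or an ordinary
# reposition based on angular speed.
def classify_motion(angle_deg_start, angle_deg_end, dt_seconds, speed_threshold=60.0):
    speed = abs(angle_deg_end - angle_deg_start) / dt_seconds
    return "selection" if speed >= speed_threshold else "reposition"

print(classify_motion(0.0, 8.0, 0.1))   # 80 deg/s -> selection
print(classify_motion(0.0, 8.0, 0.5))   # 16 deg/s -> reposition
```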
[0048] In an exemplary embodiment, the target viewing area may be displayed to a user. The target viewing area may be displayed before, after, and/or while a control feature is within the target viewing area, may be displayed after or while a control feature is within a viewing area, at all times, or never. For example, a circle may be displayed to a user to indicate the target viewing area. In an exemplary embodiment, the predefined direction may also be visually indicated. For example, a perimeter or direction of the target viewing area may indicate the direction to activate the control feature. For example, a portion of the perimeter of the circle may change colors and/or be accompanied by words, colors, arrows, or other indicators to inform a user of a direction to activate the control feature. The system may display any number of selection options associated with a predefined direction that a user may select simply by looking or turning in that direction.
[0049] In an exemplary embodiment, once a control feature is brought within the target viewing area, additional control features are presented to the user. The additional control feature(s) may be positioned around the control feature such that each additional control feature corresponds to a unique direction relative to the control feature. Therefore, the additional control features may be directionally arranged around the original control feature. A user may thereafter select an additional control feature by moving the head or eyes toward (essentially looking toward) the desired additional control feature. In an exemplary embodiment, the additional control features may be indications of intent to activate and not to activate an associated option/application of the control feature.
[0050] In an exemplary embodiment, the user may use another input device to activate a control feature. For example, a controller, such as a remote, button, joystick, keyboard, mouse, etc. may be used to provide an input to the system. The system may recognize a selection when a predefined combination of inputs is detected. For example, the system may detect one control feature in the field of view and a control input, such as a push of a control button, to determine a selection was made for the one control feature in the field of view. For example, the system may detect one control feature in a target location within the field of view and a control input, such as a push of a button, to determine a selection was made for the control feature in the target location. Other control features may be used, such as recognition of a pointer, object, hand, etc. in the field of view aligned with the control feature. For example, a user may physically select a control feature by pointing to its virtual location represented in physical space. The system may receive location information of the object, such as by using a camera, detect the object as a control selection object, and determine that the control selection object is pointing at a specific control feature. Control selection objects may be specifically defined and recognized objects or may be any object detected as proximate the physical location of the virtually represented control feature. Other gestures of the object may also be used, such as swiping across, circling, or otherwise indicating a selection. [0051] In an exemplary embodiment, the input device may be used to bring control features into view. For example, virtual control features may be tagged to a user's head or the headset. When a user indicates an input, such as through the input device, the control features may translate, rotate, or otherwise appear in the field of view of the user. In this case, the virtual objects are displayed to the user based on an initial user input from a user. In this case, the control features are dropped or otherwise moved from their resting position tagged to the user's head into an active position for display and use by the user.
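The sketch below shows one way the combined-input selection could be resolved each frame: a control feature is selected when it sits in the target location at the moment a controller button press arrives, or when a recognized pointing object is aligned with it. The structure and field names are assumptions for illustration, not from the source.

```python
# Sketch: per-frame selection logic combining gaze targeting with an external
# control input or a recognized pointing gesture.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FrameState:
    feature_in_target: Optional[str]   # feature currently in the target location, if any
    button_pressed: bool               # external controller input this frame
    pointed_feature: Optional[str]     # feature a recognized pointer/hand aligns with, if any

def select_feature(state: FrameState):
    if state.button_pressed and state.feature_in_target:
        return state.feature_in_target      # gaze target plus control input
    if state.pointed_feature:
        return state.pointed_feature        # physical pointing at the virtual location
    return None

print(select_feature(FrameState("open_document", True, None)))   # open_document
print(select_feature(FrameState(None, True, None)))              # None
print(select_feature(FrameState(None, False, "settings")))       # settings
```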
[0052] In an exemplary embodiment, the user can select or define viewing areas in which to display virtual objects. The system may be configured to launch or display virtual objects in relation to a position of a control feature that launched or is associated with a virtual object. For example, in an exemplary embodiment, a control feature may be pinned to a user's head, such that a control feature is displayed as a user looks up. The user may therefore define a rotation position in which to activate a control feature. The associated virtual object related to the control feature can thereafter be displayed in relation to the activated control feature. The associated virtual object may therefore be displayed to a user below the control feature that launched the virtual object. The user may therefore rotationally position the virtual object by first rotating to a desired position before activating a control feature.
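Under the assumed geometry described above (a fixed pitch drop between the activated control feature and the launched object; the 30 degree value and function name are illustrative), the spawn position of the launched virtual object can be derived directly from the rotational position at which the control feature was activated:

```python
# Sketch: place the launched virtual object directly below the rotational
# position at which its control feature was activated, so the user chooses
# where the object appears by turning before activating.
def launch_position(activated_yaw_deg, activated_pitch_deg, pitch_drop_deg=30.0):
    """Return the (yaw, pitch) at which to display the launched virtual object."""
    return activated_yaw_deg, activated_pitch_deg - pitch_drop_deg

# Feature activated straight ahead and above the reference view: the document
# opens in the reference viewing area.
print(launch_position(0.0, 30.0))      # (0.0, 0.0)
# Feature activated after turning 40 degrees to the right: the document opens
# in that rotated viewing area instead.
print(launch_position(40.0, 30.0))     # (40.0, 0.0)
```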
[0053] Exemplary embodiments may therefore be used to display reference material to a user in a field of view outside of a working field of view of a user. In this application, the user may be working on an object in a working field of view. The user may wish to display reference material, such as building instructions, user guide, or other information in another field of view outside of the working field of view. In this way, the user may simply look to the left or right to retrieve the desired information without obstructing the field of view in which the user is working. The user may therefore look back into the working field of view to remove the virtual objects from their viewing area.
[0054] FIGS. 4A-4Q illustrate an exemplary stepwise view of using an exemplary control system according to embodiments described herein. FIG. 3 illustrates an exemplary background that a user can perceive in a physical environment across multiple fields of view, creating a panorama view of a physical environment. The dotted lines are representative of a physical environment and illustrate various straight and curved lines to orient the different views of a user. The boxes illustrate exemplary viewing areas of a user based on different fields of view. For example, box A may be taken as the normal viewing area or reference viewing area. Box B would then represent a viewing area available when a user looks up from the field of view encompassing the reference viewing area. Box C would be a viewing area when a user turns to the side or the right from the normal viewing area, and Box D would be a viewing area when a user looks up from the Box C viewing area. FIG. 3 is provided simply to illustrate different background environmental areas associated with different viewing directions that may be used to identify and control the control features described in FIGS. 4A-4Q.
[0055] FIG. 4A illustrates an exemplary physical environment as seen through a viewing area of an augmented reality headset according to embodiments described herein. FIG. 4A represents the physical environment in dashed lines; the box is an exemplary viewing area in which virtual objects may be superimposed over the physical environment. As illustrated in FIG. 4A, there are no virtual objects superimposed in the viewing area. FIG. 4A may represent a reference viewing area. The reference viewing area may be associated with a normal viewing position of the user. The normal viewing position may be defined by the user looking forward with a level head in a first direction.
[0056] FIG. 4B illustrates a transition as the user looks upward from the reference viewing area of FIG. 4A. As the user looks up, a control feature 22A appears in the viewing area and is brought down into the user's viewing area corresponding with the direction the user is looking. As the user continues to look up, the control feature 22A is brought down into the viewing area. As illustrated in the exemplary embodiment of FIG. 4B, when the control feature 22A is displayed to a user, such that the virtual object appears in the viewing area and is overlaid in the field of view of the user, the system may also display other control features. For example, a target viewing area may be displayed, or instructions on how to navigate or launch a control feature may be displayed.
[0057] FIG. 4C illustrates the viewing area B of FIG. 3 with a control feature displayed. The control feature has moved to the center of the viewing area based on the movement, direction, orientation, or other input of the system or user as described herein. As seen in FIG. 4C a target viewing area may be displayed to indicate that a control feature is in an area to launch or select the control feature. The illustrated target viewing area 42 may be animated, change, or otherwise provide an indication that a selection is being made. For example, when the control feature is brought within the target viewing area for a preselected amount of time, the control feature may be launched or selected. A representation of the target viewing area may therefore change size, shape, color, move, or otherwise indicate that the preselected amount of time is running to make a selection. The user may then cancel the selection by moving the control feature out of the target viewing area before the conclusion of the preselected amount of time. The representation of the target viewing area may be any representation including a shape, area, dot, etc.
[0058] FIG. 4D illustrates the transition from FIG. 4C to FIG. 4E. The user has selected the control feature of FIG. 4C and launched the associated application. As illustrated, the control feature is a launch document feature that, when selected, opens a document. As seen in FIG. 4D, once the control feature is launched, the control feature may change. As illustrated, once the document is open, the control feature changes from the open document control feature to a close document control feature. Therefore, once a control feature is activated, a corresponding counter control feature may replace the activated control feature to undo the action of the activated control feature. As seen in FIG. 4D, as the user moves the field of view downward toward the reference viewing area, the opened document may be displayed or brought into the viewing area of the system.
[0059] FIG. 4E illustrates the viewing area of FIG. 4A with a virtual object 44A overlaid in the physical field of view of the user. The virtual object 44A may include one or more control features 22C, 22D as well. For example, as illustrated, the virtual object 44A is a document opened by an open document control feature. The additional control features may therefore be a move to sequential pages control feature 22C or a jump to a selected or specific page control feature 22D. The user may select one or more of the control features by moving the control feature into the target viewing area. As provided in this example, the target viewing area is the center of the viewing area.
[0060] FIG. 4F illustrates an example of the effect of the virtual object within the viewing area as the user looks in a different direction. FIG. 4F illustrates a transition from viewing area A to viewing area C. FIG. 4F also illustrates an exemplary target viewing area 42. The target viewing area may always be displayed or may only be displayed as a control feature is brought within a predefined range of the viewing area. The predefined range may be that the control feature overlaps or is fully within the viewing area. The representation of the viewing area may change based on the proximity of a control feature. For example, the target viewing area may be represented by a smaller, more transparent, less obtrusive color, or combinations thereof when the control feature is outside of or away from the target viewing area. The control area may be represented by a larger, more visible, different color, or combinations thereof when the control feature is within or proximate to the target viewing area.
[0061] FIG. 4G illustrates the transition after FIG. 4F once a user has selected the sequential page control feature of FIG. 4F. As illustrated, the virtual object has changed to the next page of the document.
[0062] FIG. 4H illustrates the virtual object overlaid on the physical object as the user looks in a different direction such as toward the jump to a selected or specific page control. FIG. 4H illustrates an exemplary target viewing area 42. The representation of the target viewing area may depend on the control feature within or proximate to the target viewing area. For example, the jump to a selected or specific page control includes a plurality of control features in close proximity. The displayed target viewing area may therefore be represented by a smaller area in order to delineate and accurately select one control feature from an adjacent control feature. However, when the selection is a turn the page control feature, the target viewing area may be larger or less accurately defined.
[0063] FIG. 4I illustrates the control feature brought into the viewing area when a user looks up from FIG. 4H. As illustrated in FIG. 4D, the control feature may be a control feature to undo the activated control feature. As illustrated, the control feature is that to close a document. FIG. 4K illustrates the control feature as displayed to a user once the user has selected the control feature from FIG. 4J. Just as the control feature changed when the control feature is activated to the converse of the control feature (open to close, launch to terminate, enlarge to shrink, etc.), once the converse of the control feature is selected, the original control feature may be displayed. In an exemplary embodiment, the control feature is static and does not change. In an exemplary embodiment, the control feature may disappear once activated, where the control feature cannot be activated more than once.
[0064] FIG. 4L illustrates the field of view of the user after selecting the control feature of FIG. 4J. The selected control feature of FIG. 4J closed the document and therefore takes the virtual object out of the viewing area of the user. The FIG. 4L viewing area includes that to the right of the reference viewing area, or down and to the right of the control feature viewing area of FIG. 4J. From the position of FIG. 4L, the user can then look up and see the same control feature orientation. Therefore, even if rotated from the original reference viewing area, the control features may be displayed relative to the user's head. In this way, the user can look up from any orientation and launch the control feature system. If more than one control feature is available, the user may then look to one side or other or further up or down to bring additional control features into view or into a target viewing area according to embodiments described herein.
[0065] FIG. 4N illustrates an exemplary virtual overlay once the user has selected the control feature of FIG. 4M. As shown, the control feature of FIG. 4M includes the open document control feature, which launches a document. As illustrated, the launched document is opened below the control feature that launched the document. Therefore, the document appears in the target viewing area rotated from the reference viewing area since the control feature launching the document was activated when the user was rotationally oriented with respect to the reference viewing area. In this way, the user may define viewing areas to display virtual objects in relation to the user and/or the user's environment. FIG. 4O illustrates an exemplary representation of the reference viewing area without a displayed or overlaid virtual object, as the virtual object was launched rotationally out of the field of view of the reference viewing area. FIG. 4P illustrates the display of the control feature from any viewing area. Therefore, FIG. 4P illustrates, again, the control feature displayed in relation to a user's head once the user looks up from the FIG. 4O position. The user can then select the control feature of FIG. 4P, and close the document. FIG. 4Q therefore illustrates the viewing area of FIG. 4N without the displayed virtual object.
[0066] Exemplary systems described herein include display and control systems for use in a three dimensional display or virtual environment. Exemplary embodiments include control features rotationally and positionally tracked to create unique and engaging interfaces. In an exemplary embodiment, a control feature is rotationally tracked, such as pinned to a virtual reality or augmented reality headset, and then selected, modified, interacted with, controlled, manipulated, adjusted, engaged (collectively referred to herein as activated) through use of a positionally tracked object.
[0067] Exemplary embodiments therefore include a transition between different tracked reference frames for the same virtual control feature. In an exemplary embodiment, a first reference frame is relative to the headset, and a second reference frame is relative to a physically tracked object. However, any combination of different reference frames may be used, such as pinned to a physical environment, pinned to a physical object, pinned to the headset, etc. Exemplary embodiments include selection, orientation, viewing, and presentation of one or more control features controlled by relative movement of the target object relative to the field of view, headset, another tracked object, a tracked world/reference system, or other visual or non-visual tracking systems.
[0068] As used herein, a reference frame is any reference system for orienting, positioning, or otherwise locating an object. The reference frame does not necessitate a full six degrees of freedom, but may include any combination of position and orientation, such as linear distance in one, two, or three dimensions, and/or rotational orientation about one, two, or three axes. For example, a reference frame includes a point object in which a two dimensional translational distance can be measured. A reference frame also includes more complex relationships in a one, two, or three dimensional reference system including both translational and rotational positioning and orientation.
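The sketch below illustrates the reference-frame notion in code: a frame is a combination of translation and rotation, and a control feature's pose can be re-expressed when it is handed from one frame (such as the headset) to another (such as a tracked physical object). Planar (x, z, yaw) poses are an illustrative simplification; a full implementation would use 3D transforms, and the class and values shown are assumptions.

```python
# Sketch: a minimal planar reference frame with world<->local conversions,
# used to re-express a control feature's position when its frame changes.
import math
from dataclasses import dataclass

@dataclass
class Frame2D:
    x: float
    z: float
    yaw_deg: float

    def to_world(self, local_x, local_z):
        """Express a point given in this frame in world coordinates."""
        yaw = math.radians(self.yaw_deg)
        wx = self.x + local_x * math.cos(yaw) - local_z * math.sin(yaw)
        wz = self.z + local_x * math.sin(yaw) + local_z * math.cos(yaw)
        return wx, wz

    def to_local(self, world_x, world_z):
        """Express a world point in this frame's coordinates."""
        yaw = math.radians(self.yaw_deg)
        dx, dz = world_x - self.x, world_z - self.z
        lx = dx * math.cos(yaw) + dz * math.sin(yaw)
        lz = -dx * math.sin(yaw) + dz * math.cos(yaw)
        return lx, lz

headset = Frame2D(0.0, 0.0, 0.0)
tracked_object = Frame2D(0.5, 1.0, 90.0)

# A control feature pinned 1 m in front of the headset...
world = headset.to_world(0.0, 1.0)
# ...re-expressed in the tracked object's frame when the handoff occurs.
print(tracked_object.to_local(*world))
```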
[0069] An exemplary embodiment is configured to define and/or display to a user one or more control features in a location relative to a given reference frame. An exemplary reference frame may be one, two, or three dimensional. For example, as described above in an exemplary embodiment of the control features above the normal viewing area of a user, the user simply looks up to bring the control features into the viewing area. The display of the control features may or may not depend on the rotational orientation of the user from the normal viewing area. For example, as illustrated in FIGS. 4J and 4M, the same control features are displayed regardless of whether the user looks up from the normal viewing area A or from a rotated viewing area C. Therefore, the control features are present in a static location relative to a one dimensional reference frame (i.e. up and down). However, exemplary embodiments include the display of the control features dependent on the rotational orientation of the user relative to the normal viewing area. In this case, the user can change which control features are presented both by looking up to bring the control features into view and by rotating to look toward different control features, such that the control features are in a static location relative to a two dimensional reference frame.
[0070] Exemplary embodiments also include transitioning a virtual object from the first reference frame to a second reference frame to activate the control feature. Any combination of tracking control features to a given reference frame, including the first and second reference frames, may be used with embodiments of the present disclosure. [0071] In an exemplary embodiment a virtual control feature is provided to a user in a virtual or augmented reality environment, where the virtual control feature is tracked relative to a first reference frame. The first reference frame may be to the headset creating the virtual world. The control feature(s) and reference frame may be those as described herein, such as those of FIGS. 2-4. Therefore, in one embodiment, a menu of control features may appear in a 360 degree or partial 360 degree arrangement at any height or distance around the user. Other control features, and their location, orientation, presentation, etc. are also contemplated herein. For example, instead of a halo or look up control feature arrangement, a shelf or grid arrangement may be provided to one or more lateral sides of the viewer's field of view. Therefore, a plurality of control features may appear in a shelf-like arrangement.
[0072] Regardless of the arrangement, including the position, orientation, and alignment of control features, in an exemplary embodiment, the control features are pinned to a first reference frame. For example, in an embodiment the control features are in reference to the headset, such that their position is rotationally tracked as the user moves their head rotationally. In an exemplary embodiment, the control features are fixed in reference to the viewing area, such that their position stays in the same place as viewed by the user regardless of the movement of the user's head or environment. Exemplary embodiments include a combination of reference systems for virtual objects, such that control features described herein may be in reference to the first reference frame, while other virtual objects and/or control features creating the virtual world may be in relation to another reference frame, such as the physical environment. Therefore, an immersive environment may be created with respect to the augmented or virtual world, while providing a set of control features, such as for controlling the system, that are in reference to another reference frame, such as the headset or the field of view of the user for easy identification, viewing, retrieval, etc. For example, a menu system may easily be "found" simply by looking in a predefined direction, such as up, to activate the virtual settings or environment of the AR or VR system.
[0073] In an exemplary embodiment, in order to activate a control feature, a user can change the reference frame from a first reference frame to a second reference frame. In an exemplary embodiment, the second reference frame may be to a positionally tracked real-world object, such as a target object. Exemplary embodiments of what can be done with control features once they are tracked to a target object are described herein. [0074] In an exemplary embodiment, the user changes the control feature's reference frame by bringing the target object near the perceived physical location of the displayed virtual control feature and attaching the virtual control feature to the physical target object. The relative position and/or orientation of the control feature may thereafter be maintained relative to the physical target object. For example, a globe may be the control feature. A target physical object may be positioned as perceived by a user under the globe. The association of the target physical object to the virtual globe object may be made. The globe may then be brought closer to the user and rotated to bring the different sides of the globe into the field of view of the user by moving the physical target object closer to the user and rotating the physical target object. The transfer or attachment of the control feature to the target physical object may present a different virtual object/display to the user. For example, the control feature may be displayed as an icon for a map when tracked to the first reference frame. Once the icon is attached to the physical target object and the reference frame of the control feature changes to the second reference frame, the map application is launched and the virtual object attached to the target physical object is not the icon but is the launched application including a representation of a map of an area. The control feature therefore need not maintain a common or the same appearance to a user when transitioning or in a first reference frame or a second reference frame.
[0075] The association of the target physical object to the virtual control feature or the change of the first reference frame to the second reference frame may occur through different control mechanisms.
[0076] For example, a user may move the physical target object to a predefined relative location to the virtual control feature and maintain the relative position thereto for a predetermined amount of time. After the predetermined amount of time has lapsed, then the virtual object attaches to the physical target object, and the reference frame handoff occurs. In an exemplary embodiment, an indication of the time association may be provided to a user. Therefore, before an association is made, a visual or audio indicator may be provided to the user. For example, a visual signal may be shown to the user indicating the amount of time before the transition between reference frames, or the association to the physical target object is made. For example, an audio signal such as a beep or music may sound before or when an association is made. [0077] Similar to the target viewing area described above with respect to FIGS. 2-4, a target viewing area may be associated with each control feature. Instead of using the direction of the head and/or eye to position the control feature within the target viewing area, a physical target object may be brought to or within the target viewing area to indicate a transfer between reference frames for the selected control feature. The target viewing area may be displayed to a user or not. The target viewing area may provide an indication of the time elapse. The target viewing area may be presented at different times or in different ways depending on the proximity of the physical target object to the target viewing area. The virtual environment may change or provide an indicator that a physical target object is within a target viewing area associated with a control object. The association of the physical target object to the control feature may be through a time elapse of the physical target object within the target viewing area for a predetermined amount of time. The association of the physical target object to the control feature may be by a directional navigation of the physical target object with respect to the control feature and/or in or out of the target viewing area associated with the control feature.
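A minimal sketch of this timed handoff follows; it is not the disclosed implementation, and the class name, the 1.0 second hold time, and the use of a 0-to-1 progress value (which a renderer could map to the visual or audio indicator described above) are illustrative assumptions.

```python
# Sketch: hand a control feature off from the headset reference frame to a
# physical target object after the object has been held near the feature for
# a predetermined time, reporting progress toward the handoff each frame.
class ReferenceFrameHandoff:
    def __init__(self, hold_seconds=1.0):
        self.hold_seconds = hold_seconds
        self.near_since = None
        self.attached = False

    def update(self, target_near_feature, now):
        """Returns progress toward the handoff in [0, 1]; 1.0 means attached."""
        if self.attached:
            return 1.0
        if not target_near_feature:
            self.near_since = None          # moving the object away resets the timer
            return 0.0
        if self.near_since is None:
            self.near_since = now
        progress = min((now - self.near_since) / self.hold_seconds, 1.0)
        if progress >= 1.0:
            self.attached = True            # control feature now pinned to the target object
        return progress

handoff = ReferenceFrameHandoff()
for t in (0.0, 0.4, 0.8, 1.1):
    print(t, round(handoff.update(True, t), 2))
```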
[0078] In an exemplary embodiment, a user may move the physical target object adjacent or near a virtual control feature and provide an external input to the system such as a spoken command, a control input, or other indicator that an association should be made. A control input may be a push of a button, swipe, or other gesture on an input device, or any other input. The control input may be associated with or on the physical target object or may be separate therefrom. For example, the physical target object may include a button or physical space that, when pushed (or obstructed) as perceived by the VR or AR system, a selection is made. Separate controlling devices may also be used.
[0079] FIG. 5 illustrates an exemplary display of control features within a viewing area of a user superimposed over a physical environment. The dashed lines indicate features of the physical environment, while the solid lines indicate virtual objects superimposed in the perception of the viewer over the physical environment. The control features 22 may be positioned in reference to a first reference frame. For example, the icons about a user's head may be used. A reference system may be to the target viewing area such that the control features are displayed regardless of the viewing area and/or direction of the user relative to the physical environment. A reference system may also be with respect to the user and/or headset, such that the control features are brought into and out of the viewing area by rotational or directional orientation of the headset or the user's body or headset.
[0080] FIG. 6A illustrates an exemplary embodiment in which a control feature is selected from among the plurality of control features and pinned to a target physical object. The selected control feature 22E transitions from a first reference frame to a second reference frame. The second reference frame is that of the physical target object 62. The transition from the first reference frame to the second reference frame may be through any mechanism described herein, such as through an elapse of a predetermined amount of time with the physical target object within a target viewing area associated with the control feature, the presence of the physical target object in the viewing area while a control feature is within the viewing area, an input into the system to indicate the transition, the directional, orientational, rotational, or other intentional manipulation of the target physical object or other object within the detection of the headset system, and combinations thereof.
[0081] As illustrated in FIG. 6A, the target physical object is brought in proximity to the control feature and within the target viewing area. The control feature to be selected 22E may change appearance when the target physical object is brought within the associated target viewing area. For example, the shading, color, size, or other feature of the control feature may change when the physical target object is within the target viewing area of the control feature. Other indications of the association or potential association may also be provided to the user. For example, a virtual object or indicator may appear to show the physical target object is within the target viewing area, an indication of the target viewing area, a time elapse, or another indicator may show the transition from the first reference frame to a second reference frame in progress, etc.
[0082] FIG. 6B illustrates the control feature 22E pinned to the physical target object 62. Once the selected control feature transitions to the physical target object, the manipulation of the physical target object may activate the control feature. For example, translation and/or rotation of the physical target object may also translate and/or rotate the selected control feature. As shown in FIG. 6B, the associated control feature is moved from its original position with reference to the first reference frame and is moved within the viewing area by its association to the physical target object. FIG. 6B illustrates the association of one of the control features with the target object, such that the control feature has been removed from the first reference frame (rotationally tracked to the headset) and pinned to a second reference frame (the positionally tracked target object).
[0083] FIGS. 7-8 illustrate exemplary associations between the physical target object and the corresponding control feature. As seen in FIG. 7, the rotation of the physical control object may be used to enlarge or decrease a size of the control feature. As seen in FIG. 8, the rotation of the physical control object may rotate the selected control feature.
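As a sketch of these associations (the source does not specify which rotation axes map to which effect, so the roll-to-scale and yaw-to-rotation mapping, the gain, and the function name are assumptions), rotation of the physical control object can be mapped to the size of the attached control feature and to its rotation:

```python
# Sketch: map rotation of the physical control object to the attached control
# feature's scale (compare FIG. 7) and rotation (compare FIG. 8).
def apply_target_rotation(feature_scale, feature_yaw_deg,
                          object_roll_delta_deg, object_yaw_delta_deg,
                          scale_gain=0.01):
    # Rolling the target object grows or shrinks the feature.
    new_scale = max(0.1, feature_scale * (1.0 + scale_gain * object_roll_delta_deg))
    # Yawing the target object rotates the feature by the same amount.
    new_yaw = (feature_yaw_deg + object_yaw_delta_deg) % 360.0
    return new_scale, new_yaw

print(apply_target_rotation(1.0, 0.0, object_roll_delta_deg=30.0, object_yaw_delta_deg=0.0))
print(apply_target_rotation(1.0, 0.0, object_roll_delta_deg=0.0, object_yaw_delta_deg=45.0))
```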
[0084] Exemplary embodiments of a physical target object include any physical object that may be recognized by the system. In an exemplary embodiment, the target object defines a non-symmetric, identifiable image. Therefore, the system may recognize the object as a target physical object and may also recognize and/or detect the orientation, rotation, translation, and other manipulation of the target object in the physical space.
[0085] Exemplary embodiments include methods of controlling a virtual reality or augmented reality system. The method may include providing one or more virtual control features within a field of view of a user, where the one or more virtual control features are pinned to a first reference frame. The first reference frame may be a rotational reference frame relative to the headset. A user may thereafter move a physical target object to a predefined position relative to one of the virtual control features. When the physical target object is within a field of view of a camera of the system, a virtual representation or target location may be virtually represented in the user's field of view. This virtual representation may be used to select one of the virtual control features. The user may thereafter move the physical target object in proximity to one of the virtual control features. When the physical target object is in a predefined relative location to the control feature, the virtual control feature may be removed from the first reference frame and pinned to a second reference frame. The user may thereafter move the physical target object to move the control feature.
[0086] In an exemplary embodiment, when the physical target object is in a predefined relative location to the control feature, the virtual control feature may be removed from the first reference frame and pinned to a second reference frame. In an exemplary embodiment, the virtual representation of the physical target object or the virtual target location may indicate when the physical target object is in proximity to the virtual control feature. For example, the virtual representation or virtual target location may change colors, shapes, size, etc. An audible indication may also or alternatively be provided. As shown in the above figures, the virtual representation is a first color when the physical target object is recognized by the system. The virtual representation is a second color when the transition from the first reference frame to the second reference frame is about to take place. The system may gradually transition from the first color to the second color, such that the user may identify when the transition may be complete. The system may also stepwise transition between the first color and second color, such as to act as a count down (for example by transitioning from yellow, to orange, to red in a predefined amount of time).
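The sketch below shows a stepwise color countdown of the kind described above; the specific colors, step count, and function name are illustrative assumptions rather than the disclosed scheme.

```python
# Sketch: step the virtual representation from a first color toward a second
# color as the handoff timer runs, so the user can tell when the transition
# between reference frames is about to complete.
def countdown_color(elapsed, total, steps=("yellow", "orange", "red")):
    if elapsed >= total:
        return steps[-1]
    index = int(len(steps) * elapsed / total)
    return steps[index]

for t in (0.0, 0.4, 0.8, 1.0):
    print(t, countdown_color(t, total=1.0))
```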
[0087] In an exemplary embodiment, once a selected control feature attaches to the target object and the control feature is removed from its pinned location in the first reference frame, the remaining control features may reposition relative to each other to close the space left by the removed control feature. For example, additional control features can appear on the side and virtually appear to push the existing control features to fill the gap. In an exemplary embodiment, another control feature not previously present may take the place of the removed control feature.
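A minimal sketch, assuming a simple slot-based layout, of how the remaining control features could reposition to close the gap left by a removed feature; the dictionary-based feature records and the easing constant are illustrative only and not part of the disclosure.

```python
# Hedged sketch: slots are re-assigned to the remaining features and each
# feature eases toward its new slot so the reflow appears continuous.
def close_gap(features, removed, slot_positions):
    """Re-assign slot positions after `removed` leaves the first reference frame."""
    remaining = [f for f in features if f is not removed]
    for feature, slot in zip(remaining, slot_positions):
        feature["goal"] = slot          # animate toward the new slot each frame
    return remaining

def ease_toward_goal(feature, dt, speed=4.0):
    """Simple per-frame easing of a feature's position toward its goal slot."""
    x, y, z = feature["pos"]
    gx, gy, gz = feature["goal"]
    k = min(speed * dt, 1.0)
    feature["pos"] = (x + k * (gx - x), y + k * (gy - y), z + k * (gz - z))
```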
[0088] In an exemplary embodiment, once pinned to the target object, a user can manipulate the control feature by moving the target object relative to the headset. For example, additional control features may be viewed or the control feature reoriented by movement of the target object. As a first example, a first set of control features may be presented when the target object is detected in a first orientation, and a second set of control features may be presented when the target object is detected in a second orientation different from the first orientation. The first set and second set of control features may be the same, may be entirely different, or may partially overlap in objects or features. As a second example, a static set of control features may be displayed to a user when a target object is detected. The control features may be oriented relative to the target object, such that control features may be moved relative to the user by moving the target object. For example, rotating the target object may rotate the set of control features so that different perspectives or groups of control features are more easily identified by the user, brought into view, or moved out of view.
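For illustration, the orientation-dependent selection of control sets could be reduced to a look-up keyed on the recognized object and a coarse orientation bucket, as in the following sketch; the example sets and the 30-degree threshold are assumptions.

```python
# Illustrative sketch: choosing which set of control features to show based on
# the tracked object's detected orientation. Orientation is bucketed so small
# pose jitter does not flip between sets. The sets below are invented examples.
import math

CONTROL_SETS = {
    ("album", "face_up"): ["track_list"],
    ("album", "upright"): ["related_songs", "same_artist"],
}

def orientation_bucket(pitch_rad):
    """Coarsely classify the object's orientation from its pitch angle."""
    return "face_up" if abs(pitch_rad) < math.radians(30) else "upright"

def control_set_for(object_id, pitch_rad):
    """Return the control features to display for this object and orientation."""
    return CONTROL_SETS.get((object_id, orientation_bucket(pitch_rad)), [])

# e.g. control_set_for("album", math.radians(80)) -> ["related_songs", "same_artist"]
```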
[0089] In an exemplary embodiment, the attached control feature may be removed, replaced, or repositioned relative to the control features not attached to the target object (i.e., the remaining control features). For example, the attached control feature may be moved by moving the target object, while the remaining control features stay pinned to their original reference frame and location. The target object may be moved such that it appears the attached control feature is moved away from the remaining control features. In an exemplary embodiment, the remaining control features may move or reposition to fill the void or space previously occupied by the selected control feature in the reference frame of the physical target object. The user may move the selected control feature near the remaining control features. In an exemplary embodiment, the remaining control features may move or reposition to create a void or space to accommodate the selected control feature. The system may receive an input, such as the elapse of a predetermined amount of time, a control input such as an electronic input from a button push (for example), or another signal to the system, to change the reference frame. The reference frame may change between the rotational reference frame and the target object reference frame. For example, the attached control feature may be released from the target object by detaching the control feature from the target object and thereafter attaching it to the rotational reference frame. The control feature may therefore transition from the second reference frame back to the first reference frame.
[0090] In an exemplary embodiment, the user may activate a plurality of control features by rotating, translating, or otherwise physically moving the physical target object within the field of view of the headset or otherwise within a space detectable by the headset or in communication with the headset.
[0091] An exemplary embodiment is configured to define and/or display to a user one or more control features in a static location relative to a target object (also referred to herein as a tracked object). In an exemplary embodiment, the tracked object is a recognized object such that the dimensions, orientation, or other spatial characteristics of the object are known to the system. For example, a tracked object may include a cardboard sheet having a predefined, non-symmetric image printed thereon. Exemplary embodiments may therefore provide any combination of control features, including, for example, one or more icons, objects, text, colors, menus, lists, displays, controls, commands, etc. as virtual objects used for display, navigation, control, or other features that are at an edge of, above, adjacent to, proximate, or otherwise positioned relative to the tracked object and/or the user's field of view. The tracked object may be any physical object. In an exemplary embodiment, the tracked object is a recognizable, non-symmetric, physical object having known dimensions.
[0092] Exemplary embodiments of the headset system described herein may associate one or more control features to a target object. FIG. 9 illustrates an exemplary tracked object 92, and a plurality of control features 22F-22G fixed relative to the tracked object. As shown, the plurality of control features are above the target object in a static or predefined orientation, location, and distance from the tracked object. The plurality of control features may be anywhere relative to the tracked object. FIG. 10 illustrates the static association of the plurality of control features to the tracked object, such that rotation of the tracked object similarly rotates the control features. In an exemplary embodiment, a control feature may be produced such that its relative location to the user may be changed. For example, the control feature may be presented to have a depth variation that may change as the user interacts with the virtual object. In an exemplary embodiment, depth may be determined based on a detected physically tracked object. In an exemplary embodiment, depth may be changed based on an input from an outside controller, such as a remote control.
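The static association of control features to a tracked object, as in FIGS. 9-10, can be sketched as fixed local offsets composed with the object's tracked pose each frame; the offsets, spacing, and matrix convention below are assumptions, not part of the disclosure.

```python
# Minimal sketch of keeping a row of control features at a static offset above
# a tracked object: each feature stores a fixed local transform, and its world
# pose is recomputed from the object's tracked pose every frame, so rotating
# the object rotates the features with it. Values are illustrative.
import numpy as np

def local_offset(dx, dy, dz):
    t = np.eye(4)
    t[:3, 3] = [dx, dy, dz]
    return t

# Three features in a row, 12 cm above the tracked sheet (assumed spacing).
FEATURE_OFFSETS = [local_offset(x, 0.12, 0.0) for x in (-0.08, 0.0, 0.08)]

def feature_world_poses(object_pose_world):
    """object_pose_world: 4x4 pose of the tracked object from the CV component."""
    return [object_pose_world @ offset for offset in FEATURE_OFFSETS]
```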
[0093] Exemplary embodiments described herein include associating one or more control features to a tracked object. The one or more control features may be displayed to the user relative to the tracked object once the tracked object is identified by the system. The tracked object may be identified by the system through one or more input sensors, such as visual detection from a camera of the headset system viewing the target object, or signal (such as infrared, Bluetooth, short wave, radio frequency, acoustic frequency, etc.) detection, through a sensor in communication with the headset system, of an output signal from the target object. The headset system may include a database or other look-up system for associating an input to a tracked object, such that one or more attributes (such as dimension, size, shape, color, associated control features, etc.) of the tracked object may be known to the system.
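A hedged sketch of such a look-up: a small in-memory library maps a recognized marker identifier to known attributes of the tracked object. The entries shown are invented examples, not data from the disclosure.

```python
# Illustrative look-up table associating a detected marker ID with known
# attributes of the tracked object (dimensions, associated control features).
TRACKED_OBJECT_LIBRARY = {
    "marker_042": {
        "name": "record_album",
        "size_m": (0.31, 0.31),                  # known physical dimensions
        "control_features": ["track_list", "related_songs"],
    },
    "marker_007": {
        "name": "cardboard_tracker",
        "size_m": (0.15, 0.15),
        "control_features": ["volume", "brightness"],
    },
}

def attributes_for(marker_id):
    """Return known attributes of a detected tracked object, or None."""
    return TRACKED_OBJECT_LIBRARY.get(marker_id)
```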
[0094] Exemplary embodiments described herein include displaying or presenting one or more control features relative to a tracked object based on the position, orientation, identification, and combinations thereof of the tracked object. For example, a first tracked object different from a second tracked object may have a first set of control features associated with it and displayed to a user when the first tracked object is detected by the headset system and within the viewing area of a user. The second tracked object may have a different set of control features associated with it and displayed to the user when the second tracked object is detected by the headset system and within the viewing area. The different sets of control features associated with the tracked objects may include overlapping control features. A first tracked object may have associated with it a plurality of control features. The display of one or more of the plurality of control features may depend on the orientation of the tracked object detected by the headset system. For example, a first set of control features may be presented to a user with respect to a first tracked object in a first orientation, and a second set of control features different from the first set of control features may be presented to a user with respect to the same first tracked object in a second orientation different from the first orientation. The second orientation may be rotated from the first orientation.
[0095] FIGS. 11A-11B illustrate an exemplary application in which a single tracked object may have different control features associated with it that are displayed depending on the orientation of the tracked object relative to the headset system.
[0096] FIG. 11A illustrates an exemplary tracked object 92A and associated control feature 22H. The displayed control feature 22H is displayed based on the detected and recognized tracked object 92A and the orientation of the tracked object 92A. As shown, a specific record album is defined as the tracked object 92A. The orientation of the tracked item on its side relative to the headset system indicates a first control feature to display to the user. As shown, the track list associated with the album is displayed to the user.
[0097] FIG. 11B illustrates an exemplary tracked object 92B in another orientation and thereby displaying a different associated control feature. The displayed control feature 22I is displayed based on the detected and recognized tracked object 92A and the orientation of the tracked object 92A. In this case, the album of FIG. 11A is rotated. The associated control feature 22I is therefore different than that of FIG. 11A as the detected orientation of the tracked object is different. As shown, the same album is displayed but rotated 90 degrees from that of FIG. 11A. The control features associated with the tracked object this time illustrate songs related to the album. The related songs may be displayed as different albums or songs by the same artist, in the same genre, etc.
[0098] Once a desired control feature is displayed to a user, the user may then activate the control feature similar to embodiments described herein by positioning the tracked object and/or virtual control feature within a target viewing area of the headset system. As shown in FIG. 11B, a target viewing area 42 may be displayed to the user. The target viewing area 42 may be represented in any way as described herein. The target viewing area may be displayed when a control feature or target object is brought in proximity to the target viewing area. The representation or other virtual display of the system may change or be displayed based on the proximity to a target viewing area or the elapsed time within or proximate to a target viewing area. Similar to features described herein, the target object may be manipulated or moved to bring an associated control feature 22I within a target viewing area 42 to activate an individual control feature.
[0099] FIGS. 11A-11B illustrate exemplary views of using a physical object to generate and activate a virtual object. The physical object may be used to generate a specific virtual image, select, move, rotate, alter, modify, interact, control, manipulate, adjust, engage or otherwise activate the virtual image, or combinations thereof. In an exemplary embodiment, a plurality of different sets of virtual objects is associated with a physical object. The system selects which set of the plurality of different sets of virtual objects to display to a user based on the physical object, including its position, orientation, direction, and combinations thereof in three dimensional space in a specific reference frame. As shown by a comparison of FIGS. 11A and 11B, different virtual control features are displayed depending on the orientation (rotation) of the target object.
[00100] In an exemplary embodiment, the generated virtual object may depend on movement of the target object. For example, a first virtual object may be displayed to a user once a physical object is recognized. A user may thereafter cycle through or view different virtual control features by rotating, translating, or otherwise moving the physical target object. For example, regardless of the orientation of the album when brought into the field of view, the virtual control feature first displayed to a user may be the album's track list. The system may thereafter recognize that the album is rotated, and other virtual control features, such as related songs, etc., are sequentially displayed to the user.
[00101] In an exemplary embodiment, a user engages or views a plurality of control features by bringing a predefined and/or recognizable physical object into the field of detection (view) of the headset. Exemplary embodiments permit a user to manipulate the plurality of control features by moving the target object relative to the headset. For example, additional control features may be viewed or the control feature reoriented by movement of the target object. As a first example, a first set of control features may be presented when the target object is detected in a first orientation, and a second set of control features may be presented when the target object is detected in a second orientation different from the first orientation. The first set and second set of control features may be the same, may be entirely different, or may partially overlap in objects or features.
[00102] In an exemplary embodiment, the control feature may be positioned in a three dimensional position based on a hierarchy relative to the target object and/or user's field of view.
[00103] In an exemplary embodiment, the control feature may be viewed or displayed by looking in a predefined direction relative to the target object. The control features may be viewed by keeping the headset stationary and moving the physical target object, by keeping the physical target object stationary and moving the headset, or combinations thereof, such that the relative motion between the headset and the physical target object is in the predefined direction. Therefore, a physical object may be manipulated while the user's physical field of view is maintained in order to bring control features into a field of view.
[00104] In an exemplary embodiment, the system is configured to receive images or input corresponding to one or more objects, and recognizes target objects from the received images or input. The system may, for example, receive an image through the camera, and detect objects within the image. In an exemplary embodiment, the system may comprise a library of target objects. In an exemplary embodiment, the objects are not symmetric such that an orientation of the object may be determined. However, in exemplary embodiments, symmetric objects may also be used. In an exemplary embodiment, the library may associate one or more control features to the target object. In an exemplary embodiment, the library may associate one or more orientations of the target object with one or more control features. The library may associate different orientations of the same target object with one or more control features that may be the same or different. The library may associate a presentation of the one or more control features relative to an orientation of the target object, such that a configuration of the control feature is virtually displayed as static relative to the physical target object. In an exemplary embodiment, the library may contain dimensional information of the target object, such that a relative distance may be determined and the size, orientation, or selection may be determined based on the relative distance of the target object. In an exemplary embodiment, the library may be a database in communication with the headset.
[00105] In an exemplary embodiment, a target object may be used to determine which plurality of control features to display to a user. For example, a system may be configured with an input device, such as an optical device for receiving an image. The system may be configured to recognize a target object within the received image. The system may be configured to look up a set of one or more control features to display to the user based on the target object. The system may also store, and be configured to look up, a scheme for selecting from a plurality of different control features associated with the target object. The system may store and be configured to select a specific control feature based on the orientation of the target object or the relative movements of the target object. The control feature may be oriented or displayed in a given relative position to or from the target object.
[00106] In an exemplary embodiment, the system may be configured to track the target object in order to activate the control feature. For example, the control features may be virtually displayed such that the illusion is created that the control feature is at a static orientation or configuration relative to the target object. The system may be configured to detect and track motion of the target object. In an exemplary embodiment, a selection or interaction with the control feature may occur by rotation, translation, or other manipulation of the target object. For the example above where the control feature is the available tracks of an album, the user may select a given track from the list by translating the target object up or down or by rotating the target object about an axis in plane with the album. In another example of the album used as a target object to control the system, a user may make a selection by moving the target object in a predefined direction. For example, the control feature may be information of a track, a song, or a display interface representing a playback system, where the user may translate the album (target object) in any predefined direction to control the play features of the track list associated with the album. A user may lift the album upward to pause the playback, may translate to one side to skip forward, may translate to another side to skip backward, etc.
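One possible mapping of album translations to playback commands is sketched below; the displacement thresholds and command names are assumptions for illustration only.

```python
# Illustrative sketch of mapping translations of the target object (the album)
# to playback commands: lift to pause, move to one side to skip forward, move
# to the other side to skip back. Threshold and command names are assumed.
GESTURE_THRESHOLD_M = 0.10   # minimum displacement to count as a gesture

def playback_command(displacement):
    """displacement: (dx, dy, dz) of the target object since the gesture began."""
    dx, dy, dz = displacement
    if dy > GESTURE_THRESHOLD_M:
        return "pause"            # lifted upward
    if dx > GESTURE_THRESHOLD_M:
        return "skip_forward"     # translated to one side
    if dx < -GESTURE_THRESHOLD_M:
        return "skip_backward"    # translated to the other side
    return None                   # no gesture recognized yet
```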
[00107] In an exemplary embodiment, different orientations of the target object may present independent and separate control features. For example, a target object in a first configuration may present a first set of control features. The target object may be rotated 90 or 180 degrees, such that an orthogonal orientation is recognized by the system. This orientation of the target object may indicate a second set of control features to be displayed to the user. The first and second set of control features may be independent, such as relating to different control functions and/or relating to different information presented to the user.
[00108] For example, the movement of the target, such as the rotation or translation of the physical target object, may advance a counter. The counter may be any incremented or decremented value stored in code or displayed to a user. For example, the rotation of the physical target object may increment or decrement a counter such that each turn increases or decreases a value. Exemplary values may include, but are not limited to, those associated with volume, zoom, opacity, brightness, game levels, etc. The increment or decrement may be based on the relative rotation, such that one rotational direction increments and the opposite rotational direction decrements the counter. In an exemplary embodiment, a counter may be incremented/decremented based on partial rotation of the target physical object. In an exemplary embodiment, a counter may be incremented/decremented past a full 360 degree rotation of the target physical object such that it continuously counts past sequential revolutions. Exemplary embodiments may reset a counter at a full rotation of the target physical object.
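The rotation-driven counter could be sketched as an accumulator over unwrapped yaw readings of the tracked object, as below; the scale factor (units per revolution) is an assumption, not a parameter from the disclosure.

```python
# Hedged sketch of a rotation-driven counter: successive yaw readings of the
# tracked object are unwrapped so the count keeps accumulating past a full
# 360-degree turn, and the sign of the rotation increments or decrements a
# value such as volume or zoom.
import math

class RotationCounter:
    def __init__(self, units_per_revolution=10.0):
        self.prev_yaw = None
        self.value = 0.0
        self.units_per_revolution = units_per_revolution

    def update(self, yaw_rad):
        """Call with the object's latest yaw; returns the running counter."""
        if self.prev_yaw is not None:
            delta = yaw_rad - self.prev_yaw
            # unwrap so a crossing from +pi to -pi counts as continuous rotation
            delta = (delta + math.pi) % (2 * math.pi) - math.pi
            self.value += self.units_per_revolution * delta / (2 * math.pi)
        self.prev_yaw = yaw_rad
        return self.value
```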
[00109] Exemplary embodiments may include one or a plurality of physical targets that may work alone or in tandem to activate a control feature, code, a display, or other virtual, electronic, digital, or physical object of the virtual reality or augmented reality environment or internet of things devices.
[00110] For example, two trackers may be used in relation to each other to increment or decrement a counter, similar to the rotational control described above with respect to a single counter. Therefore, a greater relative separation between two tracker objects may increase the counter, while a closer relative separation distance decrements the counter. In an exemplary configuration, the relative position and orientation of two physical tracker objects may be used. The relationship may, for example, permit a user to adjust a first feature of the system by the relative distance between the target objects, and a second feature of the system by the relative orientation between the target objects. Any combination of position, display, orientation, etc. may be used to select, modify, interact, control, manipulate, adjust or otherwise engage features of the system. For example, the relative distance between two target objects may act to zoom a virtual object, the relative position may act to orient or rotate a virtual object, and the relative rotational orientation may act to select a virtual object. Exemplary embodiments also include a single target object, such as a tracker, used in relation to another virtual or physical object. The other object can be pinned to a different reference frame and not to the tracker object. Therefore, for example, a virtual object may be pinned to the physical environment, and a relative distance, position, orientation, or combinations thereof may be determined between the target object, such as the tracker, and the virtual object pinned to the physical environment.
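A minimal sketch, assuming each tracker reports a position and a yaw angle, of how two trackers could drive independent features (separation for zoom, relative yaw for rotation); the base separation constant is illustrative.

```python
# Illustrative sketch of two tracked objects controlling independent features:
# their separation drives a zoom factor and their relative yaw drives a
# rotation applied to a virtual object. Constants and keys are assumptions.
import numpy as np

def two_tracker_controls(pose_a, pose_b, base_separation_m=0.20):
    """pose_a, pose_b: dicts with 'position' (3-vector) and 'yaw' (radians)."""
    separation = np.linalg.norm(np.asarray(pose_a["position"]) -
                                np.asarray(pose_b["position"]))
    zoom = separation / base_separation_m          # farther apart -> larger
    relative_yaw = pose_b["yaw"] - pose_a["yaw"]   # rotates the virtual object
    return {"zoom": zoom, "rotation_rad": relative_yaw}
```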
[00111] In an exemplary embodiment, the one or more target objects may be virtual objects, physical objects, or combinations thereof. For example, the target objects may be virtual objects displayed on a physical screen and identified by the system. In an exemplary embodiment, the one or more target objects are displayed and manipulated through a touch interface of a control device, such as a phone or tablet.
[00112] In an exemplary embodiment, the user may bring the desired control feature into a predefined viewing area. In this case, when a control feature is displayed to a user, a static target area may also be displayed to the user. The static target area may be static to the viewing space and not relative to any specific field of view. For example, a target area may be displayed at the center of the user's viewing area. The user may select an item by bringing the control feature into the target viewing area. The user may select the item by hovering or bringing the control feature and/or target object into the target viewing area for a
predetermined amount of time. The target viewing area may be identified to a user with a virtual overlay displayed to the user. The target viewing area may not be visually identified to a user. An indication may be given to a user that a control feature is within the target viewing area, such as by changing the appearance of the control feature, such as its size, shape, color, etc. Other indicators may also be provided, such as sound or other virtually displayed indicators that a control feature is in the target viewing area. For example, a counter may be virtually displayed to indicate a time until a control feature is selected if it remains in the target area for the remaining counter time.
[00113] In an exemplary embodiment, a user may orient the physical object to control or select the control feature. For example, the physical object may be rotated such that the desired control feature is perceived to be the closest object relative to the user. The user may use other gestures or inputs to indicate a selection of this object. For example, the user may move the target object closer to or farther away from the user such that the object may be selected by a virtual push or pull of the object. Other input devices may also be used to make a selection as described herein.
[00114] In an exemplary embodiment, the user may use another input device to select a control feature. For example, a controller, such as a remote, button, joystick, keyboard, mouse, etc. may be used to provide an input to the system. The system may recognize a selection when a predefined combination of inputs is detected. For example, the system may detect one control feature in the field of view, or determine a control feature perceived to be the closest relative to the user, and a control input, such as a push of a control button, to determine a selection was made for the one control feature or perceived nearest control feature in the field of view. For example, the system may detect one control feature in a target location within the field of view and a control input, such as a push of a button, to determine a selection was made for the control feature in the target location. Other control inputs may be used, such as recognition of a pointer, object, hand, etc. in the field of view. For example, a user may physically select a control feature by pointing to its virtual location represented in physical space with a control object. The system may receive location information of the control object, such as by using a camera, detect the control object as a control selection object, and determine that the control selection object is pointing at a specific control feature as perceived by the user in the augmented reality environment.
Control selection objects may be specifically defined and recognized objects or may be any object detected as proximate the physical location of the virtually represented control feature. Other gestures of the object may also be used, such as swiping across, circling, or otherwise indicating a selection with the target object or with another object relative to the control feature and/or perceived location of the control feature.
[00115] Exemplary embodiments and description herein with respect to the tracked object may be applied to the transition between reference frames described herein and vice versa. For example, different target objects may be associated with different control features. Once a target object is brought into a viewing area, the associated control features may be displayed. One or more of the control features of the target object may be positioned in a target viewing area to transition the reference frame of the selected control feature from the target object to the headset system. Accordingly, exemplary embodiments described herein may be used to configure the augmented reality headset system and the associated control features available to a user. Conversely, control features according to any embodiment described herein may be associated with a target object according to embodiments described herein. Accordingly, a user may make their own library of control features that can store and display information to a user by selecting and bringing a specific target object within the field of detection of the headset system.
[00116] Different control systems are described herein including displaying control features relative to a reference frame. The reference frame may be relative to a user's head in a given position, relative to the headset, relative to a viewing area, in one, two, or three dimensions. The reference frame may be relative to a target object.
[00117] Exemplary embodiments may transition between different reference frames for controlling a virtual control feature. In an exemplary embodiment, a virtual control feature may be positioned in a first reference frame and then transitioned into a second reference frame. In an exemplary embodiment, the first reference frame may provide three dimensional tracking, and thereby permit the virtual control feature to be adjusted based on a perceived depth. In an exemplary embodiment, the second reference frame may provide two dimensional tracking, and thereby not permit the virtual control feature to be adjusted based on a perceived depth. In this case, the virtual control feature may be moved in the two dimensional space in front of the user, but maintain the same size/dimension to represent the same apparent depth. The first and second reference frames may be switched, such that the system transitions from the second reference frame to the first reference frame. In an exemplary embodiment, the perceived dimension of the virtual control feature in the second reference frame may remain static or be determined based on the last perceived dimension of the virtual control feature when attached to the first reference frame.
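A hedged sketch of the hand-off between a depth-tracked (three dimensional) frame and a head-locked (two dimensional) frame, in which the last tracked depth is frozen so the feature keeps the same apparent size; the class and method names are assumptions.

```python
# Illustrative sketch: while pinned to the positionally tracked frame the
# feature's rendered size follows its true depth; after the hand-off to the
# 2D (head-locked) frame, the last apparent depth is frozen so the feature
# keeps the same on-screen size.
class DepthCarryingFeature:
    def __init__(self):
        self.frame = "3d"          # '3d' = positional frame, '2d' = head-locked
        self.frozen_depth_m = None

    def apparent_depth(self, tracked_depth_m):
        if self.frame == "3d":
            return tracked_depth_m                 # depth updates with tracking
        return self.frozen_depth_m                 # held constant in the 2D frame

    def hand_off_to_2d(self, last_tracked_depth_m):
        self.frame = "2d"
        self.frozen_depth_m = last_tracked_depth_m  # keep the same apparent size
```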
[00118] For example, a virtual control feature may be positioned relative to a recognized physical target object. However, when the recognized physical target object is removed from the detected field of view of the system, the virtual control feature may remain in the user's field of view and transition to a static position relative to the user's field of view. Therefore, the representation of the virtual control feature may transition from a reference frame pinned to the recognized physical object to a new reference frame pinned to the headset or physical environment. The new reference frame may also be pinned to another physical object in the recognized field of view.
[00119] In an exemplary embodiment, when the recognized physical target object is reintroduced into the detected field of view, the virtual control feature may reattach to the recognized physical object. The virtual control feature may have a reassigned position/orientation relative to the physical object, or the virtual control feature may move to the position/orientation at which it was pinned relative to the physical object before the physical object was removed from the detected field of view. The virtual control feature may therefore appear to jump or teleport to a new location from the new reference frame back to the original reference frame.
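The fallback and reattachment behavior of the two preceding paragraphs could be expressed as a small per-frame state update, as in this illustrative sketch; the dictionary-based feature record and frame names are assumptions.

```python
# Hedged sketch of the fallback behavior: when the recognized target object
# leaves the detected field of view, the feature re-pins to a head-locked
# frame at its last on-screen position; when the object is detected again,
# the feature re-pins to it (appearing to jump back).
def update_reference_frame(feature, object_detected, object_pose, headset_pose):
    if object_detected and feature["frame"] != "object":
        feature["frame"] = "object"            # reattach to the tracked object
        feature["anchor_pose"] = object_pose
    elif not object_detected and feature["frame"] == "object":
        feature["frame"] = "headset"           # fall back to a head-locked frame
        feature["anchor_pose"] = headset_pose  # keeps it static in the view
    return feature
```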
[00120] For example, a virtual control feature may be positioned relative to a headset or head orientation of the user. However, a physical target object may be used to transition the virtual control feature's reference frame to that controlled by the physical object. The physical object may be a recognized physical tracker or a remote controller.
[00121] In an exemplary embodiment, a hierarchy or multiplicity of virtual control feature representations may be used in relation to different reference frames. For example, a first set of one or more virtual control features may be positioned relative to a user's viewing position (such as the system described with respect to FIGS. 2-4 herein), a second set of one or more virtual control features may be paired or moved based on a physical object (such as a physical tracked object described with respect to FIGS. 9-10 or a remote control), and a third set of one or more virtual control features may be paired or moved based on the headset (such as a static position within the user's field of view). Virtual control features may be moved among the first, second, and third sets, and any combination thereof, by inputs to the system.
[00122] Exemplary control features include, for example, virtually displayed objects, icons, menus, text, lists, objects, pictures, applications, shortcuts, etc. Exemplary control features may be used to create or define menus such as system menus (wifi, airplane mode, brightness, universal music controls), application-specific menus (similar to the file/edit/view/help top menu in a traditional Windows/Mac program), or contextual menus (specifically relating to objects the user is currently interacting with, whether physical or virtual). Control features may be any virtual object provided to the user in response to a system input and/or any virtual object used to change (such as control or manipulate) the virtual experience. In an exemplary embodiment, a set of control features is displayed to a user based on the target object and/or an orientation of the target object. In an exemplary embodiment, a set of control features is displayed to a user based on a user configuration, an identified user, an available set of applications, a launched application, etc.
[00123] Although embodiments of the invention may be described and illustrated herein in terms of augmented reality systems, it should be understood that embodiments of this invention are not so limited, but are additionally applicable to virtual reality systems.
Features of the system may also be applicable to any head mounted system. Exemplary embodiments may also include any combination of features as described herein. Therefore, any combination of described features, components, or elements may be used and still fall within the scope of the instant description.
[00124] In an exemplary embodiment, the software is configured to retrieve images from the forward facing camera. The received images may be adjacent, proximate to, or overlap a physical field of view of the user. The software may be configured to use the received images to determine placement of virtual objects, determine which virtual objects to use, determine a size, location, or orientation of the virtual objects, recognize objects within a field of view of a user, determine movement of the headset, track objects and correspondingly place virtual objects, and combinations thereof.
[00125] Exemplary embodiments may therefore include a smartphone having a front display screen and a front facing camera. The smartphone may include a processor for executing software and memory for storing software in a non-transitory, machine readable medium. Exemplary embodiments include software that, when executed by the processor, performs any combination of features described herein. Exemplary embodiments are described in terms of smartphones, which provide the display and processing power for the exemplary embodiments. However, any mobile electronic device may be used. For example, dedicated electronic devices, tablets, phablets, gaming consoles, miniature televisions, smart displays, or other electronic displays may be used. Exemplary embodiments also encompass displays having remote processing power such that the execution of the methods described herein may be performed at the electronic device, remote from the electronic device and communicated to the electronic device, or combinations thereof. Preferably, the electronic device includes a front facing camera for receiving images to perform functions described herein.
[00126] Exemplary aspects of the software supporting the augmented reality system may include a computer vision component, relational positioning between the position determined by the computer vision component and the software rendering cameras, stereoscopic rendering in stereoscopic embodiments, counter-distortion shaders, and combinations thereof.
[00127] Exemplary embodiments of the computer vision component may process a real-time video stream from the front-facing camera to determine the headset's position in the physical world. The computer vision component may allow for "six degree of freedom" positional tracking. In one embodiment, the computer vision component tracks preprogrammed markers that may include two-dimensional images or three-dimensional objects. In this embodiment, the computer vision component may be able to identify a single marker individually, multiple markers independently, or multiple markers simultaneously. In one embodiment, the computer vision component tracks environmental features without global mapping. In one embodiment, the computer vision component uses Simultaneous Localization and Mapping (SLAM) techniques to build and reference a closed-loop global map from environmental features. In one embodiment, the computer vision component is implemented by plugging in a pre-existing computer vision, augmented reality tracking, or SLAM library. In one embodiment, the front-facing camera feed is pre-undistorted before being fed into the computer vision component in order to improve the quality of the tracking or mapping.
[00128] In an exemplary embodiment, the computer vision component may produce an x, y, z, pitch, yaw, roll coordinate in a coordinate system pre-defined by the implementation. In an exemplary embodiment, the computer vision component may be configured such that references are from the origin point and the component outputs a displacement vector of any identified markers from the origin point.
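For illustration only, the six degree of freedom output and the displacement-vector convention described above could take a form such as the following sketch; the data structure and field names are assumptions rather than the component's actual interface.

```python
# Hedged sketch of the coordinate output described above: the computer vision
# component yields a 6-DoF pose (x, y, z, pitch, yaw, roll), and marker
# positions are reported as displacement vectors from a pre-defined origin.
from dataclasses import dataclass

@dataclass
class Pose6DoF:
    x: float      # position in the implementation's coordinate system
    y: float
    z: float
    pitch: float  # orientation in radians
    yaw: float
    roll: float

def displacement_from_origin(marker_pose, origin_pose):
    """Displacement vector of an identified marker from the origin point."""
    return (marker_pose.x - origin_pose.x,
            marker_pose.y - origin_pose.y,
            marker_pose.z - origin_pose.z)
```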
[00129] Implementations of embodiments described herein may have a pre-calculated positional relationship between the front-facing camera and the virtual camera or cameras that produce output such that virtual objects appear in the correct position when displayed on the smartphone screen and combined through the optical element with the physical world. The method for calculating this positional relationship may depend on the design of the optical element. In an embodiment producing a stereoscopic image, the pre-calculated positional relationship may provide positions for two software cameras to produce separate imagery for left and right eyes.
[00130] In an exemplary embodiment, the software methods are implemented in a game engine. In this embodiment, these methods may be distributed as part of a software development kit (SDK) to allow developers to create apps integrating these methods without having to implement these methods themselves.
[00131] Exemplary embodiments of a physical object such as the target object described herein may be a remote controller having a track pad, buttons, push button, throttle, paddle button, touch pad, or other input, and any combination thereof. The remote controller may also include motion detection such as an accelerometer, GPS, and other sensors for detecting and determining motion. The remote controller may be used to activate a virtual object by entering an input through one or more of the inputs, such as pushing a zoom button to bring a virtual object closer or a retreat button to send a virtual object further away. Other control configurations may also be used, such as moving the controller to create corresponding motions in the virtual object.
[00132] Although embodiments of this invention have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. Such changes and modifications are to be understood as being included within the scope of the present disclosure as defined by the appended claims. Specifically, exemplary components are described herein. Any combination of these components may be used in any combination. For example, any component, feature, step or part may be integrated, separated, sub-divided, removed, duplicated, added, or used in any combination with any other component, feature, step or part or itself and remain within the scope of the present disclosure. Embodiments are exemplary only, and provide an illustrative combination of features, but are not limited thereto.
[00133] When used in this specification and claims, the terms "comprises" and
"comprising" and variations thereof mean that the specified features, steps or integers are included. The terms are not to be interpreted to exclude the presence of other features, steps or components.
[00134] The features disclosed in the foregoing description, or the following claims, or the accompanying drawings, expressed in their specific forms or in terms of a means for performing the disclosed function, or a method or process for attaining the disclosed result, as appropriate, may, separately, or in any combination of such features, be used for realising the invention in diverse forms thereof.

Claims

CLAIMS The invention claimed is:
1. A method of controlling a virtual reality or augmented reality system, comprising: displaying one or more virtual control features to a user.
2. The method of claim 1, further comprising: defining a normal field of view of the user; and displaying the one or more control features to the user when the user looks in a predefined direction defining a menu field of view relative to the normal field of view of the user.
3. The method of claim 2, wherein the menu field of view is outside the normal field of view.
4. The method of claim 3, wherein the displaying occurs when a user looks upward such that the menu field of view is above the normal field of view.
5. The method of claim 4, wherein the displaying includes displaying a plurality of discrete virtual objects in a position around the user's head.
6. The method of claim 5, wherein the displaying includes displaying a static order of the discrete virtual objects in the position around the user's head regardless of the rotational orientation of the user before the user looks up.
7. The method of claim 1 , further comprising: recognizing the virtual control feature to be in a specific location within a viewing area; detecting a motion of the user's head or eyes indicating a relative direction from the specific location within the user's field of view; and performing a function based on the direction of movement.
8. The method of claim 7, further comprising indicating to the user a plurality of additional options uniquely directionally positioned around the virtual control feature once the virtual control feature is within the specific location.
9. The method of claim 8, wherein a specific function is performed based on the direction of movement relative to the uniquely directionally positioned additional options.
10. The method of claim 1, wherein the displaying comprises positioning the one or more virtual control features relative to a first reference frame; and the method further comprises: positioning a physical target object proximate a perceived position of a select virtual control feature from the one or more virtual control features;
changing a reference frame of the select virtual control feature from the first reference frame to a second reference frame.
11. The method of claim 10, wherein the first reference frame is relative to a headset of the system.
12. The method of claim 11, wherein the second reference frame is relative to the physical target object.
13. The method of claim 12, wherein the transitioning occurs after the physical target object is maintained in a static proximate position relative to the select virtual control feature for a predetermined amount of time.
14. The method of claim 13, wherein after the reference frame of the select virtual control feature is changed to the second reference frame, a remaining set of the plurality of virtual control features reposition to close a space left by a removal of the select virtual object.
15. The method of claim 14, wherein a program is launched after the reference frame of the select virtual control feature is changed to the physical target object.
16. The method of claim 1 further comprising:
positioning a target object in a field of detection of the system,
wherein displaying the one or more control features occurs after the positioning of the target object.
17. The method of claim 16, wherein the displayed control features are based on the target object and a position or orientation of the target object.
18. The method of claim 16, further comprising activating the one or more control features through physical manipulation of the target object.
19. The method of claim 16, further comprising changing the displaying of the first one or more control features to a second one or more control features different from the first one or more control features by changing the orientation of the target object.
20. The method of claim 1, further comprising
wherein the displaying one or more virtual control features to the user is with respect to a first reference frame being a positional reference frame associated with a physical object; transitioning a reference frame of the one or more virtual control features from the first
reference frame to a second reference frame when the physical object is removed from a field of detection of the system, the second reference frame being a rotational reference frame.
PCT/US2018/030279 2017-04-28 2018-04-30 Control system for a three dimensional environment WO2018201150A1 (en)

Applications Claiming Priority (10)

Application Number Priority Date Filing Date Title
US201762492033P 2017-04-28 2017-04-28
US201762491942P 2017-04-28 2017-04-28
US201762491997P 2017-04-28 2017-04-28
US201762492051P 2017-04-28 2017-04-28
US62/492,033 2017-04-28
US62/491,942 2017-04-28
US62/491,997 2017-04-28
US62/492,051 2017-04-28
US201762533614P 2017-07-17 2017-07-17
US62/533,614 2017-07-17

Publications (1)

Publication Number Publication Date
WO2018201150A1 true WO2018201150A1 (en) 2018-11-01

Family

ID=63919318

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2018/030279 WO2018201150A1 (en) 2017-04-28 2018-04-30 Control system for a three dimensional environment

Country Status (1)

Country Link
WO (1) WO2018201150A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113424142A (en) * 2019-02-11 2021-09-21 三星电子株式会社 Electronic device for providing augmented reality user interface and method of operating the same

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150153913A1 (en) * 2013-12-01 2015-06-04 Apx Labs, Llc Systems and methods for interacting with a virtual menu
WO2015192117A1 (en) * 2014-06-14 2015-12-17 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
US9628783B2 (en) * 2013-03-15 2017-04-18 University Of Southern California Method for interacting with virtual environment using stereoscope attached to computing device and modifying view of virtual environment based on user input in order to be displayed on portion of display

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9628783B2 (en) * 2013-03-15 2017-04-18 University Of Southern California Method for interacting with virtual environment using stereoscope attached to computing device and modifying view of virtual environment based on user input in order to be displayed on portion of display
US20150153913A1 (en) * 2013-12-01 2015-06-04 Apx Labs, Llc Systems and methods for interacting with a virtual menu
WO2015192117A1 (en) * 2014-06-14 2015-12-17 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113424142A (en) * 2019-02-11 2021-09-21 三星电子株式会社 Electronic device for providing augmented reality user interface and method of operating the same
EP3884367A4 (en) * 2019-02-11 2022-01-12 Samsung Electronics Co., Ltd. Electronic device for providing augmented reality user interface and operating method thereof
US11538443B2 (en) 2019-02-11 2022-12-27 Samsung Electronics Co., Ltd. Electronic device for providing augmented reality user interface and operating method thereof

Similar Documents

Publication Publication Date Title
US20220121344A1 (en) Methods for interacting with virtual controls and/or an affordance for moving virtual objects in virtual environments
US11221730B2 (en) Input device for VR/AR applications
US20230333378A1 (en) Wristwatch based interface for augmented reality eyewear
US20240094866A1 (en) Devices, Methods, and Graphical User Interfaces for Displaying Applications in Three-Dimensional Environments
CN108780360B (en) Virtual reality navigation
CN110603509B (en) Joint of direct and indirect interactions in a computer-mediated reality environment
EP3437075B1 (en) Virtual object manipulation within physical environment
EP3250983B1 (en) Method and system for receiving gesture input via virtual control objects
US9628783B2 (en) Method for interacting with virtual environment using stereoscope attached to computing device and modifying view of virtual environment based on user input in order to be displayed on portion of display
US9299183B2 (en) Detection of partially obscured objects in three dimensional stereoscopic scenes
US9886102B2 (en) Three dimensional display system and use
CN114721470A (en) Device, method and graphical user interface for interacting with a three-dimensional environment
WO2017120271A1 (en) Apparatuses, methods and systems for application of forces within a 3d virtual environment
US9703400B2 (en) Virtual plane in a stylus based stereoscopic display system
JP2012161604A (en) Spatially-correlated multi-display human-machine interface
WO2013074997A1 (en) Indirect 3d scene positioning control
JP2006301654A (en) Image presentation apparatus
JP6931068B2 (en) Paired local and global user interfaces for an improved augmented reality experience
US20190294314A1 (en) Image display device, image display method, and computer readable recording device
US20220291744A1 (en) Display processing device, display processing method, and recording medium
WO2018201150A1 (en) Control system for a three dimensional environment
GB2533777A (en) Coherent touchless interaction with steroscopic 3D images
KR20240056558A (en) Handcrafted Augmented Reality Experiences
KR20240056555A (en) Handcrafted Augmented Reality Effort Proof
KR20240050437A (en) Augmented Reality Prop Interaction

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18790894

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18790894

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM DATED 19/02/2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18790894

Country of ref document: EP

Kind code of ref document: A1