WO2020113185A1 - Control system for a three dimensional environment - Google Patents

Control system for a three dimensional environment

Info

Publication number
WO2020113185A1
Authority
WO
WIPO (PCT)
Prior art keywords
queue
control signal
determining
dimensional
data points
Prior art date
2018-11-28
Application number
PCT/US2019/063879
Other languages
French (fr)
Inventor
Mike KOZLOWSKI
Original Assignee
Mira Labs, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mira Labs, Inc. filed Critical Mira Labs, Inc.
Publication of WO2020113185A1 publication Critical patent/WO2020113185A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/147Digital output to display device ; Cooperation and interconnection of the display device with other functional units using display panels
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0123Head-up displays characterised by optical features comprising devices increasing the field of view
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/014Head-up displays characterised by optical features comprising information/image processing systems
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2354/00Aspects of interface with display user

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Optics & Photonics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Control systems and methods for determining control signals for use with head mounted display systems. Exemplary embodiments include systems and methods for determining inputs from a user.

Description

CONTROL SYSTEM FOR A THREE DIMENSIONAL ENVIRONMENT
PRIORITY
[0001] This application claims priority to U.S. Application No. 62/772,578, filed
November 28, 2018, which is incorporated by reference in its entirety into this application.
BACKGROUND
[0002] Head Mounted Displays (HMDs) produce images intended to be viewed by a single person in a fixed position related to the display. HMDs may be used for Virtual Reality (VR) or Augmented Reality (AR) experiences. The HMD of a virtual reality experience immerses the user’s entire field of vision and provides no image of the outside world. The HMD of an augmented reality experience renders virtual, or pre-recorded images superimposed on top of the outside world.
[0003] Conventional computer systems use menus to navigate different programs executed on the system. For example, a conventional Macintosh (Mac) based system has a menu bar at a top of a screen that identifies options for controlling functions of the Mac and/or programs running thereon.
[0004] These menu systems are ideal for two dimensional display systems where a majority of the display space is used for the active program, and a small, localized part of the display is used for a navigation system. In this case, the display space is static, defined by the size of the screen. Therefore, the trade-offs between display space for active programs and menus are similarly static. Given the static environment of the two-dimensional display, it is easy to select the most appropriate space for displaying menus that does not interfere or minimally interferes with the usable space of the display.
[0005] Virtual reality and augmented reality systems provide images in a three dimensional, immersive space. Because of the immersive environment, there is not a convenient, dedicated location for a menu or control system to be positioned. For example, if a dedicated portion of the field of view is used for a control system, the virtual objects can overlap real world objects, obstruct the user’s field of view, and interfere with the immersive nature of the system and environment. Therefore, an adaptation or integration of the conventional 2D menu bar or system feels out of place in the virtual, immersive environment and the look detracts from the immersive feel of the experience.
[0006] US Application No. 15/944,711, filed April 3, 2018, is incorporated by reference in its entirety herein, and describes exemplary augmented reality systems in which a planar screen, such as that from a mobile device or mobile phone, is used to generate virtual objects in a user’s field of view by reflecting the screen display on an optical element in front of the user’s eyes. FIG. 1 corresponds to FIG. 1 of the cited application and FIG. 2 corresponds to FIG. 3 of the cited application. FIG. 1 illustrates an exemplary headset for producing an augmented reality environment by reflecting images from a display off an optical element and into the user’s eye to overlay virtual objects within a physical field of view. The exemplary headset 10 of FIG. 1 includes a frame 12 for supporting a mobile device 18 having a display 22, an optical element 14, and a mounting system 16 to attach the display and optical element to the user. FIG. 2 illustrates exemplary light paths from the display screen 22, off the optical element 14, and into a user’s eye.
SUMMARY
[0007] Exemplary systems described herein include control systems for use in a three dimensional display or environment.
[0008] Exemplary embodiments described herein include a number of unique features and components. No single feature or component is considered essential to the invention, and any feature or component may be used in any combination or incorporated on any other device or system. For example, exemplary embodiments described herein are generally described in terms of an augmented reality system, but features and components described herein may be equally applicable to virtual reality systems or other head mounted systems. Accordingly, “headset system” is intended to encompass any head mounted system including, but not limited to, augmented reality and virtual reality systems.
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] FIG. 1 illustrates an exemplary augmented reality headset in an in use configuration on a wearer’s head.
[0010] FIG. 2 illustrates an exemplary reflection of an inserted screen in an augmented reality headset from a lens to a wearer’s eye.
[0011] FIG. 3 illustrates an exemplary mobile device having an accelerometer and gyroscope according to embodiments described herein.
[0012] FIGS. 4-5 illustrate exemplary graphical representations of data sets used in algorithms described herein.
DESCRIPTION
[0013] The following detailed description illustrates by way of example, not by way of limitation, the principles of the invention. This description will clearly enable one skilled in the art to make and use the invention, and describes several embodiments, adaptations, variations, alternatives and uses of the invention, including what is presently believed to be the best mode of carrying out the invention. It should be understood that the drawings are diagrammatic and schematic representations of exemplary embodiments of the invention, and are not limiting of the present invention nor are they necessarily drawn to scale.
[0014] International Publication WO 2018/201150 describes exemplary control systems for a three dimensional environment and is incorporated herein in its entirety.
[0015] Exemplary embodiments of control features may include movement recognition within the headset and/or inserted device within the headset. The recognized movement may be macro movement based on the entire head/headset, or micro movement based on responsive movement when the device is not repositioned/reoriented but contacted in a specific manner.
[0016] Exemplary embodiments described herein include a headset system, having a frame with a compartment configured to support a mobile device, and an optical element coupled to the frame configured to reflect an image displayed on the mobile device. The headset may include an attachment mechanism between the frame and the optical element for removable and/or pivotal attachment of the optical element to the frame.
[0017] In an exemplary embodiment, a mobile device application is installed on the mobile device to receive the accelerometer and/or gyroscope readings of the mobile device. The application may sample and store X, Y, and Z values of both the device’s rotation rate and accelerometer values, or any subset thereof. In an exemplary embodiment, the sample rate is approximately 60 samples per second, but can be any appropriate sample rate to capture a desired gesture or response. In an exemplary embodiment, the sampled data is filtered such that all points above a defined sensitivity threshold may be used and analysed, while all values below the sensitivity threshold are discarded. The remaining filtered data may then be used to determine which motion occurred and a corresponding action associated therewith. For example, peaks of data points may be identified and/or isolated. From the number of peaks within a predefined duration and/or the time lapse between successive peaks, motions can be identified. As another example, motion points can be correlated from three dimensional space to a two-dimensional plane and motions identified therefrom.
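The following Python sketch illustrates the filter-then-classify idea of this paragraph: discard samples below a sensitivity threshold, mark local maxima as peaks, and classify a roughly one-second window by peak count and spacing. The threshold value, the spacing rule, and the label are illustrative assumptions, not values taken from this disclosure.

```python
SENSITIVITY = 0.35  # hypothetical sensitivity threshold (sensor units)

def find_peaks(samples):
    """Keep samples whose magnitude exceeds the sensitivity threshold
    and mark local maxima of the absolute value as peaks (indexes)."""
    mags = [abs(v) for v in samples]
    return [i for i in range(1, len(mags) - 1)
            if mags[i] > SENSITIVITY and mags[i - 1] <= mags[i] >= mags[i + 1]]

def classify_window(samples):
    """Label a ~1 s window by the number of peaks and the time lapse
    (in samples) between successive peaks; the label is a placeholder."""
    peaks = find_peaks(samples)
    gaps = [b - a for a, b in zip(peaks, peaks[1:])]
    if 2 <= len(peaks) <= 5 and all(g >= 6 for g in gaps):
        return "nod_or_shake_candidate"
    return None
```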
[0018] Exemplary embodiments include detecting the motion of the inserted mobile device to correspond with the up-down and/or side to side motion, directional or predefined patterned motion, repeated touch motion, combinations thereof and/or other motion types. The identified motion may then be correlated to specific actions to control the mobile device without using the conventional touch screen interface.
[0019] An exemplary up-down and/or side to side motion detection control scheme may include a system and method for recognizing a nod or shake movement of the head or headset for input in a hands-free augmented or virtual reality environment.
[0020] Exemplary embodiments include a head mounted display system comprising a gyroscope and accelerometer. In an exemplary embodiment, the gyroscope and/or accelerometer are within a mobile device inserted into a frame of the head mounted display system. The gyroscope and/or accelerometer may also be integrated or coupled directly onto or into the head mounted display. Other sensors and/or methods may also be used to detect rotation rate and/or acceleration of the headset.
[0021] FIG. 3 illustrates an exemplary mobile device (such as a smart phone) in an in-headset orientation. As shown, the mobile device may be inserted into the headset on its side, and may be positioned similarly to the configurations of FIGS. 1-2.
[0022] In an exemplary embodiment, the system may be configured to detect and recognize a nod and/or shake motion. If the orientation is that of FIG. 3, the nod gesture may use the Y rotation rate value and the X acceleration value; while the shake gesture may use the X rotation rate value and the Y acceleration value.
[0023] In an exemplary embodiment, each of the accelerometer and/or gyroscope are sampled periodically over a duration. The sampling rate may be approximately 60 times per second, but may be, for example, between 30-200 times per second. The sampled values may be stored in a queue to retain a set number of previous values. In an exemplary embodiment, the sample rate and the queue length may be used to store a desired duration of values. For example, a storage of values over a duration of approximately one second may be used. A storage of values of approximately one second to three seconds may also be used. In an exemplary embodiment, the sample rate may be 60 times per second, with a sample queue of 60 records, for about one second’s worth of values. In an exemplary embodiment, the sample value history may be analysed to make desired determinations.
[0024] In an exemplary embodiment, the system is configured to receive and add sampled values from the accelerometer and/or gyroscope into one or more queues. In an exemplary embodiment, separate queues may be used for each of the acceleration and rotation data sets.
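A minimal sketch of the queue arrangement in the two preceding paragraphs, assuming a 60 Hz sample rate and one queue per data set; `deque(maxlen=...)` drops the oldest record automatically once a second of history is held:

```python
from collections import deque

SAMPLE_RATE_HZ = 60
QUEUE_LEN = SAMPLE_RATE_HZ * 1            # ~one second's worth of values

rotation_queue = deque(maxlen=QUEUE_LEN)  # rotation rate data set
accel_queue = deque(maxlen=QUEUE_LEN)     # acceleration data set

def on_sample(rotation_rate, acceleration):
    """Called at the sample rate; appends push out the oldest sample
    once each queue holds QUEUE_LEN records."""
    rotation_queue.append(rotation_rate)
    accel_queue.append(acceleration)
```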
[0025] In an exemplary embodiment, a small segment of the rotation rate queue is analysed. The least-recent values (i.e., the values from approximately one second before the present time, or the earliest data set in the queue) are averaged. The average may be over 2-10 sample points, for example. If the average is below a stillness threshold, the system determines that a user was still for that fraction of a second. The absence of motion may be used to remove false positives in the analysis of movement. The system therefore may try to identify a duration of relative stillness before determining that an intentional action of nodding or shaking occurred. Therefore, in an exemplary embodiment, the system may negate any determination of a desired motion, including a nod or shake, if a preceding duration of the determined desired motion is not below a stillness threshold. If the average of the least-recent values is below the stillness threshold, then the algorithm can continue.
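A sketch of this stillness pre-check, under assumed names and an assumed threshold; it averages the magnitudes of the oldest few samples in the rotation rate queue and only lets the gesture algorithm continue if the wearer was effectively still:

```python
STILLNESS_THRESHOLD = 0.05  # hypothetical; same units as the rotation rate

def was_still(rotation_queue, n=4):
    """Average the n least-recent (earliest) samples in the queue; a
    sub-threshold average suggests the user was still just before the
    candidate motion, helping to reject false positives."""
    oldest = list(rotation_queue)[:n]
    if len(oldest) < n:
        return False                     # not enough history yet
    return sum(abs(v) for v in oldest) / n < STILLNESS_THRESHOLD
```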
[0026] In an exemplary embodiment, the entire rotation rate queue may be analysed. An exemplary algorithm may look for peaks and valleys in the queue, and analyse their relationships to each other. All points within the rotation rate queue may be filtered to remove noise. For example, all points above a defined sensitivity threshold may be retained, while all values below the threshold may be discarded. Continuous points above the threshold may then be used to create separate lists. Each separate list may be used to identify peaks. A master list of each peak list may then be created. In an exemplary embodiment, in order for a new peak list to be created and saved to the master list of valid peak lists, the potential peak list must be determined to have more than a given threshold of values in a row (for example, two or three), all above the sensitivity threshold.
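The run-grouping step might look like the following sketch, where consecutive samples outside the sensitivity threshold form candidate peak lists and runs shorter than a minimum length are discarded as noise (the minimum of three matches the example in this paragraph):

```python
MIN_RUN = 3  # a run needs at least this many consecutive samples

def build_peak_lists(queue, sensitivity):
    """Group consecutive samples whose magnitude exceeds the
    sensitivity threshold into runs and return the master list of
    valid peak lists; indexes are kept for later spread checks."""
    master, run = [], []
    for i, v in enumerate(queue):
        if abs(v) > sensitivity:
            run.append((i, v))
        else:
            if len(run) >= MIN_RUN:
                master.append(run)
            run = []
    if len(run) >= MIN_RUN:              # flush a run ending at the tail
        master.append(run)
    return master
```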
[0027] FIG. 4 illustrates exemplary rate data received as sample values from the gyroscope. The sensitivity and stillness threshold are identified as the horizontal dashed lines. The sensitivity threshold and stillness threshold may be the same or different values. As illustrated, the thresholds are the same. Sequential numbers outside of the threshold are identified and used to create peak lists. As illustrated, the 60 value queue, representing a duration of approximately one second, has four peak lists identified. A peak list in this example was created for any data set having three or more sequential values outside of the threshold.
[0028] In an exemplary embodiment, each peak list within the master list may then be analysed. For each peak above the threshold, a maximum is determined and added to a significant values list, along with the index of the value in the queue. For each peak below the threshold, a minimum value is added to the significant values list, along with the index of the value in the queue.
[0029] In an exemplary embodiment, the significant values list is then analysed. If there are more than a predetermined maximum number of significant values (for example, five significant values in an approximately one second duration) or fewer than a predetermined minimum number of significant values (for example, two significant values in an approximately one second duration), then the algorithm returns, or ignores the current frame. The typical nod or shake is approximately three to four peaks, and five in select cases. More than five or fewer than two generally indicate movements other than a nod or shake. If the significant value list does not alternate signs through its iteration (+, -, + OR -, +, -), then the algorithm returns, or ignores the current frame. If significant values do not alternate above and below zero, then a proper nod/shake was likely not completed. If the distance between indexes of significant values does not surpass or exceed a spread minimum threshold, then the algorithm returns, or ignores the current frame. For example, the distance between indexes of significant values may be required to exceed a spread minimum, such as a number of frames or sequential data points. In an exemplary embodiment, the minimum spread threshold is six frames, but may be 5 or more frames. The minimum spread threshold may remove false positives, as the transitions in movement of an intentional shake or nod are generally deliberate, fluid actions that take longer than a small fraction of a second.
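A sketch of the significant-values extraction and the three checks described above (count bounds, alternating signs, minimum index spread), reusing the `(index, value)` pairs produced by `build_peak_lists`; the bounds shown are the examples given in the text:

```python
def significant_values(master):
    """For each peak list take the extreme value and its queue index:
    the maximum for positive runs, the minimum for negative runs."""
    sig = []
    for run in master:
        if sum(v for _, v in run) > 0:
            sig.append(max(run, key=lambda p: p[1]))
        else:
            sig.append(min(run, key=lambda p: p[1]))
    return sig

def passes_checks(sig, min_count=2, max_count=5, min_spread=6):
    """Count bounds, alternating +/- signs, and a minimum spread (in
    frames) between the indexes of successive significant values."""
    if not (min_count <= len(sig) <= max_count):
        return False
    signs = [v > 0 for _, v in sig]
    if any(a == b for a, b in zip(signs, signs[1:])):
        return False                     # signs must alternate
    return all(j - i >= min_spread
               for (i, _), (j, _) in zip(sig, sig[1:]))
```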
[0030] Any combination of checks and/or filters on the significant value list may be used, such as verifying that the number of significant values is within the predetermined minimum and/or maximum, that the significant values alternate in sign, and/or that the distance between significant value indexes is above a spread minimum threshold. If the combination of checks and/or filters is passed, then the system may analyse the accelerometer queue. The last fraction of the one second record is averaged. The last fraction may include two, three, four, five, or another sequential number of data points in the queue. If the average exceeds a minimum movement threshold value (indicating vertical or lateral movement of a nod or shake, respectively), then the algorithm returns the relative shake/nod event. The queues may then be cleared and reset, so shaking and nodding can happen once per second.
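Putting the pieces together, a hedged end-to-end sketch of the nod/shake decision, reusing `was_still`, `build_peak_lists`, `significant_values`, `passes_checks`, and `SENSITIVITY` from the sketches above; the movement threshold is an assumption:

```python
MOVEMENT_THRESHOLD = 0.15  # hypothetical minimum accelerometer average

def detect_nod_or_shake(rotation_queue, accel_queue, tail=4):
    """Stillness pre-check, peak-list validation, then a check that the
    most recent accelerometer samples show real movement; queues are
    cleared on success so the event fires about once per second at most."""
    if not was_still(rotation_queue):
        return None
    sig = significant_values(build_peak_lists(rotation_queue, SENSITIVITY))
    if not passes_checks(sig):
        return None
    recent = list(accel_queue)[-tail:]
    if not recent or sum(abs(v) for v in recent) / len(recent) < MOVEMENT_THRESHOLD:
        return None
    rotation_queue.clear()
    accel_queue.clear()
    return "gesture"   # caller maps the sampled axes to nod vs. shake
```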
[0031] An exemplary embodiment may then use the given shake or nod event indicator to control or act as an input to the system. Exemplary embodiments may be mapped to specific user interface decisions, like answering yes or no questions, summoning a menu, or other associated control input commands.
[0032] In an exemplary embodiment, the system may be configured to detect and recognize a contact motion. The system may be oriented as in FIG. 3, described herein. The accelerometer may be used to detect motion of the headset to determine if a user has contacted the headset in a sequential or patterned contact, such as a number of touches and/or touches on a specific side of the headset (i.e. touches on the side or top of the headset).
[0033] In an exemplary embodiment, the system may be configured to detect and determine whether a physical tap or combination of taps, such as a double or triple tap, or a specific side of the headset has occurred with a user’s fingertips. In an exemplary embodiment, the system may determine a horizontal (lateral) side contact and/or a vertical (top or bottom) side contact. Similar to the nod/shake input described herein, the tap configuration (quantity and direction) may be mapped to any function or control of the system. In an exemplary
embodiment, a double tap on the side of the headset may be used to trigger a commonly used event, like re-centering the AR content or selecting a button or virtual object.
[0034] The double tap on the side of the headset is described for illustration purposes only, but the system configurations are not so limited. Any combination of number of taps and/or any orientation of taps may be used and fall within the scope of the instant disclosure.
For example, triple side tap, double tap on the side, double side tap on a top, or single tap on a side followed by a single tap on the top, etc. may separately be mapped to different control options and/or functions.
[0035] In an exemplary embodiment, a double tap algorithm may be configured similarly to the nod/shake gesture recognition algorithm. In an exemplary embodiment, the tap algorithm may receive inputs from an accelerometer of the head mounted display, either on the headset or on an inserted mobile device. If a single direction tap is used, such as the side tap, then only a single directional accelerometer reading is sampled and analysed. For example, if a tap on the side of the device is used, then only the Y accelerometer reading is sampled and analysed (for the configuration of FIG. 3). If different combinations of taps are used, then X and/or Y accelerometer readings may be sampled and analysed. The side tap configuration will be described herein, but is equally applicable if top or combination taps are desired.
[0036] In an exemplary embodiment to detect a side tap, the y accelerometer is sampled at a sampling rate, such as 60 samples per second. In an exemplary embodiment, the sampled data points are stored in a queue. The queue may store a desired number of data points to represent a desired sample duration. In an exemplary embodiment, the duration length is approximately 2/3 of a second, but may be 2/3 to 1 second, for example. Accordingly, the storage queue may be 40 frames. In an exemplary embodiment, the storage queue may be the same as that for the nod/shake algorithm, and only the last portion of the queue is analysed according to the embodiments described herein. Therefore, although the queue may be 60 frames, only the first or last 40 frames are analysed to determine touch or tap inputs. The length of the queue and/or the number of frames analysed according to embodiments described herein may be altered depending on a configured tap speed. For example, if faster repeated taps are desired, then the duration and number of frames may be reduced. If a slower repeat rate is desired to still indicate a control input, the number of frames and/or duration may be increased. Accordingly, any duration and/or frame number may be used, such as from 20 to 90 frames (with a 60 frame per second sample rate).
[0037] In an exemplary embodiment, a double tap on a lateral side of the headset may be detected by an algorithm analysing the y accelerometer reading. The y accelerometer reading may therefore be sampled at a desired sampling rate and stored in a queue. The length of the queue may correspond to a desired elapse time for detecting the double tap. For example, the sample rate may be 60x per second with a queue of 40 frames for a desired duration of approximately 2/3 second.
[0038] In an exemplary embodiment, the y accelerometer sampling queue may be iterated over using a for-loop and a 3 pointer system. The three pointers may be the current value (C), a pointer two positions to the left (C-2), the previous index, and a pointer two positions to the right (C+2), the future index. The loop iterates over the queue, and each time the value at the current index (C) is 3x (or another desired multiplier) as high as the values at the previous index (C-2) and the future index (C+2), a spike is registered. Such a comparison may be used to indicate a spike in the y accelerometer readings. Although a three times multiplier is disclosed, other multipliers may be used to adjust for the sensitivity and/or individual characteristics of a user. For example, 1.5x, 2x, 2.5x, 3x, 3.5x, 4x, or any multiplier in between or otherwise representative of the desired delineation between normal wear and a touch response of a user may be used. As the accelerometer sampling queue is iterated, a spike is detected and counted when the comparison of the current location to the positions on opposing sides of the current location is above the threshold multiplier. For each spike detected, a counter may be tallied and the index of the current position saved. Therefore, the count and corresponding index of each spike are saved to indicate a first spike, second spike, etc. In an exemplary embodiment, the comparison positions may be relocated. The comparison position relative to the current position may depend on the sampling rate and/or the desired touch type.
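The three-pointer spike count might be sketched as follows; the 3x multiplier and the two-position offsets are the values given in this paragraph, while the use of absolute values (since a tap can spike the reading in either direction) is an assumption:

```python
TAP_MULTIPLIER = 3.0  # current value must dominate both neighbours

def count_spikes(y_queue):
    """Iterate the queue comparing the current value (C) against the
    samples two positions behind (C-2) and ahead (C+2); record the
    index of each spike so the first, second, etc. can be examined."""
    q = [abs(v) for v in y_queue]
    spikes = []
    for c in range(2, len(q) - 2):
        if q[c] > TAP_MULTIPLIER * q[c - 2] and q[c] > TAP_MULTIPLIER * q[c + 2]:
            spikes.append(c)
    return spikes
```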
[0039] In an exemplary embodiment, when the pointer system completes its iteration, the total number of counted spikes is analysed. A threshold minimum and maximum number of counted spikes may be defined to return a positive indication of a double touch. For example, the number of counted spikes should be more than 1 and less than 3, or equal to 2. Other minimums and maximums may also be included such that the accuracy of the system may be set to accommodate extra vibration or touches.
[0040] In an exemplary embodiment, the spike indexes may be analysed against a duration threshold to determine whether the spikes or detected touches occurred within a desired durational threshold. For example, if the indices associated with the first and second spikes are within a set global threshold range (as in, a set time frame), then the double touch event may be fired or sent in software. The time limit may be, for example, ten frames, or approximately 1/6 of a second. However, the number of frames may depend on the sampling rate, and the desired duration may depend on the desired response rate of the user.
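A sketch of the final double-tap decision, combining the spike count bounds of paragraph [0039] with the durational check of this paragraph; a production version would likely also debounce adjacent indexes around a single physical tap:

```python
MAX_SPIKE_GAP = 10  # frames; ~1/6 s at a 60 Hz sample rate

def is_double_tap(spike_indexes):
    """Exactly two counted spikes whose indexes fall within the set
    time frame fire the double touch event."""
    if len(spike_indexes) != 2:
        return False
    first, second = spike_indexes
    return second - first <= MAX_SPIKE_GAP
```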
[0041] In an exemplary embodiment, the system may be configured to detect and recognize a specific motion or series of motions of the headset itself. The system may be oriented as in FIG. 3. The accelerometer may be used to detect motion of the headset to determine if a user has moved the headset in a desired pattern. For example, imagine a user wants to open a software menu. They could find the user interface button (or other UI element) for opening the menu, or they could move the headset (i.e. their head) in the shape of an “M” (essentially writing an M with their head movement). Either input could trigger the opening of a menu, but the latter requires only a slight head movement.
[0042] In an exemplary embodiment, an algorithm may be used to create a two-dimensional recognition library. The library may be created through machine learning. For example, a gesture (such as the “M” in the example above) may be trained into the system. The machine learning gesture recognition may be imported as a library, may be learned/created for a specific user, and/or may be imported from any available source. In an exemplary embodiment, the two-dimensional recognition library includes an array of two-dimensional points and a corresponding matched shape. The system may then be used to look up a received two dimensional array of points and match the array to one from the two-dimensional recognition library. If a match is made, then the name of the shape may be returned and used further to control the system according to embodiments described herein.
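The lookup contract could be sketched as below. Real systems would typically use a trained recognizer as the paragraph suggests; this stand-in matches a received point array against stored templates by average point distance, with the library contents, resampling scheme, and distance threshold all illustrative assumptions:

```python
import math

# Hypothetical library: template point arrays mapped to shape names.
LIBRARY = {
    "M": [(0.0, 0.0), (0.25, 1.0), (0.5, 0.4), (0.75, 1.0), (1.0, 0.0)],
}

def resample(points, n):
    """Pick n points spaced evenly along the input sequence (a crude
    stand-in for arc-length resampling)."""
    if len(points) < 2:
        return points * n
    step = (len(points) - 1) / (n - 1)
    return [points[round(i * step)] for i in range(n)]

def match_shape(points, max_avg_dist=0.15):
    """Return the name of the closest template, or None if no template
    is within the distance threshold."""
    if len(points) < 2:
        return None                      # not enough motion to match
    best_name, best_dist = None, max_avg_dist
    for name, template in LIBRARY.items():
        candidate = resample(points, len(template))
        avg = sum(math.dist(a, b)
                  for a, b in zip(candidate, template)) / len(template)
        if avg < best_dist:
            best_name, best_dist = name, avg
    return best_name
```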
[0043] In an exemplary embodiment, the three dimensional environment of augmented reality may be converted into a two-dimensional array for looking up in the two-dimensional recognition library. In an exemplary embodiment, the system defines a three dimensional reticle position. The three dimensional reticle position may be defined as a point in front of the wearer's face, defined by a ray coming from a desired location of the head mounted display, such as, for example, the center of the head mounted display. The reticle may move around the scene based on the orientation of the head mounted display. The reticle may always be positioned directly in front of the user's head or in front of the headset. The three dimensional location of the reticle is sampled and stored in a queue. In an exemplary embodiment, the reticle is an object in three dimensional space, locked in front of the user. As the user rotates their head, the position of the reticle moves; it can therefore be defined by an X, Y, Z value in world space. For example, the sample rate may be 60 times per second, stored in a queue of 60 frames. As with the other methods described herein, the sampling rate and queue length may be set according to the desired pattern and the expected time to complete the pattern. The system then iterates over the queue, converting the three dimensional positions into two dimensional screen space positions. In an exemplary embodiment, a transform may be used to map three dimensional points in the scene to two dimensional viewport points on the phone screen, ranging from 0-1 of the screen width by 0-1 of the screen height. Because the reticle's most recent position in world space is directly in front of the user, its most recent viewport coordinates transform from three dimensional coordinates to (0.5, 0.5), half the width and half the height. As an example, if a user were rotating their head from left to right in a lateral-right motion, and a queue of the 60 or so most recent three dimensional reticle points were collected, the resulting queue's values would range from approximately (0.0, 0.5) to (0.5, 0.5). Once the three dimensional locations are converted, the two dimensional array may be sent to and compared against the two-dimensional recognition library to retrieve a corresponding detected shape. If a match occurs, the detected shape is returned and the corresponding mapped function is performed. If a match does not occur, the algorithm resets and looks for another recognized gesture.
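As an illustrative sketch of the sampling and conversion described above, in the same Python register as the sketch in paragraph [0042]: a fixed-length queue holds the most recent reticle positions, and the world-to-viewport transform is assumed to be supplied by the rendering engine (for example, a function analogous to Unity's Camera.WorldToViewportPoint, returning coordinates in the 0-1 range). The class name and parameter values are assumptions, not taken from the specification.

```python
from collections import deque

QUEUE_LEN = 60  # about one second of samples at 60 samples per second

class ReticleTracker:
    """Collects recent 3D reticle positions and converts them to 2D
    viewport coordinates (0-1 of screen width by 0-1 of screen height)."""

    def __init__(self, world_to_viewport):
        # world_to_viewport: supplied by the rendering engine; maps an
        # (x, y, z) world-space point to viewport coordinates.
        self.world_to_viewport = world_to_viewport
        self.queue = deque(maxlen=QUEUE_LEN)  # oldest samples drop off

    def sample(self, reticle_world_pos):
        """Store the reticle's current world-space position (one call per frame)."""
        self.queue.append(reticle_world_pos)

    def as_2d_array(self):
        """Iterate over the queue, transforming each 3D position into a
        2D screen-space (x, y) pair."""
        return [tuple(self.world_to_viewport(p))[:2] for p in self.queue]

def detect_gesture(tracker, library):
    """Look up the queued path in the 2D recognition library; return the
    detected shape name, or None so the algorithm can reset."""
    points = tracker.as_2d_array()
    if len(points) < 2:
        return None
    return match_shape(points, library)  # from the sketch in [0042]
```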
[0044] Exemplary embodiments include software controls for a mobile device of a user to control attributes of the virtual display/environment depending on the determined motion of the headset. Exemplary embodiments of the software controls may be configured to display a prompt, pointer, or instruction to the user. Exemplary embodiments of the software controls may be configured to receive an input based on the determined motion of the headset. The determined motion may correspond to any control input and is not limited hereby. For example, a control may launch a menu, open an application and/or program, close an application and/or program, make a selection, or change a display. Other changes in display configuration may include pausing a display or application, dimming or turning off a screen display, or another function.
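For example, the mapping from a detected shape to a software control might be sketched as a simple dispatch table; the shape names and actions below are illustrative assumptions only:

```python
def open_menu():
    print("menu opened")    # placeholder action

def dim_screen():
    print("screen dimmed")  # placeholder action

# Detected shape name -> control function (illustrative entries).
CONTROL_MAP = {
    "M": open_menu,    # head traces an "M" to launch the menu
    "Z": dim_screen,   # head traces a "Z" to dim the display
}

def handle_gesture(shape_name):
    """Perform the control mapped to a detected shape, if any."""
    action = CONTROL_MAP.get(shape_name)
    if action is not None:
        action()
```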
[0045] Although embodiments of the invention may be described and illustrated herein in terms of augmented reality systems, it should be understood that embodiments of this invention are not so limited, but are additionally applicable to virtual reality systems. Features of the system may also be applicable to any head mounted system. Exemplary embodiments are described in terms of a head mounted display system in which a mobile device is inserted within a frame and reflected off of a lens system to create the virtual experience and/or overlay. In this case the accelerometer and/or gyroscope of the mobile device may be used to detect and provide the data points used in accordance with the algorithms described herein. However, exemplary embodiments are not so limited, and the head mounted display may have separate sensors for obtaining such data. Exemplary embodiments are described herein with respect to specific accelerometer and/or gyroscopic detector orientations. The system is not so limited, and the present invention encompasses different orientations or detection schemes such that a state of the system may be determined by detecting rotational and/or movement positions about different axes.
[0046] Exemplary embodiments may also include any combination of features as described herein. Therefore, any combination of described features, components, or elements may be used and still fall within the scope of the instant description. For example, features may include: the computing for the augmented reality experience is conducted by a smartphone inserted into the headset; the front-facing camera of an inserted smartphone has an unobstructed view through the optical element; the tracking is accomplished using information from the smartphone's front-facing camera; an output is displayed on the smartphone's screen; the optical element acts as a combiner that reflects the smartphone's screen to overlay imagery in the user's physical field of vision; the headset having only a single optical element that light from the screen encounters between the screen and the user's eye; the headset not having any additional optical components for creating, generating, or overlaying the digital image in a user's field of view besides the optical element; the smartphone and optical element are in a fixed position during operation; the headset or system including inserts for fixing the position of an inserted mobile device during operation; the headset including a dynamically adjustable mechanism for accommodating inserted mobile devices of various sizes; the headset including an elastic cover to shield the screen and retain the mobile device relative to the headset; the headset including retaining features to position the inserted mobile device; the headset not including computing power besides the phone; the optical element is removable; the optical element can fold for storage or transportation relative to the compartment; the optical element consists of two sub-components to display stereoscopic imagery; the optical element including a coating on a first surface to reflect an image from the mobile device; the optical element including an anti-reflective coating on another surface to reduce reflection of an image from the mobile device; the optical element including a spherical curvature; the optical element having a uniform thickness; the optical element contains magnets and the compartment or a frame contains mating magnets that allow the optical element to attach to and detach from the frame of the headset such that it is always in the correct position; integrated or removable straps or bands secure the headset to a user's face; the compartment having a face cushion for comfort during use; the compartment having an integrated optical component covering the front-facing camera of the smartphone; the integrated optical component covering the front-facing camera of the smartphone modifies the image entering the front-facing camera to improve the tracking area; the optical component is a prism; the optical component is a wide-angle lens; the mounting system including modular straps and support frames; the mounting system straps including surface features to increase structural support; the mounting system support features including an indentation on a broad side of the strap toward a user's head; the mounting system straps including tapered thickness; the mounting system including keyed mating surfaces to define an orientation of a mated pair; and any combination thereof or otherwise described herein.
[0047] Although embodiments of this invention have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. Such changes and modifications are to be understood as being included within the scope of the present disclosure as defined by the appended claims. Specifically, exemplary components are described herein, and these components may be used in any combination. For example, any component, feature, step or part may be integrated, separated, sub-divided, removed, duplicated, added, or used in any combination with any other component, feature, step or part or with itself and remain within the scope of the present disclosure. Embodiments are exemplary only; they provide an illustrative combination of features but are not limited thereto.
[0048] When used in this specification and claims, the terms "comprises" and "comprising" and variations thereof mean that the specified features, steps or integers are included. The terms are not to be interpreted to exclude the presence of other features, steps or components.
[0049] The features disclosed in the foregoing description, or the following claims, or the accompanying drawings, expressed in their specific forms or in terms of a means for performing the disclosed function, or a method or process for attaining the disclosed result, as appropriate, may, separately, or in any combination of such features, be used for realising the invention in diverse forms thereof.

Claims

What is claimed is:
1. A method of controlling a head mounted electronic system, comprising: receiving a first data set comprising a series of data points received from an accelerometer coupled to the head mounted electronic system; and determining a control signal.
2. The method of claim 1, further comprising: receiving a second data set comprising a series of data points received from a gyroscope coupled to the head mounted electronic system.
3. The method of claim 2, wherein the first data set is stored in a first queue and the second data set is stored in a second queue.
4. The method of claim 3, wherein a subset of data points from the second queue is averaged and compared against a stillness threshold to determine whether a user was still for a duration associated with the subset, wherein if the average of the subset of data points is below the stillness threshold, the method proceeds with determining the control signal, and if the average of the subset of data points is above the stillness threshold, the method terminates without indicating the control signal.
5. The method of claim 3, wherein data points from the second queue are analysed to determine whether a false input occurs and data points from the first queue are used to determine the control signal if a false input is not determined.
6. The method of claim 5, further comprising creating, from the second queue, subsets of continuous data points for continuous data points outside of a noise band, identifying a peak within each of the subsets of continuous data points, and storing, in a significant values list, a peak value and corresponding time for each of the subsets of continuous data points.
7. The method of claim 6, wherein the corresponding time is a sequence location in the queue.
8. The method of claim 6, wherein a subset of continuous data points is created for sequential data sets of more than three data points outside of the noise band.
9. The method of claim 6, wherein the noise band comprises a first threshold corresponding to a sensitivity, and a second threshold corresponding to a stillness.
10. The method of claim 6, further comprising analysing the significant values list for false positives to determine the control signal is not an input signal.
11. The method of claim 6, further comprising determining a total number of peaks within the significant values list and if the total number is less than a predetermined maximum number, the method proceeds with determining the control signal, and if the total number is greater than the predetermined maximum number, the method terminates without indicating the control signal.
12. The method of claim 6, further comprising determining a sign of each of the peaks within the significant values list and if sequential peaks alternate signs, the method proceeds with determining the control signal, and if sequential peaks are the same sign, the method terminates without indicating the control signal.
13. The method of claim 12, further comprising determining a distance between adjacent significant values and comparing the distance against a spread minimum threshold, and if the distance is greater than the spread minimum threshold, the method proceeds with determining the control signal, and if the distance is less than the spread minimum threshold, the method terminates without indicating the control signal.
14. The method of claim 6, further comprising analysing the significant values list for false positives to terminate the determination of the control signal, and if the analysis does not detect a false positive then analysing the first data set to determine the control signal.
15. The method of claim 14, wherein analysing the first data set comprises averaging a subset of the first data set and determining a direction to determine the control signal.
16. The method of claim 15, further comprising returning the control signal to control an attribute of the head mounted display.
17. The method of claim 1, further comprising storing the first data set in a first queue.
18. The method of claim 17, further comprising iterating over the first queue, comparing a current value against a previous value in the queue before the current value and a subsequent value in the queue after the current value.
19. The method of claim 18, wherein the comparison comprises determining whether the current value is outside of a multiplier of the previous value and the subsequent value to identify a spike.
20. The method of claim 19, further comprising incrementing a counter for each spike detected and storing an index value corresponding to each detected spike.
21. The method of claim 20, further comprising completing an iteration over the first queue and determining a total number of counted spikes, wherein if the total number of counted spikes is above a minimum and below a maximum, the method proceeds to determining the control signal, and if the total number of counted spikes is below the minimum or above the maximum, the control signal is determined to be a non-control event.
22. The method of claim 20, further comprising determining a duration between each detected spike and comparing the duration against a set global threshold range.
23. A method comprising: defining a three dimensional reticle position; retrieving a series of three dimensional reticle positions; and determining a control shape from the series of three dimensional reticle positions.
24. The method of claim 23, further comprising: sampling and storing the three dimensional reticle position in a queue; iterating over the queue to transform the three dimensional reticle position in the queue to a two dimensional position.
25. The method of claim 24, wherein the two dimensional position corresponds to a two dimensional screen space position.
26. The method of claim 25, further comprising creating a two dimensional array from the two dimensional positions.
27. The method of claim 26, further comprising comparing the two dimensional array against a recognition library to determine a corresponding detected shape.
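By way of illustration only, the following Python sketch shows one possible reading of the queue iteration and spike counting recited in claims 17-22; the multiplier, the spike-count bounds, and the duration range are assumed example parameters, not values taken from the claims.

```python
def detect_spikes(queue, multiplier=3.0):
    """Iterate over the queue, comparing each current value against the
    previous and subsequent values; a value outside a multiplier of both
    neighbours is counted as a spike and its index is stored."""
    count, indices = 0, []
    for i in range(1, len(queue) - 1):
        prev, cur, nxt = queue[i - 1], queue[i], queue[i + 1]
        if abs(cur) > multiplier * abs(prev) and abs(cur) > multiplier * abs(nxt):
            count += 1
            indices.append(i)
    return count, indices

def control_signal_from_queue(queue, min_spikes=2, max_spikes=4,
                              duration_range=(5, 30)):
    """Return a control signal if the spike count lies between the
    minimum and maximum and each inter-spike duration falls within a
    global threshold range; otherwise report a non-control event.

    Durations are expressed as index differences in the queue, i.e.
    frame counts at the sample rate."""
    count, indices = detect_spikes(queue)
    if count < min_spikes or count > max_spikes:
        return None  # non-control event
    lo, hi = duration_range
    for a, b in zip(indices, indices[1:]):
        if not (lo <= b - a <= hi):
            return None  # non-control event
    return "control"  # placeholder for the mapped control signal
```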