WO2018154563A1 - Systems and methods for electronic device interface - Google Patents

Systems and methods for electronic device interface

Info

Publication number
WO2018154563A1
Authority
WO
WIPO (PCT)
Prior art keywords
head
electronic device
content
angular position
temporal information
Application number
PCT/IL2018/050179
Other languages
French (fr)
Inventor
Gal Rotem
Original Assignee
Double X Vr Ltd.
Application filed by Double X Vr Ltd. filed Critical Double X Vr Ltd.
Publication of WO2018154563A1 publication Critical patent/WO2018154563A1/en


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/012: Head tracking input arrangements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/033: Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F 3/0346: Pointing devices displaced or positioned by the user, with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00: Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B 27/01: Head-up displays
    • G02B 27/0101: Head-up displays characterised by optical features
    • G02B 2027/0138: Head-up displays characterised by optical features comprising image capture systems, e.g. camera

Definitions

  • The sensors 112 provide data-bearing electrical signals to the processing subsystem 120.
  • The data received by the processing subsystem 120 includes information indicative of the angular position of the head 182 as a result of the head-performed actions, as well as temporal information that indicates the amount of time required to perform the detected head-performed action or the speed with which a detected head-performed action has been performed.
  • The angular position of the head 182 refers to the rotation of the head 182 about one or more of the axes 184, 186, 188.
  • The accelerometric data captured by the accelerometer is converted into an angular position indicative of the rotation of the head 182 about one or more of the axes 184, 186, 188.
  • The angular position may be derived from the accelerometric data by converting the acceleration detected by the accelerometer into position, for example via double integration of the acceleration with respect to time, and calculating the change in angle, about the axis of rotation of the head 182, caused by the change from the initial position of the head 182 to the final position of the head.
  • The accelerometric data may also include timing information associated with the initial and final positions of the head 182. For example, the time to complete the head-performed action can be derived or extracted from the accelerometric data. Alternatively, the speed or velocity with which the head-performed action was performed can be derived from the accelerometric data by converting the acceleration detected by the accelerometer into speed or velocity, for example via integration of the acceleration with respect to time.
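By way of illustration, the following sketch shows one way such an integration might be implemented. It is a minimal sketch, not part of the disclosed system: it assumes a pre-segmented stream of angular-acceleration samples about a single axis at a fixed sampling interval, and the function names are hypothetical.

```python
from typing import List, Sequence, Tuple

def integrate(samples: Sequence[float], dt: float) -> List[float]:
    """Cumulative trapezoidal integration of a uniformly sampled signal."""
    out, total = [0.0], 0.0
    for a, b in zip(samples, samples[1:]):
        total += 0.5 * (a + b) * dt
        out.append(total)
    return out

def head_action_summary(ang_accel: Sequence[float], dt: float) -> Tuple[float, float]:
    """Return (final angular position in degrees, peak angular speed in deg/s)
    for one detected head-performed action: a single integration of the angular
    acceleration yields speed, and a second integration yields angular position."""
    speed = integrate(ang_accel, dt)   # deg/s, single integration
    angle = integrate(speed, dt)       # deg, double integration
    return angle[-1], max(abs(v) for v in speed)
```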
  • The network 200 may be formed of one or more networks, including, for example, the Internet, cellular networks, and wide-area, public, and local networks.
  • The electronic device 130 receives digital content from a content provider for distribution to the user 180 as a visual and/or audio display.
  • The content provider may be, for example, a website 150 which provides web content, hosted by a server 140, to the electronic device 130.
  • The web content can include image content, video content, graphical content, text, or any other content typically accessed through websites.
  • The content provider may also be an application 160, hosted by the server 140, which provides various forms of digital content to the electronic device 130.
  • The content provided by the application 160 may include, for example, video content, image content, streaming content, video game content, and audio content.
  • Both the website 150 and the application 160 have a respective content controller 170 linked thereto.
  • The content controller 170 performs functions to change the content communicated to the user 180 in response to user input.
  • For example, if the digital content is web content which is displayed by the display 132 of the electronic device 130 as a website with a pop-up advertisement, the user 180 may provide input to actuate the content controller 170 to close the pop-up.
  • As another example, if the digital content is provided via a website 150 or an application 160 having an options menu, the user 180 may provide input to actuate the content controller 170 to open the menu and select one or more options listed in the menu.
  • The user input is provided by the user 180 through head-performed actions, which are detected and processed by the system 100 in order to provide appropriate information or commands to the content controller 170.
  • A non-exhaustive list of actions performed by the content controller 170 in response to detected head-performed actions includes: menu opening, menu item selection, menu closing, display page (e.g., webpage) scrolling, webpage hyperlink selection, web browser window opening, web pop-up window closing, video playback control functionality (e.g., channel change, skip ahead, skip backward, pause, resume, etc.), and audio control (e.g., volume up, volume down, mute, unmute).
  • The content controller 170 is remotely located from the electronic device 130, and may be accessible through the network 200.
  • The content controller 170 may be implemented as one or more processors coupled to a storage medium that includes machine-executable instructions for execution by the processor(s). Such processors and storage media may be implemented, for example, at the server 140 or remotely from the server 140.
  • Upon detection of a head-performed action, the processing subsystem 120 determines the angular position of the head 182 about one or more of the axes 184, 186, 188, as well as the temporal information (e.g., speed) associated with the detected action.
  • The processing subsystem 120 then sends the determined angular position and temporal information, for example as a transmission of one or more data packets over the network 200, to the content controller 170.
  • The content controller 170 monitors for receipt of such packets, either continuously, periodically, or intermittently, in order to process the information in the packets.
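A minimal sketch of such a transmission follows, assuming a JSON payload over UDP. The field names, port, and controller hostname are illustrative assumptions, since the embodiments only require that the angular position and temporal information reach the content controller 170 in one or more data packets.

```python
import json
import socket
import time

# Hypothetical endpoint for the content controller 170; any transport that
# delivers the data packet(s) over the network 200 would serve equally well.
CONTROLLER_ADDR = ("content-controller.example.com", 9000)

def send_head_action(angle_deg: float, speed_deg_s: float, axis: str = "pitch") -> None:
    """Package the determined angular position and temporal information
    (here, speed) into a single data packet and send it to the controller."""
    payload = json.dumps({
        "axis": axis,                # e.g., "pitch" or "yaw"
        "angle_deg": angle_deg,      # determined angular position
        "speed_deg_s": speed_deg_s,  # temporal information
        "timestamp": time.time(),
    }).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, CONTROLLER_ADDR)
```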
  • Upon receipt of the angular position and temporal information, as one or more data packets, the content controller 170 evaluates the angular position and the temporal information against respective threshold criteria in order to determine whether or not to take action to change the content communicated to the user 180 by the electronic device 130. If the angular position and the temporal information simultaneously satisfy their respective threshold criteria, the content controller 170 takes at least one action to modify the content communicated to the user 180 by the electronic device 130. For example, if the content is web content that includes a pop-up advertisement that is displayed by the display 132, and both the angular position and temporal information simultaneously satisfy a respective threshold criterion, the content controller 170 may close the pop-up advertisement.
  • Alternatively, in some embodiments, the processing subsystem 120 evaluates the determined angular position and temporal information against the respective threshold criteria.
  • In such embodiments, the processing subsystem 120 may send an actuation command, via one or more data packets transmitted to the content controller 170 over the network 200, to take at least one action to modify the content communicated to the user 180 by the electronic device 130 in response to the angular position and temporal information simultaneously satisfying the respective threshold criteria.
  • The following are examples of threshold criteria and actions taken in response to satisfying the threshold criteria. The examples below are provided within the context of the electronic device 130 displaying content from a website or from an application. As should be understood, these examples are for illustration purposes only, and many other threshold criteria and actions responsive to such threshold criteria are possible.
  • Example 1: In one example of threshold criteria, movement of the head 182 upward about the pitch axis 184 to an angle greater than 10° in a span of less than 1 second (i.e., a speed greater than 10° per second) causes the content controller 170 to take an action to open the upper menu of the webpage or the application.
  • Example 2: In another example of threshold criteria, movement of the head 182 downward about the pitch axis 184 to an angle less than -10° in a span of less than 1 second (i.e., a speed greater than 10° per second) causes the content controller 170 to take an action to close the upper menu of the webpage or the application.
  • Example 3: In another example of threshold criteria, movement of the head 182 upward about the pitch axis 184 to an angle greater than 20° in a span of less than 2 seconds (i.e., a speed greater than 10° per second) causes the content controller 170 to take an action to scroll up to display the content at the top of the webpage or the application.
  • Example 4: In another example of threshold criteria, movement of the head 182 to the right about the yaw axis 186 to an angle greater than 20° in a span of less than 1 second (i.e., a speed greater than 20° per second) causes the content controller 170 to take an action to close an open pop-up.
  • The evaluation against the angular threshold may be performed by comparing the absolute value of the determined angular position to an angular threshold value.
  • In that case, the angular threshold criterion is satisfied if the absolute value of the determined angular position is greater than the angular threshold value.
  • For the 10° thresholds of Examples 1 and 2, such an evaluation entails checking whether the absolute value of the determined angular position is greater than 10°.
  • The temporal threshold value comparison (i.e., of the time span or speed of the head-performed action), combined with the angular position threshold comparison, provides an unambiguous output as to which action, if any, should be taken by the content controller 170.
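Examples 1-4 can be encoded as a small rule table. The sketch below is one possible encoding under stated assumptions (the action names are hypothetical placeholders for content-controller calls): more specific rules are listed first, since a 25° upward flick also exceeds the 10° threshold of Example 1, and both criteria must hold simultaneously.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Rule:
    axis: str                           # rotation axis of the head action
    angle_ok: Callable[[float], bool]   # signed angular-position criterion
    min_speed_deg_s: float              # temporal (speed) criterion
    action: str                         # hypothetical content-controller action

RULES = [
    Rule("pitch", lambda a: a > 20,  10.0, "scroll_to_top"),    # Example 3
    Rule("pitch", lambda a: a > 10,  10.0, "open_upper_menu"),  # Example 1
    Rule("pitch", lambda a: a < -10, 10.0, "close_upper_menu"), # Example 2
    Rule("yaw",   lambda a: a > 20,  20.0, "close_popup"),      # Example 4
]

def select_action(axis: str, angle_deg: float, speed_deg_s: float) -> Optional[str]:
    """First rule whose angular and temporal criteria are simultaneously
    satisfied wins; None means no content-change action is taken."""
    for rule in RULES:
        if (rule.axis == axis and rule.angle_ok(angle_deg)
                and speed_deg_s > rule.min_speed_deg_s):
            return rule.action
    return None
```

Under the absolute-value evaluation described above, the angular test for Examples 1 and 2 would become abs(angle_deg) > 10, with the sign of the angle selecting between the open and close actions.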
  • The angular position and temporal information may be evaluated against more than one respective threshold at a time to ensure that the action intended by the user 180 is taken by the content controller 170.
  • The simultaneous evaluation of the angular position and the temporal information (e.g., speed) from the head-performed action prevents the system 100 and/or the content controller 170 from taking actions in response to false detections.
  • For example, the user 180 may move the head 182 upward about the pitch axis 184 to an angular position of greater than 20°, but in a span of 3 seconds or more; the speed criterion is not satisfied, so no action is taken.
  • The simultaneous threshold evaluation thus prevents slow head movements from inducing a false detection by the system 100 and/or the content controller 170, thereby ensuring that only selected fast-twitch head actions trigger the content change action.
  • The system 100 may be configured to employ a reset time between detected head-performed actions.
  • The reset time defines the minimal elapsed time period required after a detected head-performed action before the system 100 can detect another head-performed action or respond to another detected head-performed action.
  • The reset time may be a configurable parameter set by the user 180, via a user input to the processing subsystem 120, and may be on the order of a few seconds, for example 2 seconds.
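The reset time behaves like a simple debounce. A minimal sketch follows, assuming the 2-second example value above and a monotonic clock; the class name is an assumption for illustration.

```python
import time
from typing import Optional

class ActionGate:
    """Reject head-performed actions detected within the reset time of the
    previously accepted action (e.g., the recoil movement discussed below)."""
    def __init__(self, reset_time_s: float = 2.0):
        self.reset_time_s = reset_time_s
        self._last = float("-inf")

    def accept(self, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        if now - self._last < self.reset_time_s:
            return False   # still within the reset window: ignore the action
        self._last = now
        return True
```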
  • For example, the user 180 performs a head-performed action by moving the head 182 from a straight-ahead zero pitch angle position (i.e., initial resting position) to a head-up increased pitch angle position (i.e., final resting position) via rotation about the pitch axis 184.
  • The movement to the final resting position triggers the detection of the head-performed action by the system 100, which causes the content controller 170 to take an action (e.g., open a menu, as in Example 1 above).
  • The user 180 then moves the head 182 back down to the initial resting position.
  • The movement back to the initial resting position may be a recoil action, which can be a voluntary or involuntary human action. Because the recoil occurs within the reset time (e.g., within 10 ms of the triggering movement), it does not register as a new head-performed action.
  • The reset time may be stored in a memory of the content controller 170 and/or in a memory of the processing subsystem 120.
  • Similarly, the threshold values and the actions to which the values correspond may be stored in a memory of the content controller 170 and/or in a memory of the processing subsystem 120, and may be programmably assigned and modified by the user 180, via a user input to the processing subsystem 120. In this way, the angular position and temporal information can be evaluated against pre-determined threshold values (e.g., threshold values set by the user 180).
  • The user 180 may also create new actions corresponding to new threshold values. In this way, the system 100 provides the user 180 with the capability to personalize the threshold values according to user preferences.
  • As shown in FIG. 4, the processing subsystem 120 includes a central processing unit (CPU) 402 that is formed of one or more processors 404 for performing various functions, including some or all of the processes and sub-processes shown and described in the flow diagrams of FIGS. 5 and 6.
  • The processors, which can include microprocessors, are, for example, conventional processors, such as those used in servers, computers, and other computerized devices.
  • For example, the processors may include x86 processors from AMD and Xeon® and Pentium® processors from Intel, as well as any combinations thereof.
  • The processing subsystem 120 further includes four exemplary memory devices: a random-access memory (RAM) 406, a boot read-only memory (ROM) 408, a mass storage device (i.e., a hard disk) 410, and a flash memory 412.
  • Processing and memory functionality can include any computer-readable medium storing software and/or firmware, and/or any hardware element(s), including but not limited to field-programmable logic array (FPLA) element(s), hard-wired logic element(s), field-programmable gate array (FPGA) element(s), and application-specific integrated circuit (ASIC) element(s).
  • Any instruction set architecture may be used in the CPU 402, including but not limited to reduced instruction set computer (RISC) architecture and/or complex instruction set computer (CISC) architecture.
  • The mass storage device 410 is a non-limiting example of a non-transitory computer-readable storage medium bearing computer-readable code for implementing the interface and control methodology described herein.
  • The non-transitory computer-readable (storage) medium may be a computer-readable signal medium or a computer-readable storage medium.
  • Other examples of a computer-readable storage medium include, but are not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • A computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • A computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof.
  • A computer-readable signal medium may be any computer-readable medium that is not a computer-readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • The processing subsystem 120 may have an operating system (OS) stored on one or more of the memory devices.
  • The OS may include any of the conventional computer operating systems, such as those available from Microsoft of Redmond, Washington, commercially available as Windows® OS, such as, for example, Windows® XP, Windows® 7, Windows® 8 and Windows® 10, MAC OS from Apple of Cupertino, CA, or Linux.
  • The ROM 408 may include boot code for the processing subsystem 120, and the CPU 402 may be configured to execute the boot code to load the OS into the RAM 406, and to execute the operating system so as to copy computer-readable code to the RAM 406 and execute the code.
  • A network connection 420 provides communications to and from the processing subsystem 120 over a network, such as, for example, the network 200.
  • The data packets that include the angular position and temporal information (in some embodiments), or the actuation commands (in other embodiments), may be transmitted over the network 200 for receipt by the content controller 170.
  • Typically, a single network connection provides one or more links, including virtual connections, to other devices on local and/or remote networks.
  • However, the processing subsystem 120 can include more than one network connection (not shown), each network connection providing one or more links to other devices and/or networks.
  • All of the components of the processing subsystem 120 are connected to each other, electronically and/or for data exchange, either directly or indirectly, through one or more connections, exemplified in FIG. 4 as a communication bus 414.
  • FIG. 5 shows a flow diagram detailing a computer-implemented process 500 in accordance with embodiments of the disclosed subject matter.
  • The computer-implemented process includes steps for interfacing with the electronic device 130 and controlling the visual or audio output of the electronic device 130. Reference is also made to the elements shown in FIGS. 1-4.
  • The process and sub-processes of FIG. 5 are computerized processes performed by the system 100 and related components, such as the content controller 170.
  • The process 500 begins at block 502, where the sensor arrangement 110, and more particularly the sensors 112 (e.g., accelerometers), collects measurement data to detect head-performed actions of the user 180.
  • The process 500 then moves to block 504, where the processing subsystem 120 receives the collected measurement data and determines the angular position of the head 182 about one or more of the axes 184, 186, 188 and the temporal information (e.g., speed) associated with the detected action.
  • The angular position and the temporal information are derived from the collected measurement data.
  • The process 500 then moves to block 506, where the processing subsystem 120 transmits the derived angular position and temporal information to the content controller 170 over the network 200 as one or more data packets.
  • The process 500 then moves to block 508, where the content controller 170, which continuously, periodically, or intermittently monitors for receipt of such packets, receives the data packet(s).
  • The content controller 170 extracts the relevant angular position and temporal information from the data packet(s), evaluates the angular position against one or more angular thresholds, and evaluates the temporal information against one or more temporal thresholds, in order to determine whether or not to take action to change the content communicated to the user 180 by the electronic device 130 in response to the detected head-performed action.
  • If the angular position and the temporal information simultaneously satisfy the respective threshold criteria (e.g., the angular position is greater than a threshold angular position and the head-performed action speed is greater than a threshold speed), the process 500 moves to block 510, where the content controller 170 changes the visual (or audio) display of the content communicated (e.g., displayed) by the electronic device 130 according to the action associated with the satisfied thresholds.
  • If either or both of the threshold criteria are not satisfied, the process 500 returns to block 502 to collect additional sensor measurement data to detect a new head-performed action. It is noted that subsequent to the execution of the actions described in block 510, the process 500 also returns to block 502 to detect the next head-performed action.
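Assuming the helper functions sketched earlier (head_action_summary and send_head_action), one iteration of blocks 502-506 on the headset side might look as follows; the sampling callback is a hypothetical stand-in for the sensor arrangement 110.

```python
def process_500_step(read_action_samples, dt: float) -> None:
    """One pass of process 500 on the headset side (blocks 502-506).
    Blocks 508-510 run remotely at the content controller 170."""
    samples = read_action_samples()                  # block 502: collect data
    angle, speed = head_action_summary(samples, dt)  # block 504: derive values
    send_head_action(angle, speed, axis="pitch")     # block 506: transmit packet
```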
  • As discussed above, in some embodiments the threshold comparison is performed by the content controller 170, while in other embodiments the threshold comparison is performed by the processing subsystem 120.
  • The process 500 as described above corresponds to the embodiments in which the threshold comparison is performed by the content controller 170.
  • FIG. 6 shows a flow diagram detailing a computer-implemented process 600 in accordance with embodiments of the disclosed subject matter, in particular, embodiments in which the threshold comparison is performed by the processing subsystem 120.
  • Blocks 602, 604 and 610 of the process 600 correspond to blocks 502, 504 and 510 of the process 500, and should be understood by analogy thereto. Therefore, the details of blocks 602, 604 and 610 will not be repeated here.
  • The process 600 moves from block 604 to block 606, where the processing subsystem 120 evaluates the angular position (determined in block 604) against one or more angular thresholds, and evaluates the temporal information (determined in block 604) against one or more temporal thresholds, in order to determine whether or not to take action. If either or both of the angular position and temporal information fail to satisfy their respective threshold criteria, the process 600 returns to block 602 to collect additional sensor measurement data to detect a new head-performed action.
  • If both criteria are satisfied, the process 600 moves to block 608, where the processing subsystem 120 sends an actuation command to the content controller 170 to change the visual (or audio) display of the content communicated (i.e., displayed) by the electronic device 130 in a manner corresponding to the satisfied thresholds.
  • The process 600 then moves to block 610, where the actuation command, which may be sent as a data packet or packets over the network 200, is received by the content controller 170, and the content controller 170 performs one or more actions according to the actuation command, changing the content communicated by the electronic device 130 accordingly.
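For comparison, a sketch of process 600 under the same assumptions, with the threshold check (select_action) and reset-time gate (ActionGate) from the earlier sketches running locally, so that only an actuation command crosses the network:

```python
import json
import socket

def send_command(action: str) -> None:
    """Blocks 608-610: transmit a hypothetical actuation command to the
    content controller 170 at the illustrative endpoint used earlier."""
    packet = json.dumps({"command": action}).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(packet, ("content-controller.example.com", 9000))

def process_600_step(read_action_samples, dt: float, gate) -> None:
    samples = read_action_samples()                  # block 602: collect data
    angle, speed = head_action_summary(samples, dt)  # block 604: derive values
    action = select_action("pitch", angle, speed)    # block 606: local check
    if action is not None and gate.accept():
        send_command(action)                         # block 608: actuate
```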
  • The angular position of the head 182 about each of the axes 184, 186, 188 is determined relative to the zero pitch, zero yaw, and zero roll angles. It is noted, however, that these zero angles may be static, for example if the reference frame in which the angles are measured is static and does not change with the movement of the head 182. Alternatively, these zero angles may be dynamic, for example if the reference frame in which the angles are measured is dynamic and changes with the movement of the head 182.
  • FIGS. 7A-8B illustrate both cases: FIGS. 7A and 7B illustrate examples of head-performed actions in a static reference frame, which does not change with the movement of the head 182, while FIGS. 8A and 8B illustrate examples of head-performed actions in a dynamic reference frame, which changes with the movement of the head 182.
  • In FIGS. 7A-8B, the head 182 is represented schematically as an oblong object, and the rotational positions of the head are represented by vectors originating at the oblong object. It is also noted that the angles illustrated in FIGS. 7A-8B are not to scale.
  • Consider, for example, a head 182 initially pitched at +5° that is flicked downward by 10° about the pitch axis 184. If the reference frame in which the angles are measured does not change with the movement of the head 182, the final 10° movement will not trigger any action by the content controller 170 (based on the example thresholds discussed above), since the final angular position in the reference frame is -5° (FIG. 7B). However, if the reference frame in which the angles are measured changes with the movement of the head 182, the final 10° movement will trigger an action by the content controller 170 (based on the example thresholds discussed above), since the final angular position in the reference frame is -10° as a result of the change in the pitch axis 184 (FIG. 8B).
  • In some embodiments, the angular position of the head 182 about one of the axes of rotation is determined based on a fixed baseline angular position.
  • A fixed baseline angular position corresponds to scenarios where the reference frame in which the angles are measured does not change with the movement of the head 182 (e.g., FIGS. 7A and 7B).
  • In other embodiments, the angular position of the head 182 about one of the axes of rotation is determined based on the change in angular position between the angular position of the head 182 prior to movement and the angular position of the head 182 after the movement has completed.
  • This corresponds to scenarios where the reference frame in which the angles are measured changes with the movement of the head 182 (e.g., FIGS. 8A and 8B).
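The two conventions reduce to a one-line difference. The sketch below reproduces the FIGS. 7B/8B scenario under the assumption, implied by the -5° and -10° figures above, of a 10° downward movement from a +5° starting pitch; the function names are illustrative.

```python
def angle_static(final_deg: float) -> float:
    """Static reference frame (FIGS. 7A-7B): the zero baseline is fixed,
    so the final absolute angle is used directly."""
    return final_deg

def angle_dynamic(start_deg: float, final_deg: float) -> float:
    """Dynamic reference frame (FIGS. 8A-8B): the baseline moves with the
    head, so only the change produced by the action counts."""
    return final_deg - start_deg

# Head at +5 deg pitch, flicked downward by 10 deg:
assert angle_static(-5.0) == -5.0          # no trigger at a -10 deg threshold
assert angle_dynamic(5.0, -5.0) == -10.0   # triggers at a -10 deg threshold
```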
  • Implementation of the method and/or system of embodiments of the invention can involve performing or completing selected tasks manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of embodiments of the method and/or system of the invention, several selected tasks could be implemented by hardware, by software or by firmware or by a combination thereof using an operating system, such as the OS of the processing subsystem 120. As will be understood with reference to the paragraphs and the referenced drawings, provided above, various embodiments of computer-implemented methods are provided herein, some of which can be performed by various embodiments of apparatuses and systems described herein and some of which can be performed according to instructions stored in non-transitory computer-readable storage media described herein.
  • Each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
  • It should also be noted that the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.

Abstract

A system interfaces with an electronic device having a display element. A head-mounted display holder positions the display element of the electronic device in front of a user. A sensor arrangement functionally associated with the head-mounted display holder detects head-performed actions of the user. A processing subsystem determines, based on data derived from a detected head-performed action, an angular position of the head of the user and temporal information associated with the detected head-performed action. The processing subsystem provides the determined angular position and temporal information to a content controller linked to the electronic device to enable the content controller to take at least one action to affect content communicated by the electronic device if the angular position and temporal information simultaneously satisfy respective threshold criteria. In some embodiments, the processing subsystem checks if the angular position and temporal information satisfy respective threshold criteria and accordingly actuates the content controller.

Description

APPLICATION FOR PATENT
TITLE
Systems and Methods for Electronic Device Interface
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority from UK Provisional Patent Application No. 1702939.8, filed February 23, 2017, whose disclosure is incorporated by reference in its entirety herein.
TECHNICAL FIELD
The present invention relates to electronic device interfaces and controls.
BACKGROUND OF THE INVENTION
Head-mounted electronic display devices, such as virtual reality (VR) headsets, or other electronic devices mountable on head-mounted display holders, provide users with comfortable hands-free viewing. Typically, such head-mounted displays utilize screen touch triggers or magnetic triggers as mechanisms for providing the user with input control. However, such types of triggers require hand use in order to perform triggering actions, thereby depriving the user of a completely hands-free viewing experience.
SUMMARY OF THE INVENTION
The present invention is directed to systems and methods to interface with electronic devices, for example, head-mounted electronic display devices.
Embodiments of the present invention are directed to a system for interfacing with an electronic device having a display element. The system comprises: a head-mounted display holder for positioning the display element of the electronic device in front of a user; a sensor arrangement functionally associated with the head-mounted display holder, the sensor arrangement detecting head-performed actions of the user; and a processing subsystem including at least one processor coupled to the sensor arrangement. The processing subsystem is configured to: determine, based on received data derived from a detected head-performed action, an angular position of the head of the user about at least one axis of rotation and temporal information associated with the detected head-performed action, and provide the determined angular position and temporal information to a content controller linked to the electronic device to enable the content controller to take at least one action to affect content communicated by the electronic device if the angular position and the temporal information simultaneously satisfy respective threshold criteria.
Optionally, the sensor arrangement includes an accelerometer.
Optionally, the sensor arrangement is carried by the electronic device.
Optionally, the communicated content is web content.
Optionally, the communicated content is image content.
Optionally, the communicated content is video content.
Optionally, the communicated content is audio content.
Optionally, the at least one axis of rotation includes a yaw axis.
Optionally, the at least one axis of rotation includes a pitch axis.
Optionally, the content controller is remotely located from the system and is in networked communication with the processing subsystem.
Optionally, the content controller is linked to a website that communicates the content.
Optionally, the content controller is linked to an application that communicates the content.
Optionally, the electronic device is a smartphone or a tablet.
Optionally, the respective threshold criteria are based on pre-determined values.
Embodiments of the present invention are directed to a method for interfacing with an electronic device having a display element. The method comprises: detecting a head-performed action of a user via at least one sensor functionally associated with a head-mounted display holder positioning the display element of the electronic device in front of a user; determining, based on received data derived from the detected head-performed action, an angular position of the head of the user about at least one axis of rotation and temporal information associated with the detected head-performed action; sending the determined angular position and temporal information to a content controller linked to the electronic device; checking if the angular position and the temporal information simultaneously satisfy respective threshold criteria; and taking at least one action to affect content communicated by the electronic device if the angular position and the temporal information simultaneously satisfy respective threshold criteria. Optionally, the temporal information includes a speed of the detected head-performed action.
Optionally, the checking includes: determining if the absolute value of the angular position is greater than a threshold angular value, and determining if the speed of the detected head-performed action is greater than a threshold speed value.
Optionally, the content controller is remotely located from the electronic device, and the sending includes: transmitting, over a network linking the content controller and the electronic device, at least one data packet that includes the determined angular position and temporal information.
Embodiments of the present invention are directed to a method for interfacing with an electronic device having a display element. The method comprises: detecting a head-performed action of a user via at least one sensor functionally associated with a head-mounted display holder positioning the display element of the electronic device in front of a user; determining, based on received data derived from the detected head-performed action, an angular position of the head of the user about at least one axis of rotation and temporal information associated with the detected head-performed action; checking if the angular position and the temporal information simultaneously satisfy respective threshold criteria; and actuating a content controller linked to the electronic device to take at least one action to affect content communicated by the electronic device if the angular position and the temporal information simultaneously satisfy respective threshold criteria.
Optionally, the content controller is remotely located from the electronic device, and the actuating includes: transmitting, over a network, a command to the content controller to take the at least one action.
Embodiments of the present invention are directed to a system for interfacing with an electronic device having a display element. The system comprises: a head-mounted display holder for positioning the display element of the electronic device in front of a user; a sensor arrangement functionally associated with the head-mounted display holder, the sensor arrangement detecting head-performed actions of the user; and a processing subsystem including at least one processor coupled to the sensor arrangement. The processing subsystem is configured to: determine, based on received data derived from a detected head-performed action, an angular position of the head of the user about at least one axis of rotation and temporal information associated with the detected head-performed action, and actuate a content controller linked to the electronic device to take at least one action to affect content communicated by the electronic device if the angular position and the temporal information simultaneously satisfy respective threshold criteria.
Unless otherwise defined herein, all technical and/or scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the invention pertains. Although methods and materials similar or equivalent to those described herein may be used in the practice or testing of embodiments of the invention, exemplary methods and/or materials are described below. In case of conflict, the patent specification, including definitions, will control.
In addition, the materials, methods, and examples are illustrative only and are not intended to be necessarily limiting.
BRIEF DESCRIPTION OF THE DRAWINGS
Some embodiments of the present invention are herein described, by way of example only, with reference to the accompanying drawings. With specific reference to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of embodiments of the invention.
In this regard, the description taken with the drawings makes apparent to those skilled in the art how embodiments of the invention may be practiced.
Attention is now directed to the drawings, where like reference numerals or characters indicate corresponding or like components. In the drawings:
FIG. 1 is a schematic representation of an example environment in which embodiments of the invention can be performed;
FIG. 2 is a block diagram of a system that includes a sensor arrangement and a processing subsystem for providing interface and control functionality between a user and an electronic device, based on head-performed actions by the user, according to an embodiment of the invention;
FIG. 3 is a diagram of an illustrative example environment in which embodiments of the invention can be performed;
FIG. 4 is a diagram of the architecture of an exemplary processing subsystem through which embodiments of the present invention can be performed;
FIGS. 5 and 6 are flow diagrams illustrating processes to interface with, and control, an electronic device, according to embodiments of the invention; FIGS. 7A and 7B are schematic representations of head-performed actions in a static reference frame; and
FIGS. 8A and 8B are schematic representations of head-performed actions in a dynamic reference frame.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
The present invention is directed to systems and methods to interface with electronic devices, for example, a head-mounted electronic display device displaying content from a content source (e.g., a remote server). The head-mounted electronic display device is mounted to the user via a head-mounted display holder. A sensor arrangement, for example one or more accelerometers embedded in the electronic display device or externally coupled to the head-mounted display holder, detects head-performed actions by the user and provides data derived from such actions to a processing subsystem. The processing subsystem determines the angular position of the head of the user, about one or more rotational axes of the user's head (e.g., pitch, yaw, etc.), resultant from the detected head-performed action, as well as the speed of the detected head-performed action. The angular position and speed are evaluated against threshold criteria, and a content controller linked to the electronic device takes an action to affect the content in response to the detected head-performed action if the threshold criteria are satisfied.
The present invention is applicable to different types of presentations of digital content on head-mounted displays. Such presentations include, but are not limited to, websites, video games, and virtual reality (VR) systems.
The principles and operation of the systems and methods according to the present invention may be better understood with reference to the drawings accompanying the description.
Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not necessarily limited in its application to the details of construction and the arrangement of the components and/or methods set forth in the following description and/or illustrated in the drawings and/or the examples. The invention is capable of other embodiments or of being practiced or carried out in various ways.

Initially, throughout this document, references are made to directions such as, for example, up and down, left and right, and the like. These directional references are exemplary only to illustrate the invention and embodiments thereof.

Referring now to the drawings, FIG. 1 shows a schematic representation of an example environment in which embodiments of the present disclosure can be performed when used by a user 180. A head-mounted display holder 134 is worn on the head 182 of the user 180, and is affixed to the head 182 via a head attachment mechanism 136. In certain embodiments, the attachment mechanism 136 is implemented as one or more adjustable straps which engage the back and top portions of the head 182. In certain embodiments, the head-mounted display holder 134 further includes a mounting mechanism 138 for attaching an electronic device 130 to the head-mounted display holder 134. The mounting mechanism 138 may be implemented as an arrangement of latches or clasps which provide a mechanism for removably attaching the electronic device 130 to the head-mounted display holder 134. In other embodiments, the electronic device 130 and the head-mounted display holder 134 are integrated to form a single unit, such as in the case of many VR systems. The head-mounted display holder 134 functions to position a display 132 of the electronic device 130 in front of the user 180, and more specifically in front of the eyes of the user 180. In FIG. 1, the display 132, as seen by the user 180, is illustrated as a projection away from the electronic device 130 for clarity.
The electronic device 130 may be any device which can communicate digital content, in the form of visual or audio display, to the user 180. Such types of digital content include, but are not limited to, image content, video content, web content from a website, audio content, and VR content. According to certain embodiments, the electronic device 130 is implemented as a mobile communication device which can be readily transported from one location to another. Such mobile communication devices include, but are not limited to, tablet computing devices (e.g., iPad from Apple of Cupertino, CA), smartphones (e.g., iPhone from Apple of Cupertino, CA), and VR headsets (e.g., Oculus Rift from Facebook of Menlo Park, CA). Note that in implementations in which the electronic device 130 is implemented as a VR headset, the electronic device 130 and the head-mounted display holder 134 are integrated in a single unit.
One or more sensors 112, depicted in FIG. 1 as a single sensor for clarity, are functionally associated with the head-mounted display holder 134. The sensors 112 function to detect head-performed actions of the user 180. The functional association with the head-mounted display holder 134 may be accomplished, for example, by attaching the sensors 112 to an external portion of the electronic device 130, leveraging one or more sensors of the electronic device 130, or attaching the sensors 112 to a portion of the head-mounted display holder 134, as exemplarily illustrated by the attachment of the sensors 112 to the mounting mechanism 138 in FIG. 1.
With continued reference to FIG. 1, refer now to FIG. 2, which illustrates a block diagram of a system, generally designated 100, for interfacing with the electronic device 130, according to an embodiment of the present disclosure. The system 100 includes a sensor arrangement 110 that includes the sensors 112, and a processing subsystem 120 for processing data derived from the detected head-performed actions and providing commands based on the processed data. The major components of the system 100 are attached, in some form, to the electronic device 130 and/or the head-mounted display holder 134, and therefore are positioned locally with the electronic device 130 and the user 180.
It is generally noted that many current electronic devices include various sensors and processors as part of the device. For example, tablet computing devices and smartphones typically include accelerometers for detecting various positional and temporal related attributes of the device. According to certain embodiments, the system 100 may leverage such sensors and/or processors of the electronic device 130 to perform the interface functionality of the present disclosure. For example, in implementations in which the electronic device 130 is implemented as a tablet computing device or a smartphone, the system 100 may utilize the accelerometers (i.e., the sensors 112) of the electronic device 130 to detect head-performed actions of the user 180.
The head-performed actions generally include movement of the head 182 about one or more rotational axes about which the human head is able to move, illustrated in FIG. 1 as a pitch axis 184, a yaw axis 186, and a roll axis 188. It is noted that the range of motion of the head 182 about the pitch axis 184 and yaw axis 186 is typically greater than the range of motion about the roll axis 188. Moreover, head movement about the pitch axis 184 and yaw axis 186 is typically more easily performed than head movement about the roll axis 188. Nevertheless, the head-performed actions performed by the user 180 may include movement, to some degree, about the roll axis 188. The head-performed actions detected and processed by the system 100 are preferably fast-twitch actions, in which the head 182 moves about one or more of the axes 184, 186, 188 from an initial resting position to a final resting position in a relatively short time span, for example, on the order of a few seconds.
The sensors 112 provide data-bearing electrical signals to the processing subsystem 120. The data received by the processing subsystem 120 includes information indicative of the angular position of the head 182 as a result of the head-performed actions, as well as temporal information that indicates the amount of time required to perform the detected head-performed action or the speed with which a detected head-performed action has been performed. The angular position of the head 182 refers to the rotation of the head 182 about one or more of the axes 184, 186, 188.
In embodiments in which the sensors 112 are implemented as accelerometers, the accelerometric data captured by the accelerometer is converted into an angular position indicative of the rotation of the head 182 about one or more of the axes 184, 186, 188. The angular position may be derived from the accelerometric data by converting the acceleration detected by the accelerometer into position, for example via double integration of the acceleration with respect to time, and calculating the change in angle, about the axis of rotation of the head 182, caused by the change from the initial position of the head 182 to the final position of the head. The accelerometric data may also include timing information associated with the initial and final position of the head 182. For example, the time to complete the head-performed action can be derived or extracted from the accelerometric data. Alternatively, the speed or velocity with which the head-performed action was performed can be derived from the accelerometric data by converting the acceleration detected by the accelerometer into speed or velocity, for example via integration of the acceleration with respect to time.
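By way of a non-authoritative illustration only, the conversion just described might be sketched in Python as follows. The sampling rate, the sensor-to-axis radius, and the function name are assumptions of the sketch and not features of the disclosed system; a practical implementation would also need accelerometer drift compensation, which is omitted here.

    import numpy as np

    SAMPLE_RATE_HZ = 100.0  # assumed accelerometer sampling rate
    HEAD_RADIUS_M = 0.09    # assumed sensor distance from the axis of rotation

    def angle_and_speed(accel_samples):
        """Convert tangential acceleration samples (m/s^2) into the angular
        position (degrees) reached by the head-performed action and the peak
        angular speed (degrees per second) of the action."""
        dt = 1.0 / SAMPLE_RATE_HZ
        a = np.asarray(accel_samples, dtype=float)
        velocity = np.cumsum(a) * dt             # first integration: acceleration -> velocity
        displacement = np.cumsum(velocity) * dt  # second integration: velocity -> arc length
        # For rotation at radius r, arc length s = r * theta, so theta = s / r.
        angle_deg = np.degrees(displacement[-1] / HEAD_RADIUS_M)
        speed_deg_s = np.degrees(np.max(np.abs(velocity)) / HEAD_RADIUS_M)
        return angle_deg, speed_deg_s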
With continued reference to FIGS. 1 and 2, refer now to FIG. 3, which shows an illustrative example environment in which embodiments of the present disclosure can be performed over a network 200. The network 200 may be formed of one or more networks, including, for example, the Internet, cellular networks, wide area, public, and local networks. The electronic device 130 receives digital content from a content provider for distribution to the user 180 as a visual and/or audio display. The content provider may be, for example, a website 150 which provides web content, hosted by a server 140, to the electronic device 130. The web content can include image content, video content, graphical content, text, or any other content typically accessed through websites. The content provider may also be an application 160, hosted by the server 140, which provides various forms of digital content to the electronic device 130. The content provided by the application 160 may include, for example, video content, image content, streaming content, video game content, and audio content.
Both the website 150 and the application 160 have a respective content controller 170 linked thereto. The content controller 170 performs functions to change the content communicated to the user 180 in response to user input. As an example, if the digital content is web content which is displayed by the display 132 of the electronic device 130 as a website with a pop-up advertisement, the user 180 may provide input to actuate the content controller 170 to close the pop-up. As another example, if the digital content is provided via a website 150 or an application 160 having an options menu, the user 180 may provide input to actuate the content controller 170 to open the menu and select one or more options listed in the menu. As will be discussed in greater detail, the user input is provided by the user 180 through head-performed actions, which are detected and processed by the system 100 in order to provide appropriate information or commands to the content controller 170.
A non-exhaustive list of actions performed by the content controller 170 in response to detected head-performed actions includes: menu opening, menu item selection, menu closing, display page (e.g., webpage) scrolling, webpage hyperlink selection, web browser window opening, web pop-up window closing, video playback control (e.g., channel change, skip ahead, skip backward, pause, resume, etc.), and audio control (e.g., volume up, volume down, mute, unmute).
The content controller 170 is remotely located from the electronic device 130, and may be accessible through the network 200. The content controller 170 may be implemented as one or more processors coupled to a storage medium that includes machine executable instructions for execution by the processor(s). Such processors and storage media may be implemented, for example, at the server 140 or remotely from the server 140.
In an exemplary series of processes to interface with the electronic device 130 and control the visual and/or audio output of the electronic device 130, the sensor arrangement 110, and more particularly the sensors 112 (e.g., accelerometers), detect head-performed actions of the user 180. In response to detecting such an action, the processing subsystem 120 determines the angular position of the head 182 about one or more of the axes 184, 186, 188, as well as the temporal information (e.g., speed) associated with the detected action.
According to certain embodiments, the processing subsystem 120 then sends the determined angular position and temporal information, for example as a transmission of one or more data packets over the network 200, to the content controller 170. The content controller 170 monitors for receipt of such packets, either continuously, periodically or intermittently, in order to process the information in the packets.
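The disclosure does not specify a wire format for these packets, so the following is a purely hypothetical JSON-over-UDP sketch of such a transmission; the endpoint address and field names are invented for illustration.

    import json
    import socket

    CONTROLLER_ADDR = ("controller.example.com", 9500)  # hypothetical endpoint

    def send_measurement(axis, angle_deg, speed_deg_s):
        """Transmit the determined angular position and temporal information
        to the content controller as a single datagram."""
        packet = json.dumps({
            "axis": axis,                # "pitch", "yaw", or "roll"
            "angle_deg": angle_deg,      # determined angular position
            "speed_deg_s": speed_deg_s,  # temporal information (speed)
        }).encode("utf-8")
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.sendto(packet, CONTROLLER_ADDR)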
Upon receipt of the angular position and temporal information, as one or more data packets, the content controller 170 evaluates the angular position and the temporal information against respective threshold criteria in order to determine whether or not to take action to change the content communicated to the user 180 by the electronic device 130. If the angular position and the temporal information simultaneously satisfy their respective threshold criteria, the content controller 170 takes at least one action to modify the content communicated to the user 180 by the electronic device 130. For example, if the content is web content that includes a pop-up advertisement displayed by the display 132, and both the angular position and the temporal information simultaneously satisfy their respective threshold criteria, the content controller 170 may close the pop-up advertisement.
In other embodiments, the processing subsystem 120 evaluates the determined angular position and temporal information against the respective threshold criteria. In such embodiments the processing subsystem 120 may send an actuation command via one or more data packets to the content controller 170, over the network 200, to take at least one action to modify the content communicated to the user 180 by the electronic device 130 in response to the angular position and temporal information simultaneously satisfying their respective threshold criteria.
The following paragraphs describe non-limiting examples of the threshold criteria and actions taken in response to satisfying the threshold criteria. The examples below are provided within the context of the electronic device 130 displaying content from a website or from an application. As should be understood, the following examples are for illustration purposes only, and many other threshold criteria and actions responsive to such threshold criteria are possible.
Example 1: In one example of threshold criteria, movement of the head 182 upward about the pitch axis 184 to an angle greater than 10° in a span of less than 1 second (i.e., a speed greater than 10° per second) causes the content controller 170 to take an action to open the upper menu of the webpage or the application.
Example 2: In another example of threshold criteria, movement of the head 182 downward about the pitch axis 184 to an angle less than -10° in a span of less than 1 second (i.e., a speed greater than 10° per second) causes the content controller 170 to take an action to close the upper menu of the webpage or the application.
Example 3: In another example of threshold criteria, movement of the head 182 upward about the pitch axis 184 to an angle greater than 20° in a span of less than 2 seconds (i.e., a speed greater than 10° per second) causes the content controller 170 to take an action to scroll up to display the content at the top of the webpage or the application.
Example 4: In another example of threshold criteria, movement of the head 182 to the right about the yaw axis 186 to an angle greater than 20° in a span of less than 1 second (i.e., a speed greater than 20° per second) causes the content controller 170 to take an action to close an open pop-up.
According to certain embodiments, the evaluation against the angular threshold may be performed by comparing the absolute value of the determined angular position to an angular threshold value. In such embodiments, the angular threshold criterion is satisfied if the absolute value of the determined angular position is greater than the angular threshold value. In the scenario of Example 2 above, such an evaluation entails checking whether the absolute value of the determined angular position is greater than 10°. Preferably, in such embodiments, the temporal threshold value comparison (i.e., time span or speed of the head-performed action) when taken together with the angular position threshold comparison provides an unambiguous output as to which action, if any, should be taken by the content controller 170.
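To make Examples 1-4 concrete, the evaluation might be sketched as a table-driven check in Python, as follows. The axis labels, action identifiers, and the largest-angle tie-breaking rule are assumptions introduced for illustration; the tie-breaker is merely one way to obtain the unambiguous output described above.

    # Each rule: (axis, signed angle threshold in degrees, max duration in seconds, action).
    THRESHOLDS = [
        ("pitch", +10.0, 1.0, "open_upper_menu"),   # Example 1
        ("pitch", -10.0, 1.0, "close_upper_menu"),  # Example 2
        ("pitch", +20.0, 2.0, "scroll_to_top"),     # Example 3
        ("yaw",   +20.0, 1.0, "close_popup"),       # Example 4
    ]

    def select_action(axis, angle_deg, duration_s):
        """Return the action whose angular and temporal criteria are both
        satisfied by the detected head-performed action, or None."""
        best = None
        for t_axis, t_angle, t_max_dur, action in THRESHOLDS:
            same_axis = axis == t_axis
            same_direction = (angle_deg >= 0) == (t_angle >= 0)
            angle_ok = abs(angle_deg) > abs(t_angle)  # absolute-value comparison
            time_ok = duration_s < t_max_dur          # temporal criterion
            if same_axis and same_direction and angle_ok and time_ok:
                # When several rules match (e.g., Examples 1 and 3), prefer the
                # most specific (largest-angle) rule for an unambiguous output.
                if best is None or abs(t_angle) > abs(best[0]):
                    best = (t_angle, action)
        return best[1] if best else None

Under this sketch, an upward pitch movement of 25° completed in 0.9 seconds satisfies the criteria of both Example 1 and Example 3, and resolves unambiguously to the scroll action.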
It is noted herein that the angular position and temporal information may be evaluated against more than one respective threshold at a time to ensure that the action intended by the user 180 is taken by the content controller 170. It is further noted that the simultaneous evaluation of the angular position and the temporal information (e.g., speed) from the head-performed action prevents the system 100 and/or the content controller 170 from taking actions in response to false detections. For example, the user 180 may move the head 182 upward about the pitch axis 184 to an angular position of greater than 20° but in a span of 3 seconds or more. The simultaneous threshold evaluation prevents slow head movements from inducing a false detection by the system 100 and/or the content controller 170, thereby ensuring that selected fast-twitch head actions trigger the content change action.
To further prevent false detections, the system 100 may be configured to employ a reset time between detected head-performed actions. The reset time defines the minimal elapsed time period required after a detected head-performed action before the system 100 can detect another head-performed action or respond to another detected head-performed action. The reset time may be a configurable parameter set by the user 180, via a user input to the processing subsystem 120, and may be on the order of a few seconds, for example 2 seconds.
For example, consider the scenario in which the user 180 performs a head-performed action by moving the head 182 from a straight-ahead zero pitch angle position (i.e., initial resting position) to a head-up increased pitch angle position (i.e., final resting position) via rotation about the pitch axis 184. The movement to the final resting position triggers the detection of the head-performed action by the system 100, which causes the content controller 170 to take an action (e.g., open a menu as in Example 1 above). Shortly thereafter, for example, 10 milliseconds (ms) after settling at the final resting position, the user 180 moves the head 182 back down to the initial resting position. The movement back to the initial resting position may be a recoil action, which can be a voluntary or involuntary human action. In such a scenario, it is preferred that the movement back to the initial resting position, within a small elapsed time period, not trigger a detection of another head-performed action, or, if such an action is detected, that the detection not lead to any actions taken by the content controller 170. Accordingly, if the elapsed time (e.g., 10 ms) between consecutive actions is less than the reset time (e.g., 2 seconds), such unintended detections (by the system 100) and/or responsive actions (by the content controller 170) are avoided.
The reset time may be stored in a memory of the content controller 170 and/or in a memory of the processing subsystem 120. In addition, the threshold values and the actions to which the values correspond may be stored in a memory of the content controller 170 and/or in a memory of the processing subsystem 120, and may be programmably assigned and modified by the user 180, via a user input to the processing subsystem 120. In this way, the angular position and temporal information can be evaluated against pre-determined threshold values (e.g., threshold values set by the user 180). The user 180 may also create new actions corresponding to new threshold values. In this way, the system 100 provides the user 180 with the capability to personalize the threshold values according to user preferences.
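One way to enforce such a reset time, continuing the same illustrative sketch, is given below; the class name and the use of a monotonic clock are implementation choices of the sketch, not part of the disclosure.

    import time

    class ActionGate:
        """Suppresses detections that arrive within the reset time of the
        previously accepted detection, e.g. a 10 ms recoil movement."""

        def __init__(self, reset_time_s=2.0):  # 2 s is the example value above
            self.reset_time_s = reset_time_s
            self._last_accepted = float("-inf")

        def accept(self, now=None):
            """Return True if a new detection may be acted upon, starting a new
            reset window; return False if it falls inside the current window."""
            now = time.monotonic() if now is None else now
            if now - self._last_accepted < self.reset_time_s:
                return False
            self._last_accepted = now
            return True

With this gate, the menu-opening movement in the scenario above is accepted, while the recoil 10 ms later falls well inside the 2-second window and is ignored.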
Reference is now made to FIG. 4, which shows a diagram of an example architecture of the processing subsystem 120. The processing subsystem 120 includes a central processing unit (CPU) 402 that is formed of one or more processors 404 for performing various functions, including some or all of the processes and sub-processes shown and described in the flow diagrams of FIGS. 5 and 6. The processors, which can include microprocessors, are, for example, conventional processors, such as those used in servers, computers, and other computerized devices. For example, the processors may include x86 processors from AMD and Xeon® and Pentium® processors from Intel, as well as any combinations thereof.
The processing subsystem 120 further includes four exemplary memory devices: a random-access memory (RAM) 406, a boot read-only memory (ROM) 408, a mass storage device (i.e., a hard disk) 410, and a flash memory 412. As is known in the art, processing and memory can include any computer readable medium storing software and/or firmware and/or any hardware element(s) including but not limited to field programmable logic array (FPLA) element(s), hard-wired logic element(s), field programmable gate array (FPGA) element(s), and application-specific integrated circuit (ASIC) element(s). Any instruction set architecture may be used in the CPU 402 including but not limited to reduced instruction set computer (RISC) architecture and/or complex instruction set computer (CISC) architecture. A module (i.e., a processing module) 416 is shown on the mass storage device 410, but as will be obvious to one skilled in the art, could be located on any of the memory devices.
The mass storage device 410 is a non-limiting example of a non-transitory computer-readable storage medium bearing computer-readable code for implementing the electronic device interface methodology described herein. The non-transitory computer readable (storage) medium may be a computer readable signal medium or a computer readable storage medium. Other examples of a computer readable storage medium include, but are not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a RAM, a ROM, an erasable programmable ROM (EPROM or Flash memory), an optical fiber, a portable compact disc ROM (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
The processing subsystem 120 may have an operating system (OS) stored on one or more of the memory devices. The OS may include any conventional computer operating system, such as the Windows® OS available from Microsoft of Redmond, Washington (for example, Windows® XP, Windows® 7, Windows® 8, and Windows® 10), Mac OS from Apple of Cupertino, CA, or Linux.
The ROM 408 may include boot code for the processing subsystem 120, and the CPU 402 may be configured for executing the boot code to load the OS into the RAM 406, and for executing the OS to copy computer-readable code into the RAM 406 and execute the code. A network connection 420 provides communications to and from the processing subsystem 120 over a network, such as, for example, the network 200. For example, the data packets which include the angular position and temporal information (in some embodiments) or the actuation commands (in other embodiments) may be transmitted to the network 200 for receipt by the content controller 170. Typically, a single network connection provides one or more links, including virtual connections, to other devices on local and/or remote networks. Alternatively, the processing subsystem 120 can include more than one network connection (not shown), each network connection providing one or more links to other devices and/or networks.
All of the components of the processing subsystem 120 are connected to each other (electronically and/or data), either directly or indirectly, through one or more connections, exemplified in FIG. 4 as a communication bus 414.
Attention is now directed to FIG. 5 which shows a flow diagram detailing a computer-implemented process 500 in accordance with embodiments of the disclosed subject matter. The computer-implemented process includes steps for interfacing with the electronic device 130 and controlling the visual or audio output of the electronic device 130. Reference is also made to the elements shown in FIGS. 1-4. The process and sub-processes of FIG. 5 are computerized processes performed by the system 100 and related components, such as the content controller 170.
The process 500 begins at block 502 where the sensor arrangement 110, and more particularly the sensors 112 (e.g., accelerometers), collects measurement data to detect head-performed actions of the user 180. The process 500 then moves to block 504, where the processing subsystem 120 receives the collected measurement data and determines the angular position of the head 182 about one or more of the axes 184, 186, 188 and the temporal information (e.g., speed) associated with the detected action. The angular position and the temporal information are derived from the collected measurement data.
The process 500 then moves to block 506, where the processing subsystem 120 transmits the derived angular position and temporal information to the content controller 170 over the network 200 as one or more data packets.
The process 500 then moves to block 508, where the content controller 170, which continuously, periodically or intermittently monitors for receipt of such packets, receives the data packet(s). The content controller 170 extracts the relevant angular position and temporal information data from the data packet(s), and evaluates the angular position against one or more angular thresholds, and evaluates the temporal information against one or more temporal thresholds, in order to determine whether or not to take action to change the content communicated to the user 180 by the electronic device 130 in response to the detected head-performed action. If the angular position and temporal information simultaneously satisfy their respective threshold criteria (e.g., angular position is greater than a threshold angular position, and head-performed action speed is greater than a threshold speed), the process 500 moves to block 510, where the content controller 170 changes the visual (or audio) display of the content communicated (e.g., displayed) by the electronic device 130 according to the action associated with the satisfied thresholds.
If either or both of the angular position and temporal information fail to satisfy their respective threshold criteria, the process 500 returns to block 502 to collect additional sensor measurement data to detect a new head-performed action. It is noted that subsequent to the execution of actions described in block 510, the process 500 may also return to block 502 to detect the next head-performed action.
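Pulling blocks 502-506 together, the sensor-side portion of the process 500 might look like the following sketch, reusing the hypothetical angle_and_speed and send_measurement helpers from the earlier sketches; collect_samples is a placeholder for the hardware-specific sensor read of block 502.

    def collect_samples():
        """Placeholder for block 502: read one window of accelerometer
        samples from the sensors 112 (hardware access is outside this sketch)."""
        raise NotImplementedError

    def run_process_500():
        """Blocks 502-506: measure, derive, and transmit; the threshold
        evaluation (block 508) is performed by the content controller 170."""
        while True:
            samples = collect_samples()                        # block 502
            angle_deg, speed_deg_s = angle_and_speed(samples)  # block 504
            send_measurement("pitch", angle_deg, speed_deg_s)  # block 506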
As mentioned above, the threshold comparison is performed by the content controller 170 in some embodiments, while in other embodiments the threshold comparison is performed by the processing subsystem 120. The process 500 as described above corresponds to embodiments in which the threshold comparison is performed by the content controller 170.
Attention is now directed to FIG. 6 which shows a flow diagram detailing a computer-implemented process 600 in accordance with embodiments of the disclosed subject matter, in particular, embodiments in which the threshold comparison is performed by the processing subsystem 120. Note that blocks 602, 604 and 610 of the process 600 correspond to blocks 502, 504 and 510 of the process 500, and should be understood by analogy thereto. Therefore, the details of blocks 602, 604 and 610 will not be repeated here.
With attention directed to block 604, the process 600 moves from block 604 to block 606, where the processing subsystem 120 evaluates the angular position (determined in block 604) against one or more angular thresholds, and evaluates the temporal information (determined in block 604) against one or more temporal thresholds, in order to determine whether or not to take action. If either or both of the angular position and temporal information fail to satisfy their respective threshold criteria, the process 600 returns to block 602 to collect additional sensor measurement data to detect a new head-performed action. If the angular position and temporal information simultaneously satisfy their respective threshold criteria (e.g., angular position is greater than a threshold angular position, and head-performed action speed is greater than a threshold speed), the process 600 moves to block 608, where the processing subsystem 120 sends an actuation command to the content controller 170 to change the visual (or audio) display of the content communicated (i.e., displayed) by the electronic device 130 in a manner corresponding to the satisfied thresholds.
The process 600 then moves to block 610 where the actuation command, which may be sent as a data packet or packets over the network 200, is received by the content controller 170, and the content controller 170 performs one or more actions according to the actuation command. Specifically, the content controller 170 changes the visual (or audio) display of the content communicated (i.e., displayed) by the electronic device 130 in a manner corresponding to the satisfied thresholds.
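For contrast with the process 500, the locally evaluating variant might be sketched as follows, again reusing the hypothetical angle_and_speed, select_action, SAMPLE_RATE_HZ, and ActionGate sketches above; send_actuation_command is an invented stand-in for the command transmission of block 608.

    import json
    import socket

    CONTROLLER_ADDR = ("controller.example.com", 9500)  # hypothetical endpoint, as above

    def send_actuation_command(action):
        """Hypothetical stand-in for block 608: transmit the actuation command
        to the content controller 170 over the network 200."""
        packet = json.dumps({"command": action}).encode("utf-8")
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.sendto(packet, CONTROLLER_ADDR)

    def run_process_600_step(axis, samples, gate):
        """Blocks 604-608: derive, evaluate locally, and command the controller."""
        angle_deg, speed_deg_s = angle_and_speed(samples)    # block 604
        duration_s = len(samples) / SAMPLE_RATE_HZ           # crude duration estimate
        action = select_action(axis, angle_deg, duration_s)  # block 606
        if action is not None and gate.accept():             # respect the reset time
            send_actuation_command(action)                   # block 608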
As discussed in detail above, the angular position of the head 182 about each of the axes 184, 186, 188 is determined relative to the zero pitch, zero yaw, and zero roll angles. It is noted, however, that these zero angles may be static, for example if the reference frame in which the angles are measured is static and does not change with the movement of the head 182. Alternatively, these zero angles may be dynamic, for example if the reference frame in which the angles are measured is dynamic and changes with the movement of the head 182.
With reference to FIGS. 7A-8B, examples are illustrated in which the reference frame in which the angles are measured does not change with the movement of the head 182, and in which the reference frame in which the angles are measured changes with the movement of the head 182. Specifically, FIGS. 7A and 7B illustrate examples of head-performed actions in a static reference frame, and FIGS. 8A and 8B illustrate examples of head-performed actions in a dynamic reference frame. In FIGS. 7A-8B, the head 182 is represented schematically as an oblong object, and the rotational positions of the head are represented by vectors originating at the oblong object. It is also noted that the angles illustrated in FIGS. 7A-8B are not to scale. Referring first to FIGS. 7A and 8A, consider an example in which the user 180 performs a head-performed action by moving the head 182 from a straight-ahead position (i.e., initial resting position) to a head-up position (i.e., final resting position) via upward rotation about the pitch axis 184 to an angle of 5°. Clearly, this movement is not enough to trigger any actions by the content controller 170 based on the example thresholds discussed above. After moving to the final resting position, and after allowing an elapsed time greater than the reset time, the user 180 performs a head-performed action by moving the head 182 downward 10° about the pitch axis 184 in a span of less than 1 second.
If the reference frame in which the angles are measured does not change with the movement of the head 182, the final 10° movement will not trigger any action by the content controller 170 (based on the example thresholds discussed above) since the final angular position in the reference frame is -5° (FIG. 7B). However, if the reference frame in which the angles are measured changes with the movement of the head 182, the final 10° movement will trigger an action by the content controller 170 (based on the example thresholds discussed above), since the final angular position in the reference frame is -10° as a result of the change in the pitch axis 184 (FIG. 8B).
Therefore, according to certain embodiments, the angular position of the head 182 about one of the axes of rotation is determined based on a fixed baseline angular position. Such embodiments correspond to scenarios where the reference frame in which the angles are measured does not change with the movement of the head 182 (e.g., FIGS. 7A and 7B). In other embodiments, the angular position of the head 182 about one of the axes of rotation is determined based on the change in angular position between the angular position of the head 182 prior to movement and the angular position of the head after the movement has completed. Such embodiments correspond to scenarios where the reference frame in which the angles are measured changes with the movement of the head 182 (e.g., FIGS. 8A and 8B).
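The two conventions can be stated compactly in code. The following is a minimal sketch of the FIG. 7A/8A scenario (a +5° movement followed, after the reset time, by a -10° movement), with function names invented for illustration.

    def reported_angle_static(move_deltas_deg):
        """Fixed baseline (FIGS. 7A-7B): angles accumulate against a static
        zero, so +5 deg followed by -10 deg reports -5 deg."""
        return sum(move_deltas_deg)

    def reported_angle_dynamic(move_deltas_deg):
        """Dynamic baseline (FIGS. 8A-8B): the frame re-zeroes after each
        movement, so only the latest movement's change is reported."""
        return move_deltas_deg[-1] if move_deltas_deg else 0.0

    moves = [+5.0, -10.0]
    assert reported_angle_static(moves) == -5.0    # FIG. 7B: no action triggered
    assert reported_angle_dynamic(moves) == -10.0  # FIG. 8B: action triggered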
Implementation of the method and/or system of embodiments of the invention can involve performing or completing selected tasks manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of embodiments of the method and/or system of the invention, several selected tasks could be implemented by hardware, by software or by firmware or by a combination thereof using an operating system, such as the OS of the processing subsystem 120. As will be understood with reference to the paragraphs and the referenced drawings, provided above, various embodiments of computer-implemented methods are provided herein, some of which can be performed by various embodiments of apparatuses and systems described herein and some of which can be performed according to instructions stored in non-transitory computer-readable storage media described herein. Still, some embodiments of computer-implemented methods provided herein can be performed by other apparatuses or systems and can be performed according to instructions stored in computer-readable storage media other than that described herein, as will become apparent to those having skill in the art with reference to the embodiments described herein. Any reference to systems and computer-readable storage media with respect to the following computer-implemented methods is provided for explanatory purposes, and is not intended to limit any of such systems and any of such non-transitory computer-readable storage media with regard to embodiments of computer-implemented methods described above. Likewise, any reference to the following computer-implemented methods with respect to systems and computer-readable storage media is provided for explanatory purposes, and is not intended to limit any of such computer-implemented methods disclosed herein.
The flowchart and block diagrams in the drawings illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
As used herein, the singular form "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
The word "exemplary" is used herein to mean "serving as an example, instance or illustration". Any embodiment described as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments and/or to exclude the incorporation of features from other embodiments.
It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable subcombination or as suitable in any other described embodiment of the invention. Certain features described in the context of various embodiments are not to be considered essential features of those embodiments, unless the embodiment is inoperative without those elements.
The processes (methods) and systems, including components thereof, herein have been described with exemplary reference to specific hardware and software. The processes (methods) have been described as exemplary, whereby specific steps and their order can be omitted and/or changed by persons of ordinary skill in the art to reduce these embodiments to practice without undue experimentation. The processes (methods) and systems have been described in a manner sufficient to enable persons of ordinary skill in the art to readily adapt other hardware and software as may be needed to reduce any of the embodiments to practice without undue experimentation and using conventional techniques.
Although the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims.

Claims

WHAT IS CLAIMED IS:
1. A system for interfacing with an electronic device having a display element, comprising:
a head-mounted display holder for positioning the display element of the electronic device in front of a user;
a sensor arrangement functionally associated with the head-mounted display holder, the sensor arrangement detecting head-performed actions of the user; and
a processing subsystem including at least one processor coupled to the sensor arrangement, the processing subsystem configured to:
determine, based on received data derived from a detected head-performed action, an angular position of the head of the user about at least one axis of rotation and temporal information associated with the detected head-performed action, and
provide the determined angular position and temporal information to a content controller linked to the electronic device to enable the content controller to take at least one action to affect content communicated by the electronic device if the angular position and the temporal information simultaneously satisfy respective threshold criteria.
2. The system of claim 1, wherein the sensor arrangement includes an accelerometer.
3. The system of claim 1, wherein the sensor arrangement is carried by the electronic device.
4. The system of claim 1, wherein the communicated content is web content.
5. The system of claim 1, wherein the communicated content is image content.
6. The system of claim 1, wherein the communicated content is video content.
7. The system of claim 1, wherein the communicated content is audio content.
8. The system of claim 1, wherein the at least one axis of rotation includes a yaw axis.
9. The system of claim 1, wherein the at least one axis of rotation includes a pitch axis.
10. The system of claim 1, wherein the content controller is remotely located from the system and is in networked communication with the processing subsystem.
11. The system of claim 1, wherein the content controller is linked to a website that communicates the content.
12. The system of claim 1, wherein the content controller is linked to an application that communicates the content.
13. The system of claim 1, wherein the electronic device is a smartphone or a tablet.
14. The system of claim 1, wherein the respective threshold criteria are based on pre-determined values.
15. A method for interfacing with an electronic device having a display element, comprising:
detecting a head-performed action of a user via at least one sensor functionally associated with a head-mounted display holder positioning the display element of the electronic device in front of the user;
determining, based on received data derived from the detected head-performed action, an angular position of the head of the user about at least one axis of rotation and temporal information associated with the detected head-performed action;
sending the determined angular position and temporal information to a content controller linked to the electronic device;
checking if the angular position and the temporal information simultaneously satisfy respective threshold criteria; and
taking at least one action to affect content communicated by the electronic device if the angular position and the temporal information simultaneously satisfy respective threshold criteria.
16. The method of claim 15, wherein the temporal information includes a speed of the detected head-performed action.
17. The method of claim 16, wherein the checking includes:
determining if the absolute value of the angular position is greater than a threshold angular value, and
determining if the speed of the detected head-performed action is greater than a threshold speed value.
18. The method of claim 15, wherein the content controller is remotely located from the electronic device, and wherein the sending includes:
transmitting, over a network linking the content controller and the electronic device, at least one data packet that includes the determined angular position and temporal information.
19. A method for interfacing with an electronic device having a display element, comprising:
detecting a head-performed action of a user via at least one sensor functionally associated with a head-mounted display holder positioning the display element of the electronic device in front of the user;
determining, based on received data derived from the detected head-performed action, an angular position of the head of the user about at least one axis of rotation and temporal information associated with the detected head-performed action;
checking if the angular position and the temporal information simultaneously satisfy respective threshold criteria; and
actuating a content controller linked to the electronic device to take at least one action to affect content communicated by the electronic device if the angular position and the temporal information simultaneously satisfy respective threshold criteria.
20. The method of claim 19, wherein the content controller is remotely located from the electronic device, and wherein the actuating includes:
transmitting, over a network, a command to the content controller to take the at least one action.

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GBGB1702939.8A GB201702939D0 (en) 2017-02-23 2017-02-23 Systems, methods and computer readable storage media for conditioned accelerometer trigger in man-machine interface for virtual reality glasses
GB1702939.8 2017-02-23

Publications (1)

Publication Number Publication Date
WO2018154563A1 true WO2018154563A1 (en) 2018-08-30

Family

ID=58544235

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2018/050179 WO2018154563A1 (en) 2017-02-23 2018-02-18 Systems and methods for electronic device interface

Country Status (2)

Country Link
GB (1) GB201702939D0 (en)
WO (1) WO2018154563A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050156817A1 (en) * 2002-08-30 2005-07-21 Olympus Corporation Head-mounted display system and method for processing images
US20130135353A1 (en) * 2011-11-28 2013-05-30 Google Inc. Head-Angle-Trigger-Based Action
US20140267400A1 (en) * 2013-03-14 2014-09-18 Qualcomm Incorporated User Interface for a Head Mounted Display
US20160162020A1 (en) * 2014-12-03 2016-06-09 Taylor Lehman Gaze target application launcher

Also Published As

Publication number Publication date
GB201702939D0 (en) 2017-04-12

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18758220

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: 1205A 20.11.2019

122 Ep: pct application non-entry in european phase

Ref document number: 18758220

Country of ref document: EP

Kind code of ref document: A1