WO2013123077A1 - Engagement-dependent gesture recognition - Google Patents

Engagement-dependent gesture recognition

Info

Publication number
WO2013123077A1
Authority
WO
WIPO (PCT)
Prior art keywords
input
engagement
detecting
gesture
context
Prior art date
Application number
PCT/US2013/025971
Other languages
French (fr)
Inventor
Ian Charles CLARKSON
Original Assignee
Qualcomm Incorporated
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qualcomm Incorporated filed Critical Qualcomm Incorporated
Priority to EP13707952.1A priority Critical patent/EP2815292A1/en
Priority to CN201380008650.4A priority patent/CN104115099A/en
Priority to JP2014556822A priority patent/JP2015510197A/en
Publication of WO2013123077A1 publication Critical patent/WO2013123077A1/en
Priority to IN1753MUN2014 priority patent/IN2014MN01753A/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text

Definitions

  • aspects of the disclosure relate to computing technologies.
  • aspects of the disclosure relate to computing technologies in applications or devices capable of providing an active user interface, such as systems, methods, apparatuses, and computer-readable media that perform gesture recognition.
  • computing platforms such as smart phones, tablet computers, personal digital assistants (PDAs), televisions, as well as other devices increasingly include touch screens, accelerometers, cameras, proximity sensors, and other sensors that may allow these devices to sense motion or other user activity serving as a form of user input.
  • touch screen devices provide an interface whereby the user can cause specific commands to be executed by dragging a finger across the screen in an up, down, left or right direction. In these devices, a user action is recognized and a corresponding command is executed in response.
  • aspects of the present disclosure provide more convenient, intuitive, and functional gesture recognition interfaces.
  • While gesture control systems may implement more complex gestures (such as having a user move their hand(s) in a triangle shape, for instance), it may be more difficult for users to perform all of the recognized gestures and/or it may take more time for a system to capture any particular gesture.
  • Another challenge that might arise in current gesture control systems is accurately determining when a user intends to interact with such a system, and when the user does not so intend.
  • One way to make this determination is to wait for the user to input a command to activate or engage a gesture recognition mode, which may involve the user performing an engagement pose, using voice engagement inputs, or taking some other action.
  • an engagement pose may be a static gesture that the device recognizes as a command to enter a full gesture detection mode.
  • the device may seek to detect a range of gesture inputs with which the user can control the functionality of the device. In this way, once the user has engaged the system, the system may enter a gesture detection mode in which one or more gesture inputs may be performed by the user and recognized by the device to cause commands to be executed on the device.
  • a gesture control system on the device may be configured to recognize multiple unique engagement inputs. After detecting a particular engagement input and entering the full detection mode, the gesture control system may interpret subsequent gestures in accordance with a gesture interpretation context associated with the engagement input. For example, a user may engage the gesture control system by performing a hand pose which involves an outstretched thumb and pinky finger (e.g., mimicking the shape of a telephone), and which is associated with a first gesture input interpretation context. In response to detecting this particular hand pose, the device activates the first gesture interpretation context to which the hand pose corresponds. Under the first gesture interpretation context, a left swipe gesture may be linked to a "redial" command. Thus, if the device subsequently detects a left swipe gesture, it executes the redial command through a telephone application provided by the system.
  • a user may engage the full detection mode by performing a hand pose involving the thumb and index finger in a circle (e.g., mimicking the shape of a globe) which corresponds to a second gesture interpretation context.
  • a left swipe gesture may be associated with a scroll map command executable within a satellite application.
  • the gesture control system will enter the full detection mode and subsequently interpret a left swipe gesture as corresponding to a "scroll map" command when the satellite navigation application is in use.
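  • To make the two-context example above concrete, the following minimal sketch (Python; the pose, gesture, and command identifiers are illustrative assumptions, not names used in the disclosure) shows an engagement input selecting a gesture interpretation context, with the active context determining which command a later gesture triggers:

        # Hypothetical gesture interpretation contexts keyed by engagement input.
        CONTEXTS = {
            "thumb_pinky_pose": {           # mimics the shape of a telephone
                "name": "telephonic",
                "gestures": {"left_swipe": "redial"},
            },
            "thumb_index_circle_pose": {    # mimics the shape of a globe
                "name": "navigational",
                "gestures": {"left_swipe": "scroll_map"},
            },
        }

        def command_for(engagement_input, gesture):
            """Return the command a gesture maps to under the context selected
            by the most recently detected engagement input (None if unmapped)."""
            context = CONTEXTS.get(engagement_input)
            if context is None:
                return None
            return context["gestures"].get(gesture)

        # The same left swipe yields different commands under different contexts.
        assert command_for("thumb_pinky_pose", "left_swipe") == "redial"
        assert command_for("thumb_index_circle_pose", "left_swipe") == "scroll_map"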
  • a computing device may be configured to detect multiple distinct engagement inputs.
  • Each of the multiple engagement inputs may correspond to a different gesture input interpretation context.
  • the computing device may detect any one of the multiple engagement inputs at the time the input is provided by the user. Then, in response to user gesture input, the computing device may execute at least one command based on the detected gesture input and the gesture interpretation context corresponding to the detected engagement input.
  • the engagement input may take the form of an engagement pose, such as a hand pose.
  • the detected engagement may be an audio engagement, such as a user's voice.
  • a computing device may remain in a limited detection mode until an engagement pose is detected. While in the limited detection mode, the device may ignore one or more detected gesture inputs. The computing device may then detect an engagement pose and initiate processing of subsequent gesture inputs in response to detecting the engagement pose. Subsequently, the computing device may detect at least one gesture, and the computing device may further execute at least one command based on the detected gesture and the detected engagement pose.
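  • The limited/full detection behavior described above can be pictured with a small state-machine sketch (Python; the event format and the particular engagement poses are assumptions made for illustration): gesture events are ignored until an engagement pose arrives, after which gestures are reported together with the pose that engaged the device.

        ENGAGEMENT_POSES = {"open_palm", "closed_fist"}   # hypothetical poses

        class GestureRecognizer:
            def __init__(self):
                self.mode = "limited"          # start in the limited detection mode
                self.active_engagement = None

            def on_input(self, kind, value):
                """kind is 'pose' or 'gesture'; returns a (pose, gesture) pair or None."""
                if self.mode == "limited":
                    if kind == "pose" and value in ENGAGEMENT_POSES:
                        self.mode = "full"     # engagement detected: start processing gestures
                        self.active_engagement = value
                    return None                # gesture inputs are ignored in limited mode
                if kind == "gesture":
                    # Interpretation depends on which engagement pose was detected.
                    return (self.active_engagement, value)
                return None

        r = GestureRecognizer()
        assert r.on_input("gesture", "swipe_right") is None                 # ignored
        r.on_input("pose", "open_palm")                                     # engages
        assert r.on_input("gesture", "swipe_right") == ("open_palm", "swipe_right")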
  • a method may comprise detecting an engagement of a plurality of engagements, where each engagement of the plurality of engagements defines a gesture interpretation context of a plurality of gesture interpretation contexts.
  • the method may further comprise selecting a gesture interpretation context from amongst the plurality of gesture interpretation contexts.
  • the method may comprise detecting a gesture subsequent to detecting the engagement and executing at least one command based on the detected gesture and the selected gesture interpretation context.
  • the detection of the gesture is based on the selected gesture interpretation context. For example, one or more parameters associated with the selected gesture interpretation context are used for the detection.
  • a method may comprise ignoring non-engagement sensor input until an engagement pose of a plurality of engagement poses is detected, detecting at least one gesture based on the sensor input subsequent to the detection of the engagement pose, and executing at least one command based on the detected gesture and the detected engagement pose.
  • each engagement pose of the plurality of engagement poses defines a different gesture interpretation context.
  • the method further comprises initiating processing of the sensor input in response to detecting the engagement pose, where the at least one gesture is detected subsequent to the initiating.
  • a method may comprise detecting a first engagement, activating at least some functionality of a gesture detection engine in response to the detecting, detecting a gesture subsequent to the activating using the gesture detection engine, and controlling an application based on the detected first engagement and the detected gesture.
  • the activating comprises switching from a low power mode to a mode that consumes more power than the low power mode.
  • the activating comprises beginning to receive information from one or more sensors.
  • the first engagement defines a gesture interpretation context for the application.
  • the method further comprises ignoring one or more gestures prior to detecting the first engagement.
  • the activating comprises inputting data points obtained from the first engagement into operation of the gesture detection engine.
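  • One way to picture this activation step is the following sketch (Python; the engine interface, its method names, and the idea of seeding it with hand key points are assumptions for illustration only): detecting the first engagement takes the engine out of a low-power state, and data points obtained from the engagement itself are fed into the engine for later tracking.

        class GestureDetectionEngine:
            def __init__(self):
                self.powered = False           # low-power mode until an engagement is seen
                self.seed_points = []

            def activate(self, engagement_data_points):
                self.powered = True            # switch to the higher-power detection mode
                # Seed subsequent tracking with features extracted from the engagement,
                # e.g. key points of the hand that performed the engagement pose.
                self.seed_points = list(engagement_data_points)

            def detect(self, sensor_frame):
                if not self.powered:
                    return None                # engine inactive: nothing is detected
                # A real engine would track self.seed_points across frames; this
                # placeholder simply reports the gesture label carried by the frame.
                return sensor_frame.get("gesture")

        engine = GestureDetectionEngine()
        engine.activate(engagement_data_points=[(0.4, 0.7), (0.5, 0.6)])
        print(engine.detect({"gesture": "left_swipe"}))                     # left_swipe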
  • a method may comprise detecting a first engagement, receiving sensor input related to a first gesture subsequent to the first engagement, and determining whether the first gesture is a command.
  • the first gesture comprises a command when the first engagement is maintained for at least a portion of the first gesture.
  • the method may further comprise determining that the first gesture does not comprise a command when the first engagement is not held for substantially the entirety of the first gesture.
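  • A sketch of this held-engagement test (Python; the frame format and the 0.9 threshold standing in for "substantially the entirety" are assumptions): the gesture is treated as a command only if the engagement was detected in a sufficient fraction of the frames spanning the gesture.

        def gesture_is_command(frames, required_fraction=0.9):
            """frames: sequence of dicts like {"engagement_held": bool} captured
            while the gesture was performed."""
            if not frames:
                return False
            held = sum(1 for f in frames if f["engagement_held"])
            return held / len(frames) >= required_fraction

        print(gesture_is_command([{"engagement_held": True}] * 10))         # True
        print(gesture_is_command([{"engagement_held": True}] * 5 +
                                 [{"engagement_held": False}] * 5))         # False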
  • FIG. 1 illustrates an example device that may implement one or more aspects of the disclosure.
  • FIG. 2 illustrates an example timeline showing how a computing device may switch from a limited detection mode into a gesture detection mode in response to detecting an engagement pose in accordance with one or more illustrative aspects of the disclosure.
  • FIG. 3 illustrates an example method of performing engagement-dependent gesture recognition in accordance with one or more illustrative aspects of the disclosure.
  • FIG. 4 illustrates an example table of engagement poses and gestures that may be recognized by a computing device in accordance with one or more illustrative aspects of the disclosure.
  • FIG. 5 illustrates an example computing system in which one or more aspects of the disclosure may be implemented.
  • FIG. 6 illustrates a second example system for implementing one or more aspects of the present disclosure.
  • FIG. 7 is a flow diagram depicting an algorithm for implementing certain methods of the present disclosure, and may be used in conjunction with the example system of FIG. 6.
  • FIG. 8 is a flow diagram depicting example operations of a device configured to operate in accordance with techniques disclosed herein.
  • FIG. 1 illustrates an example device that may implement one or more aspects of the disclosure.
  • computing device 100 may be a personal computer, set- top box, electronic gaming console device, laptop computer, smart phone, tablet computer, personal digital assistant, or other mobile device that is equipped with one or more sensors that allow computing device 100 to capture motion and/or other sensed conditions as a form of user input.
  • computing device 100 may be equipped with, communicatively coupled to, and/or otherwise include one or more cameras, microphones, proximity sensors, gyroscopes, accelerometers, pressure sensors, grip sensors, touch screens, and/or other sensors.
  • computing device 100 also may include one or more processors, memory units, and/or other hardware components, as described in greater detail below.
  • the device 100 is incorporated into an automobile, for example in a central console of the automobile.
  • computing device 100 may use any and/or all of these sensors alone or in combination to recognize gestures performed by one or more users of the device, for example gestures that may not include a user touching the device 100.
  • computing device 100 may use one or more cameras, such as camera 110, to capture hand and/or arm movements performed by a user, such as a hand wave or swipe motion, among other possible movements.
  • more complex and/or large-scale movements such as whole body movements performed by a user (e.g., walking, dancing, etc.), may likewise be captured by the one or more cameras (and/or other sensors) and subsequently be recognized as gestures by computing device 100, for instance.
  • computing device 100 may use one or more touch screens, such as touch screen 120, to capture touch-based user input provided by a user, such as pinches, swipes, and twirls, among other possible movements. While these sample movements, which may alone be considered gestures and/or may be combined with other movements or actions to form more complex gestures, are described here as examples, any other sort of motion, movement, action, or other sensor-captured user input may likewise be received as gesture input and/or be recognized as a gesture by a computing device implementing one or more aspects of the disclosure, such as computing device 100.
  • a camera such as a depth camera may be used to control a computer or media hub based on the recognition of gestures or changes in gestures of a user.
  • camera-based gesture input may allow photos, videos, or other images to be clearly displayed or otherwise output based on the user's natural body movements or poses.
  • gestures may be recognized that allow a user to view, pan (i.e., move), size, rotate, and perform other manipulations on image objects.
  • a depth camera such as a structured light camera or a time-of-flight camera, may include infrared emitters and a sensor.
  • the depth camera may produce a pulse of infrared light and subsequently measure the time it takes for the light to travel to an object and back to the sensor. A distance may be calculated based on the travel time.
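  • The travel-time calculation reduces to a simple round trip: light covers the distance twice, so the one-way distance is the speed of light multiplied by half the measured time. A minimal sketch (Python; the example numbers are illustrative):

        SPEED_OF_LIGHT = 299_792_458.0  # meters per second

        def tof_distance(round_trip_seconds):
            # Light travels to the object and back, so divide the round trip by two.
            return SPEED_OF_LIGHT * round_trip_seconds / 2.0

        # A round trip of roughly 6.67 nanoseconds corresponds to an object about 1 m away.
        print(tof_distance(6.67e-9))    # ~1.0 (meters)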
  • other input devices and/or sensors may be used to detect or receive input and/or assist in detecting a gesture.
  • a "gesture” is intended to refer to a form of non-verbal communication made with part of a human body, and is contrasted with verbal communication such as speech.
  • a gesture may be defined by a movement, change or transformation between a first position, pose, or expression and a second pose, position, or expression.
  • gestures used in everyday discourse include for instance, an "air quote" gesture, a bowing gesture, a curtsey, a cheek-kiss, a finger or hand motion, a genuflection, a head bobble or movement, a high-five, a nod, a sad face, a raised fist, a salute, a thumbs-up motion, a pinching gesture, a hand or body twisting gesture, or a finger pointing gesture.
  • a gesture may be detected using a camera, such as by analyzing an image of a user, using a tilt sensor, such as by detecting an angle that a user is holding or tilting a device, or by any other approach.
  • a gesture may comprise a non-touch, touchless, or touch-free gesture such as a hand movement performed in mid-air, for example.
  • Such non-touch, touchless, or touch-free gestures may be distinguished from various "gestures" that might be performed by drawing a pattern on a touchscreen, for example, in some embodiments.
  • a gesture may be performed in mid-air while holding a device, and one or more sensors in the device such as an accelerometer may be used to detect the gesture.
  • a user may make a gesture (or “gesticulate") by changing the position (i.e. a waving motion) of a body part, or may gesticulate while holding a body part in a constant position (i.e. by making a clenched fist gesture).
  • hand and arm gestures may be used to control functionality via camera input, while in other arrangements, other types of gestures may additionally or alternatively be used.
  • hands and/or other body parts e.g., arms, head, torso, legs, feet, etc. may be moved in making one or more gestures.
  • gestures may be performed by moving one or more hands, while other gestures may be performed by moving one or more hands in combination with one or more arms, one or more legs, and so on.
  • a gesture may comprise a certain pose, for example a hand or body pose, being maintained for a threshold amount of time.
  • FIG. 2 illustrates an example timeline showing how a computing device may switch from a limited detection mode into a full detection mode in response to detecting an engagement input in accordance with one or more illustrative aspects of the disclosure.
  • a computing device such as device 100
  • the device may be in a limited detection mode.
  • the device processes sensor data to detect an engagement input.
  • the device may not execute commands associated with user inputs available for controlling the device in the full detection mode. In other words, only engagement inputs are valid in the limited detection mode in some embodiments.
  • the device may also be configured so that while it is in the limited detection mode, power and processing resources are not devoted to detecting inputs associated with the commands associated with the full detection mode.
  • the computing device might be configured to analyze sensor input (and/or any other input that might be received during this time) relevant to determining whether a user has provided an engagement input.
  • one or more sensors may be configured to be turned off or powered down, or to not provide sensor information to other components while the device 100 is in the limited detection mode.
  • an "engagement input” refers to an input which triggers activation of the full detection mode.
  • the full detection mode refers to a mode of device operation in which certain inputs may be used to control the functionality of the device, as determined by the active gesture interpretation context.
  • an engagement input may be an engagement pose involving a user positioning his or her body or hand(s) in a particular way (e.g., an open palm, a closed fist, a "peace fingers” sign, a finger pointing at a device, etc.).
  • an engagement may involve one or more other body parts, in addition to and/or instead of the user's hand(s).
  • an open palm or closed fist may constitute an engagement input when detected at the end of an outstretched arm in some embodiments.
  • an engagement input may include an audio input such as a sound which triggers the device to enter the full gesture detection mode.
  • an engagement input may be a user speaking a particular word or phrase which the device is configured to recognize as an engagement input.
  • an engagement input may be provided by a user occluding a sensor.
  • a device could be configured to recognize when the user blocks the field of view of a camera or the transmitting and/or receiving space of a sonic device.
  • a user traveling in an automobile may provide an engagement input by occluding a camera or other sensor present in the car or on a handheld device.
  • once the device determines that an engagement input has been detected, the device enters a full detection mode.
  • the particular engagement input that was detected by the device may correspond to and trigger a particular gesture interpretation context.
  • a gesture interpretation context may comprise a set of gesture inputs recognizable by the device when the context is engaged, as well as the command(s) activated by each such gesture.
  • the active gesture interpretation context may dictate the interpretation given by a device to detected gesture inputs.
  • the active gesture interpretation context may itself be dictated by the engagement input which triggered the device to enter the full detection mode.
  • a "default" engagement may be implemented that will allow the user to enter a most recent gesture interpretation context, for example rather than itself being associated with a unique gesture interpretation context.
  • the computing device may detect one or more gestures.
  • the device may interpret the gesture based on the gesture interpretation context corresponding to the most recent engagement input.
  • the recognizable gestures in the active gesture interpretation context may each be associated with a command.
  • the device determines the command with which the gesture is associated, and executes the determined command.
  • the most recent engagement input may not only determine which commands are associated with which gestures, but the engagement input may be used to determine one or more parameters used to detect one or more of those gestures.
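  • As a sketch of how the engagement input might also shape detection itself (Python; the parameter names, gesture lists, and thresholds are assumptions, not values taken from the disclosure), the selected context can carry detector parameters, such as which gestures to search for and how strict the match threshold is, in addition to its gesture-to-command map.

        CONTEXT_PARAMS = {
            "telephonic":   {"gestures": ["left_swipe", "right_swipe"],
                             "match_threshold": 0.80},
            "navigational": {"gestures": ["left_swipe", "pinch", "rotate"],
                             "match_threshold": 0.70},
        }

        def detect(context_name, candidate_gesture, match_score):
            """Report a gesture only if it is recognizable in the active context and
            its match score clears that context's threshold."""
            params = CONTEXT_PARAMS[context_name]
            if candidate_gesture not in params["gestures"]:
                return None
            return candidate_gesture if match_score >= params["match_threshold"] else None

        print(detect("telephonic", "pinch", 0.95))      # None: not searched for in this context
        print(detect("navigational", "pinch", 0.75))    # pinch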
  • a device could recognize a pose involving a user's thumb and outstretched pinky finger, and could associate this pose with a telephonic gesture interpretation context.
  • the same device could also recognize a hand pose involving a thumb and forefinger pressed together in a circle, and could associate this pose with a separate navigational gesture interpretation context applicable to mapping applications.
  • in response to detecting the former pose, this example computing device may interpret gestures detected during the gesture detection mode in accordance with the telephonic gesture interpretation context.
  • if a left swipe gesture is then detected, the device may interpret the gesture as a "redial" command to be executed using a telephone application (e.g., a telephonic software application) provided by the device, for example.
  • in response to detecting the latter pose, the device may instead interpret gestures detected during the gesture detection mode in accordance with the navigational gesture interpretation context.
  • if a left swipe gesture is then detected, the device may interpret the gesture as a "scroll map" command to be executed using a satellite navigation application (e.g., a satellite navigation software application) also provided by the device, for example.
  • the computing device may be implemented as and/or in an automobile control system, and these various engagements and gestures may allow the user to control different functionalities of the automobile control system.
  • FIG. 3 illustrates an example method of performing engagement-dependent gesture recognition in accordance with one or more illustrative aspects of the disclosure.
  • any and/or all of the methods and/or method steps described herein may be implemented by and/or in a computing device, such as computing device 100 and/or the computer system described in greater detail below, for instance.
  • one or more of the method steps described below with respect to FIG. 3 are implemented by a processor of the device 100.
  • any and/or all of the methods and/or method steps described herein may be implemented in computer-readable instructions, such as computer-readable instructions stored on a computer-readable medium.
  • a device may incorporate other steps, calculations, algorithms, methods, or actions which may be needed for the execution of any of the steps, decisions, determinations and actions depicted in FIG. 3.
  • a computing device such as a computing device capable of recognizing one or more gestures as user input (e.g., computing device 500 or 600), may be initialized, and/or one or more settings may be loaded.
  • the device in association with software stored and/or executed thereon, for instance, may load one or more settings, such as user preferences related to gestures.
  • these user preferences may include gesture mapping information in which particular gestures are mapped to particular commands in different gesture interpretation contexts. Additionally or alternatively, such gesture mapping information may specify the engagement inputs and the different gesture interpretation contexts brought about by each such engagement input.
  • Information related to gesture mapping settings or the like may be stored in memory 535 or memory 606, for example.
  • the settings may specify that engagement inputs operate at a "global" level, such that these engagement inputs correspond to the same gesture interpretation context regardless of the application currently "in focus" or being used.
  • the settings may specify that other engagement inputs operate at an application level, such that these engagement inputs correspond to different gestures at different times, with the correspondence depending on which application is being used.
  • the arrangement of global and application level engagement inputs may depend on the system implementing these concepts and a system may be configured with global and application level engagement inputs as needed to suit the specific system design objectives.
  • the arrangement of global and application level engagement inputs may also be partially or entirely determined based on settings provided by the user.
  • Table A illustrates an example of the gesture mapping information that may be used in connection with a system implementing one or more aspects of the disclosure in an automotive setting:
  • Table B illustrates an example of the gesture mapping information that may be used in connection with a system implementing one or more aspects of the disclosure in a home entertainment system setting:
  • Tables A and B are provided for example purposes only, and alternative or additional mapping arrangements, commands, gestures, etc. may be used in a device employing gesture recognition in accordance with this disclosure.
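  • Since Tables A and B themselves are not reproduced in this extract, the following hypothetical sketch (Python; every engagement input, application, and context name is invented for illustration) shows one way the global versus application-level distinction described above might be encoded: a global engagement input selects the same context regardless of the application in focus, while an application-level engagement input is resolved against the currently active application.

        GLOBAL_ENGAGEMENTS = {
            "open_palm": "media_context",                # same context everywhere
        }
        APP_LEVEL_ENGAGEMENTS = {
            ("telephone_app", "closed_fist"): "call_control_context",
            ("navigation_app", "closed_fist"): "map_control_context",
        }

        def resolve_context(engagement_input, app_in_focus):
            if engagement_input in GLOBAL_ENGAGEMENTS:
                return GLOBAL_ENGAGEMENTS[engagement_input]
            return APP_LEVEL_ENGAGEMENTS.get((app_in_focus, engagement_input))

        print(resolve_context("open_palm", "navigation_app"))     # media_context
        print(resolve_context("closed_fist", "telephone_app"))    # call_control_context
        print(resolve_context("closed_fist", "navigation_app"))   # map_control_context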
  • Other devices and applications may also be configured to use gesture detection and gesture mapping information in which particular gestures are mapped to particular commands in different gesture interpretation contexts.
  • a television application interface may incorporate gesture detection to enable users to control the television.
  • a television application may incorporate gesture interpretation contexts in which a certain engagement input facilitates changing the television channel with subsequent gestures, while a different engagement input facilitates changing the television volume with subsequent gestures.
  • a video game application may be controlled by a user through gesture detection.
  • a gesture input interpretation context for the video game may include certain gesture inputs mapped to "pause" or "end” control commands, for example similar to how the video game may be operated at a main menu (i.e. main menu is the focus).
  • a different interpretation context for the video game may include the same or different gesture inputs mapped to live game control commands, such as shooting, running, or jumping commands.
  • a gesture interpretation context may facilitate changing an active application.
  • a gesture interpretation context available during use of a GPS application may contain mapping information tying a certain gesture input to a command for switching to or additionally activating another application, such as a telephone or camera application.
  • the computing device may process input in the limited detection mode.
  • computing device 100 may be in the limited detection mode in which sensor input may be received and/or captured by the device, but processed only for the purpose of detecting engagement inputs.
  • Prior to processing, sensor input may be received by input device 515 or sensor 602.
  • gestures that correspond to the commands recognized in the full detection mode may be ignored or go undetected.
  • the device may deactivate or reduce power to sensors, sensor components, processor components, or software modules which are not involved in detecting engagement inputs.
  • the device may reduce power to a touchscreen or audio receiver/detector components while using the camera to detect the engagement pose inputs.
  • Reducing power in this way may be particularly beneficial when the device is operating on a limited power source, such as a battery.
  • At step 315, the device may determine whether an engagement input has been provided.
  • This step may involve computing device 100 continuously or periodically analyzing sensor information received during the limited detection mode to determine if an engagement input (such as an engagement pose or audio engagement described above) has been provided. More specifically, this analysis may be performed by a processor such as the processor 510, in conjunction with memory device(s) 525. Alternatively, a processor such as processor 604 may be configurable to perform the analysis in conjunction with module 608. Until the computing device detects an engagement input at step 315, it may remain in the limited detection mode, as depicted by the redirection arrow pointing to step 310, and continue to process input data for the purpose of detecting an engagement input.
  • If the computing device detects an engagement input at step 315, the device selects and may activate a gesture input interpretation context based on the engagement input, and may commence a time-out counter, as depicted at 318. More specifically, selection and activation of a gesture interpretation context may be performed by a processor such as the processor 510, in conjunction with memory device(s) 525. Alternatively, a processor such as processor 604 may be configurable to perform the selection and activation, in conjunction with module 610.
  • the computing device may be configured to detect several possible engagement inputs at 315.
  • the computing device may be configured to detect one or more engagement inputs associated with gesture input interpretation contexts in which both static poses and dynamic gestures are recognizable and are mapped to control commands.
  • Information depicting each engagement input (e.g., each hand pose, gesture, swipe, movement, etc.) may be stored in memory.
  • This information may be directly determined from model engagement inputs provided by the user or another person. Additionally or alternatively, the information could be based on mathematical models which quantitatively depict the sensor inputs expected to be generated by each of the engagement inputs. Furthermore, in certain embodiments, the information could be dynamically altered and updated based on an artificial intelligence learning process occurring inside the device or at an external entity in communication with the device.
  • information depicting the available gesture interpretation contexts may be stored in memory in a manner that associates each interpretation context with at least one engagement input.
  • the device may be configured to generate such associations through the use of one or more lookup tables or other storage mechanisms which facilitate associations within a data storage structure.
  • the device enters the full detection mode and processes sensor information to detect gesture inputs, as indicated at step 320.
  • computing device 100 may capture, store, analyze, and/or otherwise process sensor information to detect the gesture inputs relevant within the active gesture interpretation context.
  • computing device 100 may further communicate to the user an indication of the gesture inputs available within the active gesture interpretation context and the commands which correspond to each such gesture input.
  • computing device 100 may play a sound and/or otherwise provide audio feedback to indicate activation of the gesture input interpretation context associated with the detected engagement.
  • the device may provide a "telephone dialing" sound effect upon detecting an engagement input associated with a telephonic context or a "twinkling stars" sound effect upon detecting an engagement gesture associated with a satellite navigational gesture input interpretation context.
  • a device may be configured to provide a visual output indicating detection of an engagement gesture associated with a gesture input interpretation context.
  • a visual output may be displayed on a screen or through another medium suitable for displaying images or visual feedback.
  • a device may show graphical depictions of certain of the hand poses or gestures recognizable in the interpretation context and a description of the commands to which the gestures correspond.
  • a gesture input detection engine may be initialized as part of step 320. Such initialization may be performed, at least in part, by a processor such as processor 604.
  • the initialization of the gesture input detection engine may involve the processor 604 activating a module for detecting gesture inputs such as the one depicted at 612.
  • the initialization may further involve processor 604 accessing information depicting recognizable engagement inputs. Such information may be stored in engagement input library 618, or in any other storage location.
  • the device may obtain information about the user or the environment surrounding the device. Such information may be saved and subsequently utilized in the full detection mode by the gesture detection engine or during the processing in step 320 and/or in step 325, for example to improve gesture input detection.
  • the device 100 extracts features or key points of the hand that may be used to subsequently track hand motion in step 320 in order to detect a gesture input in full detection mode.
  • the computing device 100 determines whether an actionable gesture input has been provided by the user.
  • computing device 600 may continuously or periodically analyze sensor data to determine whether a gesture input associated with the active interpretation context has been provided. In the case of computing device 600, such analysis may be performed by processor 604 in conjunction with module 612 and the library of gesture inputs 620.
  • the full detection mode may only last for a predetermined period of time (e.g., 10 seconds or 10 seconds since the last valid input was detected), such that if an actionable gesture input is not detected within such time, the gesture detection mode "times out” and the device returns to the limited detection mode described above.
  • This "time out" feature is depicted in FIG. 3 at 318, and may be implemented using a time lapse counter which, upon reaching a time limit or expiring, triggers cancellation of the full detection mode and re-initialization of the limited detection mode.
  • the counter may be configured to start as soon as a detected engagement input is no longer detected, as shown by step 318. In this way, a user can hold an engagement pose or other such input and then delay in deciding upon an input gesture, without the gesture detection mode timing out before the user provides the gesture input.
  • the user may perform a certain gesture or another predefined engagement that "cancels" a previously provided engagement input, thereby allowing the user to reinitialize the time out counter or change gesture interpretation contexts without having to wait for the time out counter to expire.
  • At step 330, the computing device determines whether the time-out counter has expired. If the counter has expired, the device notifies the user that the full detection mode has timed out (e.g., by displaying a user interface, playing a sound, etc.), and subsequently reenters the limited detection mode, as depicted by the return arrow to step 310. If the time-out counter has not yet expired, the device continues to process sensor information in the full detection mode, as depicted by the return arrow to step 320.
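  • The time-out handling around steps 318-330 might be sketched as follows (Python; the 10-second limit, the frame format, and the loop structure are assumptions): the counter starts only once the engagement input is no longer detected, and expiry returns the device to the limited detection mode.

        import time

        TIME_LIMIT_S = 10.0   # stand-in for the predetermined time-out period

        def full_detection_loop(read_frame):
            """read_frame() returns dicts like {"engagement_held": bool, "gesture": str or None}.
            Returns the first actionable gesture, or None if the mode times out."""
            deadline = None                      # counter not running while the pose is held
            while True:
                frame = read_frame()
                if frame["engagement_held"]:
                    deadline = None              # holding the engagement defers the counter
                elif deadline is None:
                    deadline = time.monotonic() + TIME_LIMIT_S   # start the time-out counter
                if frame["gesture"] is not None:
                    return frame["gesture"]      # actionable gesture detected
                if deadline is not None and time.monotonic() > deadline:
                    return None                  # timed out: reenter the limited detection mode

        frames = iter([{"engagement_held": True, "gesture": None},
                       {"engagement_held": False, "gesture": None},
                       {"engagement_held": False, "gesture": "left_swipe"}])
        print(full_detection_loop(lambda: next(frames)))          # left_swipe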
  • If the computing device detects an actionable gesture input (i.e., a gesture input that is part of the active gesture interpretation context) at step 325, then at step 335 the computing device interprets the gesture based on the active gesture input interpretation context.
  • Interpreting the gesture may include determining which command(s) should be executed in response to the gesture in accordance with the active gesture input interpretation context.
  • Different contexts corresponding to different engagements may allow for control of different applications; for example, a navigational context may allow for control of a navigation application, while a telephonic context may allow for control of a telephone application.
  • the detection of engagement inputs in the limited detection mode and/or selection of an interpretation context may be independent of the location at which the user provides the engagement input.
  • the device may be configured to activate the full detection mode and a gesture interpretation context regardless of the position relative to the device sensor at which the engagement input is detected. Additionally or alternatively, the device may be configured so that detection of input gestures in the full detection mode is independent of the position at which the gesture input is provided. Further, when elements are being displayed, for example on the screen 120, detection of an engagement input and/or selection of an input interpretation context may be independent of what is being displayed.
  • Certain embodiments of the present invention may involve gesture input interpretation contexts having only one layer of one-to-one mapping between inputs and corresponding commands. In such a case, all commands may be available to the user through the execution of only a single gesture input. Additionally or alternatively, the gesture input interpretation contexts used by a device may incorporate nested commands which cannot be executed unless a series of two consecutive gesture inputs are provided by the user. For example, in an example gesture input interpretation context incorporating single layer, one-to-one command mapping, an extended thumb and forefinger gesture input may directly correspond to a command for accessing a telephone application. In an example system in which nested commands are used, a gesture input involving a circular hand pose may directly correspond to a command to initialize a navigation application.
  • an open palm or closed fist may thereafter correspond to a functional command within the navigation application.
  • the functional command corresponding to the open palm is a nested command, and an open palm gesture input may not cause the functional command to be executed unless the circular hand pose has been detected first.
  • Additional embodiments may involve the device being configured to operate based on nested engagement inputs.
  • a device using nested engagement inputs may be configured to recognize a first and second engagement input, or any series of engagement inputs.
  • Such a device may be configured so as to not enter the full detection mode until after a complete series of engagement inputs has been detected.
  • a device capable of operations based on nesting of engagement inputs may enable a user to provide a first engagement input indicating an application which the user desires to activate.
  • a subsequent engagement input may then specify a desired gesture interpretation context associated with the indicated application.
  • the subsequent engagement input may also trigger the full detection mode, and activation of the indicated application and context.
  • the device may be configured to respond to the second engagement input in a manner dictated by the first detected engagement input.
  • different engagement input sequences involving identical second engagement inputs may cause the device to activate different gesture input interpretation contexts.
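  • A sketch of nested engagement inputs (Python; the specific sequences and context names are hypothetical): the full detection mode is not entered until a complete sequence has been observed, and the same second engagement input can select different contexts depending on the first.

        NESTED_SEQUENCES = {
            ("thumb_pinky_pose", "open_palm"): "telephone_dialpad_context",
            ("thumb_index_circle_pose", "open_palm"): "map_pan_context",
        }

        def resolve_nested(engagement_sequence):
            """Return the context for a complete sequence, or None while it is still partial."""
            return NESTED_SEQUENCES.get(tuple(engagement_sequence))

        print(resolve_nested(["thumb_pinky_pose"]))                       # None: incomplete
        print(resolve_nested(["thumb_pinky_pose", "open_palm"]))          # telephone_dialpad_context
        print(resolve_nested(["thumb_index_circle_pose", "open_palm"]))   # map_pan_context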
  • At step 340, the device may execute the one or more commands which, in the active gesture input interpretation context, correspond to the previously detected gesture input. As depicted by the return arrow shown after step 340, the device may then return to processing sensor information at step 320, while the active gesture input interpretation context is maintained. In some embodiments, the time-out counter is reset at 340 or 320. Alternatively, the device may return to the limited detection mode or some other mode of operation.
  • FIG. 4 illustrates an example table of engagement poses and gestures that may be recognized by a computing device in accordance with one or more illustrative aspects of the disclosure.
  • a "swipe right" gesture 405 may cause a computing device to execute a "next track" command within a media player application.
  • Depending on which engagement pose was detected, such as an "open palm" engagement pose 410 or a "closed fist" engagement pose 420, the same gesture may be mapped to different functions based on the context set by that engagement pose.
  • the computing device may execute a "next track" command within the media player based on a track-level control context set by the "open palm" engagement pose 410.
  • the computing device may execute a "next album" command within the media player based on the album-level control context set by the "closed fist" engagement pose 420.
  • a computer system as illustrated in FIG. 5 may be incorporated as part of a computing device, which may implement, perform, and/or execute any and/or all of the features, methods, and/or method steps described herein.
  • a handheld device may be entirely or partially composed of a computer system 500.
  • the hand-held device may be any computing device with a sensor capable of sensing user inputs, such as a camera and/or a touchscreen display unit. Examples of a hand-held device include, but are not limited to, video game consoles, tablets, smart phones, and mobile devices.
  • the system 500 of FIG. 5 is one of many structures which may be used to implement some or all of the features and methods described previously with regard to the device 100.
  • the system of FIG. 5 may be used within a host computer system, a remote kiosk/terminal, a point-of-sale device, a mobile device, a set-top box, or any other type of computer system configured to detect user inputs.
  • FIG. 5 is meant only to provide a generalized illustration of various components, any and/or all of which may be utilized as appropriate.
  • FIG. 5, therefore, broadly illustrates how individual system elements may be implemented and is not intended to depict that all such system elements must be disposed in an integrated manner.
  • system components such as the ones shown in FIG. 5 may be incorporated within a common electrical structure, or may be located in separate structures.
  • components are not to be so limited, and may be embodied in or exist as software, processor modules, one or more micro-control devices, logical circuitry, algorithms, remote or local data storage, or any other suitable devices, structures, or implementations known in the arts relevant to user input detection systems.
  • the computer system 500 is shown comprising hardware elements that can be electrically coupled via a bus 505 (or may otherwise be in communication, as appropriate).
  • the hardware elements may include one or more processors 510, including without limitation one or more general-purpose processors and/or one or more special-purpose processors (such as digital signal processing chips, graphics acceleration processors, and/or the like); one or more input devices 515, which can include without limitation a camera, a mouse, a keyboard and/or the like; and one or more output devices 520, which can include without limitation a display unit, a printer and/or the like.
  • the bus 505 may also provide communication between cores of the processor 510 in some embodiments.
  • the computer system 500 may further include (and/or be in communication with) one or more non-transitory storage devices 525, which can comprise, without limitation, local and/or network accessible storage, and/or can include, without limitation, a disk drive, a drive array, an optical storage device, a solid-state storage device such as a random access memory (“RAM”) and/or a read-only memory (“ROM”), which can be programmable, flash-updateable and/or the like.
  • Such storage devices may be configured to implement any appropriate data storage, including without limitation, various file systems, database structures, and/or the like.
  • the computer system 500 might also include a communications subsystem 530, which can include without limitation a modem, a network card (wireless or wired), an infrared communication device, a wireless communication device and/or chipset (such as a Bluetooth® device, an 802.11 device, a WiFi device, a WiMax device, cellular communication facilities, etc.), and/or the like.
  • the communications subsystem 530 may permit data to be exchanged with a network (such as the network described below, to name one example), other computer systems, and/or any other devices described herein.
  • the computer system 500 will further comprise a non-transitory working memory 535, which can include a RAM or ROM device, as described above.
  • the computer system 500 also can comprise software elements, shown as being currently located within the working memory 535, including an operating system 540, device drivers, executable libraries, and/or other code, such as one or more application programs 545, which may comprise computer programs provided by various embodiments, and/or may be designed to implement methods, and/or configure systems, provided by other embodiments, as described herein.
  • a set of these instructions and/or code might be stored on a computer-readable storage medium, such as the storage device(s) 525 described above. In some cases, the storage medium might be incorporated within a computer system, such as computer system 500.
  • the storage medium might be separate from a computer system (e.g., a removable medium, such as a compact disc), and/or provided in an installation package, such that the storage medium can be used to program, configure and/or adapt a general purpose computer with the instructions/code stored thereon.
  • These instructions might take the form of executable code, which is executable by the computer system 500 and/or might take the form of source and/or installable code, which, upon compilation and/or installation on the computer system 500 (e.g., using any of a variety of generally available compilers, installation programs, compression/decompression utilities, etc.) then takes the form of executable code.
  • Some embodiments may employ a computer system (such as the computer system 500) to perform methods in accordance with the disclosure. For example, some or all of the procedures of the described methods may be performed by the computer system 500 in response to processor 510 executing one or more sequences of one or more instructions (which might be incorporated into the operating system 540 and/or other code, such as an application program 545) contained in the working memory 535. Such instructions may be read into the working memory 535 from another computer-readable medium, such as one or more of the storage device(s) 525. Merely by way of example, execution of the sequences of instructions contained in the working memory 535 might cause the processor(s) 510 to perform one or more procedures of the methods described herein, for example a method described with respect to FIG. 3.
  • The terms "machine-readable medium" and "computer-readable medium," as used herein, refer to any medium that participates in providing data that causes a machine to operate in a specific fashion.
  • various computer-readable media might be involved in providing instructions/code to processor(s) 510 for execution and/or might be used to store and/or carry such instructions/code (e.g., as signals).
  • a computer-readable medium is a physical and/or tangible storage medium.
  • Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media.
  • Non-volatile media include, for example, optical and/or magnetic disks, such as the storage device(s) 525.
  • Volatile media include, without limitation, dynamic memory, such as the working memory 535.
  • Transmission media include, without limitation, coaxial cables, copper wire and fiber optics, including the wires that comprise the bus 505, as well as the various components of the communications subsystem 530 (and/or the media by which the communications subsystem 530 provides communication with other devices).
  • transmission media can also take the form of waves (including without limitation radio, acoustic and/or light waves, such as those generated during radio-wave and infrared data communications).
  • Common forms of physical and/or tangible computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punchcards, papertape, any other physical medium with patterns of holes, a RAM, a PROM, EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read instructions and/or code.
  • Various forms of computer-readable media may be involved in carrying one or more sequences of one or more instructions to the processor(s) 510 for execution.
  • the instructions may initially be carried on a magnetic disk and/or optical disc of a remote computer.
  • a remote computer might load the instructions into its dynamic memory and send the instructions as signals over a transmission medium to be received and/or executed by the computer system 500.
  • These signals which might be in the form of electromagnetic signals, acoustic signals, optical signals and/or the like, are all examples of carrier waves on which instructions can be encoded, in accordance with various embodiments of the invention.
  • the communications subsystem 530 (and/or components thereof) generally will receive the signals, and the bus 505 then might carry the signals (and/or the data, instructions, etc. carried by the signals) to the working memory 535, from which the processor(s) 510 retrieves and executes the instructions.
  • the instructions received by the working memory 535 may optionally be stored on a non-transitory storage device 525 either before or after execution by the processor(s) 510.
  • FIG. 6 depicts a second device which may alternatively be used to implement any of the methods, steps, processes or algorithms previously disclosed herein.
  • The device of FIG. 6 includes one or more sensors 602 which can be used to sense engagement inputs and/or gesture inputs, and which provide sensor information regarding such inputs to a processor 604.
  • the sensor 602 may comprise ultrasound technology (e.g., using microphones and/or ultrasound emitters), image or video capturing technologies such as a camera, IR or UV technology, magnetic field technology, emitted electromagnetic radiation technology, an accelerometer and/or gyroscope, and/or other technologies that may be used to sense an engagement input and/or a gesture input.
  • the sensor 602 comprises a camera configured to capture two-dimensional images. Such a camera may be included in a cost-efficient system in some embodiments, and the use of input interpretation contexts may expand the number of commands that may be efficiently detected by the camera in some embodiments.
  • Processor 604 may store some or all sensor information in memory 606. Furthermore, processor 604 is configured to communicate with a module 608 for detecting engagement inputs, module 610 for selecting and activating input interpretation contexts, module 612 for detecting gesture inputs, and module 614 for determining and executing commands.
  • each of the modules, 608, 610, 612 and 614 may have access to the memory 606.
  • Memory 606 may include or interface with libraries, lists, arrays, databases, or other storage structures used to store sensor data, user preferences and information about input gesture interpretation contexts, actionable gesture inputs for each context, and/or commands corresponding to the different actionable gesture inputs.
  • the memory may also store information about engagement inputs and the gesture input interpretation contexts corresponding to each engagement input.
  • the modules 608, 610, 612, and 614 are illustrated separate from the processor 604 and the memory 606. In some embodiments, one or more of the modules 608, 610, 612, and 614 may be implemented by the processor 604 and/or in the memory 606.
  • the memory 606 contains an engagement input library 616, an input interpretation context library 618, a library of gesture inputs 620, and a library of commands 622.
  • Each library may contain indices which enable modules 608, 610, 612 and 614 to identify the one or more elements as corresponding to one or more elements of another one of the libraries 616, 618, 620 and 622.
  • Each of the modules 608, 610, 612 and 614, as well as processor 604 may have access to the memory 606 and each library therein, and may be capable of writing and reading data to and from the memory 606.
  • the processor 604 and module for determining and executing commands 614 may be configured to access and/or control the output component 624.
  • Libraries 616, 618, 620 and 622 may be hard encoded with information descriptive of the various actionable engagement inputs and their corresponding gesture input interpretation contexts, the gesture inputs associated with each context, and the commands linked to each such gesture input. Additionally, they may be supplemented with information provided by the user based on the user's preference, or may store information as determined by software or other medium executable by the device.
  • processor 604 in conjunction with modules 608, 610, 612 and 614, may be configured to perform certain steps described previously with regards to the discussion of processor(s) 510 in FIG. 5.
  • Memory 606 may provide storage functionality similar to storage device(s) 525.
  • Output component 624 may be configured to provide device outputs similar to those of output device(s) 520 in FIG. 5.
  • sensor 602 may be configured to enable certain functionality similar to functionality of input device(s) 515.
  • FIG. 7 depicts an example detailed algorithmic process which may be used by the device of FIG. 6 to implement certain methods according to the present disclosure.
  • the processor 604 may signal output component 624 to prompt the user for one or more engagement inputs.
  • the processor may interface with memory 606, and specifically, the engagement input library 616, to obtain information descriptive of the one or more engagement inputs for use in the prompt. Subsequently, the device remains in the limited detection mode and processor 604 continuously or intermittently processes sensor information relevant to detecting an engagement input.
  • processor 604 processes sensor information associated with the engagement input.
  • the processor 604 identifies the engagement input by using the module for detecting an engagement input 608 to review the engagement input library 616 and determine that the sensor information matches a descriptive entry in that library.
  • the processor 604 selects a gesture input interpretation context by using the module for selecting an input interpretation context 610 to scan the input interpretation context library 618 for the gesture input interpretation context entry that corresponds to the detected engagement input.
  • the processor 604 activates the selected gesture input interpretation context and activates the full detection mode.
  • the processor accesses the gesture input library 620 and the library of commands 622 to determine actionable gesture inputs for the active gesture input interpretation context, as well as the commands corresponding to these gesture inputs.
  • the processor commands the output component 624 to output communication to inform the user of one or more of the actionable gesture inputs and corresponding commands associated with the active gesture input interpretation context.
  • the processor begins analyzing sensor information to determine if the user has provided a gesture input. This analysis may involve the processor using the module for detecting gesture inputs 612 to access the library of gesture inputs 620 for the purpose of determining if an actionable gesture input has been provided.
  • the module for detecting gesture inputs 612 may compare sets of sensor information to descriptions of actionable gesture inputs in the library 620, and may detect a gesture input when a set of sensor information matches with one of the stored descriptions.
  • the processor in conjunction with the module for detecting gesture inputs 612, detects and identifies the gesture input by determining a match with an actionable gesture input description stored in the library of gesture inputs and associated with the active gesture input interpretation context.
  • the processor activates the module 614 for determining and executing commands.
  • the processor in conjunction with module 614, for example, may access the library of commands 622 and find the command having an index corresponding to the previously identified gesture input.
  • the processor executes the determined command.
  • FIG. 8 is a flow diagram depicting example operations of a gesture recognition device in accordance with the present disclosure.
  • the device may detect an engagement input, for example using the module 608, the processor 604, data from the sensor 602, and/or the library 618.
  • the device selects an input interpretation context from amongst a plurality of input interpretation contexts, for example using the module 610, the processor 604, the library 618, and/or the library 616. The selecting is based on the detected engagement input.
  • the engagement input detected at 802 is one of a plurality of engagement inputs, and each of the plurality of engagement inputs corresponds to a respective one of the plurality of input interpretation contexts.
  • the selecting at 804 may comprise selecting the input interpretation context corresponding to the detected engagement input.
  • the device detects a gesture input subsequent to the selecting of an input interpretation context, for example using the module 612, the processor 604, and/or the library 620.
  • the detection at 806 is based on the input interpretation context selected at 804.
  • one or more parameters associated with the selected input interpretation context may be used to detect the gesture input.
  • Such parameters may be stored in the library 616, for example, or loaded into the library 620 or a gesture detection engine when the input interpretation context is selected.
  • a gesture detection engine may be initialized or activated, for example to detect motion when the engagement comprises a static pose.
  • a gesture detection engine is implemented by the module 612 and/or by the processor 604, and/or as described above. Potential gestures available in the selected input interpretation context may be loaded into the gesture detection engine in some embodiments, for example from the library 616 and/or 620. In some embodiments, detectable or available gestures may be linked to function, for example in a lookup table or the library 622 or another portion of the memory 606. In some embodiments, gestures for an application may be registered with the gesture detection engine, and/or hand or gesture models for certain gestures or poses may be selected or used or loaded based on the selection of the input interpretation context. At 808, the device executes a command based on the detected gesture input and the selected input interpretation context, for example using the module 614, the processor 604, and/or the library 622.
  • embodiments were described as processes depicted as flow diagrams or block diagrams. Although each may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process may have additional steps not included in the figure.
  • embodiments of the methods may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof.
  • the program code or code segments to perform the associated tasks may be stored in a computer-readable medium such as a storage medium. Processors may perform the associated tasks.
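As a rough illustration of how the libraries 616, 618, 620 and 622 referenced above may be cross-indexed and chained by the modules 608, 610, 612 and 614, consider the following Python sketch. It is only a minimal model: the dictionary layout, the identifier names and the use of Python itself are assumptions made for illustration, while the phone-pose/redial and globe-pose/scroll-map pairings echo examples given elsewhere in this disclosure.

```python
from typing import Optional

# Hypothetical, simplified model of the cross-indexed libraries held in memory 606.
# Keys and values are illustrative assumptions, not prescribed contents.

ENGAGEMENT_INPUT_LIBRARY = {              # library 616
    "phone_pose": "outstretched thumb and pinky finger",
    "globe_pose": "thumb and index finger forming a circle",
}

INPUT_INTERPRETATION_CONTEXT_LIBRARY = {  # library 618: engagement input -> context
    "phone_pose": "telephonic",
    "globe_pose": "navigational",
}

GESTURE_INPUT_LIBRARY = {                 # library 620: context -> actionable gestures
    "telephonic": {"left_swipe"},
    "navigational": {"left_swipe"},
}

COMMAND_LIBRARY = {                       # library 622: (context, gesture) -> command
    ("telephonic", "left_swipe"): "redial",
    ("navigational", "left_swipe"): "scroll_map",
}


def command_for(engagement_input: str, gesture_input: str) -> Optional[str]:
    """Chain the libraries the way modules 608-614 might: engagement -> context
    -> actionable gesture -> command."""
    if engagement_input not in ENGAGEMENT_INPUT_LIBRARY:
        return None
    context = INPUT_INTERPRETATION_CONTEXT_LIBRARY.get(engagement_input)
    if context is None or gesture_input not in GESTURE_INPUT_LIBRARY.get(context, set()):
        return None
    return COMMAND_LIBRARY.get((context, gesture_input))


print(command_for("phone_pose", "left_swipe"))  # -> redial
print(command_for("globe_pose", "left_swipe"))  # -> scroll_map
```

A real implementation could equally keep the same associations in lookup tables, databases or other storage structures within the memory 606, as noted in the items above.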

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Methods, apparatuses, systems, and computer-readable media for performing engagement-dependent gesture recognition are presented. According to one or more aspects, a computing device may detect an engagement of a plurality of engagements, and each engagement of the plurality of engagements may define a gesture interpretation context of a plurality of gesture interpretation contexts. Subsequently, the computing device may detect a gesture. Then, the computing device may execute at least one command based on the detected gesture and the gesture interpretation context defined by the detected engagement. In some arrangements, the engagement may be an engagement pose, such as a hand pose, while in other arrangements, the detected engagement may be an audio engagement, such as a particular word or phrase spoken by a user.

Description

ENGAGEMENT-DEPENDENT GESTURE RECOGNITION
BACKGROUND
[0001] Aspects of the disclosure relate to computing technologies. In particular, aspects of the disclosure relate to computing technologies in applications or devices capable of providing an active user interface, such as systems, methods, apparatuses, and computer-readable media that perform gesture recognition.
[0002] Increasingly, computing platforms such as smart phones, tablet computers, personal digital assistants (PDAs), televisions, as well as other devices, include touch screens, accelerometers, cameras, proximity sensors, microphones, and/or other sensors that may allow these devices to sense motion or other user activity serving as a form of user input. For example, many touch screen devices provide an interface whereby the user can cause specific commands to be executed by dragging a finger across the screen in an up, down, left or right direction. In these devices, a user action is recognized and a corresponding command is executed in response. Aspects of the present disclosure provide more convenient, intuitive, and functional gesture recognition interfaces.
SUMMARY
[0003] Systems, methods, apparatuses, and computer-readable media for performing engagement-dependent gesture recognition are presented. In current gesture control systems, maintaining a library of simple dynamic gestures (e.g., a left swipe gesture, a right swipe gesture, etc., in which a user may move one or more body parts and/or other objects in a substantially linear direction and/or with a velocity sufficient to suggest the user's intent to perform the gesture) that can be performed by a user and recognized by a system may be a challenge. In particular, there may be only a limited number of "simple" gestures, and as gesture control systems begin to implement more complex gestures (such as having a user move their hand(s) in a triangle shape, for instance), it may be more difficult for users to perform all of the recognized gestures and/or it may take more time for a system to capture any particular gesture.
[0004] Another challenge that might arise in current gesture control systems is accurately determining when a user intends to interact with such a system— and when the user does not so intend. One way to make this determination is to wait for the user to input a command to activate or engage a gesture recognition mode, which may involve the user performing an engagement pose, using voice engagement inputs, or taking some other action. As discussed in greater detail below, an engagement pose may be a static gesture that the device recognizes as a command to enter a full gesture detection mode. In the full gesture detection mode, the device may seek to detect a range of gesture inputs with which the user can control the functionality of the device. In this way, once the user has engaged the system, the system may enter a gesture detection mode in which one or more gesture inputs may be performed by the user and recognized by the device to cause commands to be executed on the device.
[0005] In various embodiments described herein, a gesture control system on the device may be configured to recognize multiple unique engagement inputs. After detecting a particular engagement input and entering the full detection mode, the gesture control system may interpret subsequent gestures in accordance with a gesture interpretation context associated with the engagement input. For example, a user may engage the gesture control system by performing a hand pose which involves an outstretched thumb and pinky finger (e.g., mimicking the shape of a telephone), and which is associated with a first gesture input interpretation context. In response to detecting this particular hand pose, the device activates the first gesture interpretation context to which the hand pose corresponds. Under the first gesture interpretation context, a left swipe gesture may be linked to a "redial" command. Thus, if the device subsequently detects a left swipe gesture, it executes the redial command through a telephone application provided by the system.
[0006] Alternatively, a user may engage the full detection mode by performing a hand pose involving the thumb and index finger in a circle (e.g., mimicking the shape of a globe) which corresponds to a second gesture interpretation context. Under the second gesture interpretation context, a left swipe gesture may be associated with a scroll map command executable within a satellite application. Thus, when the thumb and index finger in a circle are used as an engagement gesture, the gesture control system will enter the full detection mode and subsequently interpret a left swipe gesture as corresponding to a "scroll map" command when the satellite navigation application is in use. [0007] According to one or more aspects of the disclosure, a computing device may be configured to detect multiple distinct engagement inputs. Each of the multiple engagement inputs may correspond to a different gesture input interpretation context. Subsequently, the computing device may detect any one of the multiple engagement inputs at the time the input is provided by the user. Then, in response to user gesture input, the computing device may execute at least one command based on the detected gesture input and the gesture interpretation context corresponding to the detected engagement input. In some arrangements, the engagement input may take the form of an engagement pose, such as a hand pose. In other arrangements, the detected engagement may be an audio engagement, such as a user's voice.
[0008] According to one or more additional and/or alternative aspects of the disclosure, a computing device may remain in a limited detection mode until an engagement pose is detected. While in the limited detection mode, the device may ignore one or more detected gesture inputs. The computing device may then detect an engagement pose and initiate processing of subsequent gesture inputs in response to detecting the engagement pose. Subsequently, the computing device may detect at least one gesture, and the computing device may further execute at least one command based on the detected gesture and the detected engagement pose.
[0009] According to one or more aspects, a method may comprise detecting an engagement of a plurality of engagements, where each engagement of the plurality of engagements defines a gesture interpretation context of a plurality of gesture interpretation contexts. The method may further comprise selecting a gesture interpretation context from amongst the plurality of gesture interpretation contexts. Further, the method may comprise detecting a gesture subsequent to detecting the engagement and executing at least one command based on the detected gesture and the selected gesture interpretation context. In some embodiments, the detection of the gesture is based on the selected gesture interpretation context. For example, one or more parameters associated with the selected gesture interpretation context are used for the detection. In some embodiments, potential gestures are loaded into a gesture detection engine based on the selected gesture interpretation context, or models for certain gestures may be selected or used or loaded based on the selected gesture interpretation context, for example. [0010] According to one or more aspects, a method may comprise ignoring non-engagement sensor input until an engagement pose of a plurality of engagement poses is detected, detecting at least one gesture based on the sensor input subsequent to the detection of the engagement pose, and executing at least one command based on the detected gesture and the detected engagement pose. In some embodiments, each engagement pose of the plurality of engagement poses defines a different gesture interpretation context. In some embodiments, the method further comprises initiating processing of the sensor input in response to detecting the engagement pose, where the at least one gesture is detected subsequent to the initiating.
[0011] According to one or more aspects, a method may comprise detecting a first engagement, activating at least some functionality of a gesture detection engine in response to the detecting, detecting a gesture subsequent to the activating using the gesture detection engine, and controlling an application based on the detected first engagement and the detected gesture. In some embodiments, the activating comprises switching from a low power mode to a mode that consumes more power than the low power mode. In some embodiments, the activating comprises beginning to receive information from one or more sensors. In some embodiments, the first engagement defines a gesture interpretation context for the application. In some embodiments, the method further comprises ignoring one or more gestures prior to detecting the first engagement. In some embodiments, the activating comprises inputting data points obtained from the first engagement into operation of the gesture detection engine.
[0012] According to one or more aspects, a method may comprise detecting a first engagement, receiving sensor input related to a first gesture subsequent to the first engagement, and determining whether the first gesture is a command. In some embodiments, the first gesture comprises a command when the first engagement is maintained for at least a portion of the first gesture. The method may further comprise determining that the first gesture does not comprise a command when the first engagement is not held for substantially the entirety of the first gesture.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] Aspects of the disclosure are illustrated by way of example. In the accompanying figures, like reference numbers indicate similar elements, and: [0014] FIG. 1 illustrates an example device that may implement one or more aspects of the disclosure.
[0015] FIG. 2 illustrates an example timeline showing how a computing device may switch from a limited detection mode into a gesture detection mode in response to detecting an engagement pose in accordance with one or more illustrative aspects of the disclosure.
[0016] FIG. 3 illustrates an example method of performing engagement-dependent gesture recognition in accordance with one or more illustrative aspects of the disclosure.
[0017] FIG. 4 illustrates an example table of engagement poses and gestures that may be recognized by a computing device in accordance with one or more illustrative aspects of the disclosure.
[0018] FIG. 5 illustrates an example computing system in which one or more aspects of the disclosure may be implemented.
[0019] FIG. 6 illustrates a second example system for implementing one or more aspects of the present disclosure.
[0020] FIG. 7 is a flow diagram depicting an algorithm for implementing certain methods of the present disclosure, and may be used in conjunction with the example system of FIG. 6.
[0021] FIG. 8 is a flow diagram depicting example operations of a device configured to operate in accordance with techniques disclosed herein.
DETAILED DESCRIPTION
[0022] Several illustrative embodiments will now be described with respect to the accompanying drawings, which form a part hereof. While particular embodiments, in which one or more aspects of the disclosure may be implemented, are described below, other embodiments may be used and various modifications may be made without departing from the scope of the disclosure or the spirit of the appended claims. [0023] FIG. 1 illustrates an example device that may implement one or more aspects of the disclosure. For example, computing device 100 may be a personal computer, set-top box, electronic gaming console device, laptop computer, smart phone, tablet computer, personal digital assistant, or other mobile device that is equipped with one or more sensors that allow computing device 100 to capture motion and/or other sensed conditions as a form of user input. For instance, computing device 100 may be equipped with, communicatively coupled to, and/or otherwise include one or more cameras, microphones, proximity sensors, gyroscopes, accelerometers, pressure sensors, grip sensors, touch screens, and/or other sensors. In addition to including one or more sensors, computing device 100 also may include one or more processors, memory units, and/or other hardware components, as described in greater detail below. In some embodiments, the device 100 is incorporated into an automobile, for example in a central console of the automobile.
[0024] In one or more arrangements, computing device 100 may use any and/or all of these sensors alone or in combination to recognize gestures, for example gestures that may not include a user touching the device 100, performed by one or more users of the device. For example, computing device 100 may use one or more cameras, such as camera 110, to capture hand and/or arm movements performed by a user, such as a hand wave or swipe motion, among other possible movements. In addition, more complex and/or large-scale movements, such as whole body movements performed by a user (e.g., walking, dancing, etc.), may likewise be captured by the one or more cameras (and/or other sensors) and subsequently be recognized as gestures by computing device 100, for instance. In yet another example, computing device 100 may use one or more touch screens, such as touch screen 120, to capture touch-based user input provided by a user, such as pinches, swipes, and twirls, among other possible movements. While these sample movements, which may alone be considered gestures and/or may be combined with other movements or actions to form more complex gestures, are described here as examples, any other sort of motion, movement, action, or other sensor-captured user input may likewise be received as gesture input and/or be recognized as a gesture by a computing device implementing one or more aspects of the disclosure, such as computing device 100.
[0025] In some arrangements, for instance, a camera such as a depth camera may be used to control a computer or media hub based on the recognition of gestures or changes in gestures of a user. Unlike some touch-screen systems that might suffer from the deleterious, obscuring effect of fingerprints, camera-based gesture input may allow photos, videos, or other images to be clearly displayed or otherwise output based on the user's natural body movements or poses. With this advantage in mind, gestures may be recognized that allow a user to view, pan (i.e., move), size, rotate, and perform other manipulations on image objects.
[0026] A depth camera, such as a structured light camera or a time-of-flight camera, may include infrared emitters and a sensor. The depth camera may produce a pulse of infrared light and subsequently measure the time it takes for the light to travel to an object and back to the sensor. A distance may be calculated based on the travel time. As described in greater detail below, other input devices and/or sensors may be used to detect or receive input and/or assist in detecting a gesture.
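Because the pulse travels to the object and back, the distance corresponds to half the round-trip time multiplied by the speed of light. A minimal sketch of that calculation follows; the function name and the example value are assumptions used only for illustration.

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to the reflecting object; the pulse covers the path out and back."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2.0

# A round trip of 10 nanoseconds corresponds to roughly 1.5 metres.
print(distance_from_round_trip(10e-9))
```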
[0027] As used herein, a "gesture" is intended to refer to a form of non-verbal communication made with part of a human body, and is contrasted with verbal communication such as speech. For instance, a gesture may be defined by a movement, change or transformation between a first position, pose, or expression and a second pose, position, or expression. Common gestures used in everyday discourse include for instance, an "air quote" gesture, a bowing gesture, a curtsey, a cheek-kiss, a finger or hand motion, a genuflection, a head bobble or movement, a high-five, a nod, a sad face, a raised fist, a salute, a thumbs-up motion, a pinching gesture, a hand or body twisting gesture, or a finger pointing gesture. A gesture may be detected using a camera, such as by analyzing an image of a user, using a tilt sensor, such as by detecting an angle that a user is holding or tilting a device, or by any other approach. As those of skill in the art will appreciate from the description above and the further descriptions below, a gesture may comprise a non-touch, touchless, or touch-free gesture such as a hand movement performed in mid-air, for example. Such non-touch, touchless, or touch-free gestures may be distinguished from various "gestures" that might be performed by drawing a pattern on a touchscreen, for example, in some embodiments. In some embodiments, a gesture may be performed in mid-air while holding a device, and one or more sensors in the device such as an accelerometer may be used to detect the gesture.
[0028] A user may make a gesture (or "gesticulate") by changing the position (i.e. a waving motion) of a body part, or may gesticulate while holding a body part in a constant position (i.e. by making a clenched fist gesture). In some arrangements, hand and arm gestures may be used to control functionality via camera input, while in other arrangements, other types of gestures may additionally or alternatively be used. Additionally or alternatively, hands and/or other body parts (e.g., arms, head, torso, legs, feet, etc.) may be moved in making one or more gestures. For example, some gestures may be performed by moving one or more hands, while other gestures may be performed by moving one or more hands in combination with one or more arms, one or more legs, and so on. In some embodiments, a gesture may comprise a certain pose, for example a hand or body pose, being maintained for a threshold amount of time.
[0029] FIG. 2 illustrates an example timeline showing how a computing device may switch from a limited detection mode into a full detection mode in response to detecting an engagement input in accordance with one or more illustrative aspects of the disclosure. As seen in FIG. 2, at a start time 205, a computing device, such as device 100, may be in a limited detection mode. In the limited detection mode, the device processes sensor data to detect an engagement input. However, in this mode, the device may not execute commands associated with user inputs available for controlling the device in the full detection mode. In other words, only engagement inputs are valid in the limited detection mode in some embodiments.
[0030] Furthermore, the device may also be configured so that while it is in the limited detection mode, power and processing resources are not devoted to detecting inputs associated with the commands associated with the full detection mode. During the limited detection mode, the computing device might be configured to analyze sensor input (and/or any other input that might be received during this time) relevant to determining whether a user has provided an engagement input. In some embodiments, one or more sensors may be configured to be turned off or powered down, or to not provide sensor information to other components while the device 100 is in the limited detection mode.
[0031] As used herein, an "engagement input" refers to an input which triggers activation of the full detection mode. The full detection mode refers to a mode of device operation in which certain inputs may be used to control the functionality of the device, as determined by the active gesture interpretation context. [0032] In some instances, an engagement input may be an engagement pose involving a user positioning his or her body or hand(s) in a particular way (e.g., an open palm, a closed fist, a "peace fingers" sign, a finger pointing at a device, etc.). In other instances, an engagement may involve one or more other body parts, in addition to and/or instead of the user's hand(s). For example, an open palm or closed fist may constitute an engagement input when detected at the end of an outstretched arm in some embodiments.
[0033] Additionally or alternatively, an engagement input may include an audio input such as a sound which triggers the device to enter the full gesture detection mode. For instance, an engagement input may be a user speaking a particular word or phrase which the device is configured to recognize as an engagement input. In some embodiments, an engagement input may be provided by a user occluding a sensor. For example, a device could be configured to recognize when the user blocks the field of view of a camera or the transmitting and/or receiving space of a sonic device. For example, a user traveling in an automobile may provide an engagement input by occluding a camera or other sensor present in the car or on a handheld device.
[0034] Once the computing device determines that an engagement input has been detected, the device enters a full detection mode. In one or more arrangements, the particular engagement input that was detected by the device may correspond to and trigger a particular gesture interpretation context. A gesture interpretation context may comprise a set of gesture inputs recognizable by the device when the context is engaged, as well as the command(s) activated by each such gesture. Thus, during full detection mode, the active gesture interpretation context may dictate the interpretation given by a device to detected gesture inputs. Furthermore, in full detection mode, the active gesture interpretation context may itself be dictated by the engagement input which triggered the device to enter the full detection mode. In some embodiments, a "default" engagement may be implemented that will allow the user to enter a most recent gesture interpretation context, for example rather than itself being associated with a unique gesture interpretation context.
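One possible way to organize this behaviour, including a "default" engagement that re-enters the most recent gesture interpretation context, is sketched below. The class name, pose names and mode labels are assumptions; the phone/globe pairings again mirror examples used elsewhere in this disclosure.

```python
# Hypothetical controller tracking the detection mode and the active context.
CONTEXT_FOR_ENGAGEMENT = {
    "phone_pose": "telephonic",
    "globe_pose": "navigational",
}

class EngagementController:
    def __init__(self) -> None:
        self.mode = "limited"          # limited detection mode until engaged
        self.active_context = None
        self.last_context = None

    def on_engagement(self, engagement_input: str) -> None:
        if engagement_input == "default":
            context = self.last_context           # re-enter the most recent context
        else:
            context = CONTEXT_FOR_ENGAGEMENT.get(engagement_input)
        if context is None:
            return                                 # not an actionable engagement input
        self.active_context = context
        self.last_context = context
        self.mode = "full"                         # enter the full detection mode

controller = EngagementController()
controller.on_engagement("phone_pose")
print(controller.mode, controller.active_context)  # full telephonic
```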
[0035] Continuing to refer to FIG. 2, once the computing device has entered the full detection mode, the computing device may detect one or more gestures. In response to detecting a particular gesture, the device may interpret the gesture based on the gesture interpretation context corresponding to the most recent engagement input. The recognizable gestures in the active gesture interpretation context may each be associated with a command. In this way, when any one of the gestures is detected as input, the device determines the command with which the gesture is associated, and executes the determined command. In some embodiments, the most recent engagement input may not only determine which commands are associated with which gestures, but the engagement input may be used to determine one or more parameters used to detect one or more of those gestures.
[0036] As an example implementation of the previously described methodology, a device could recognize a pose involving a user's thumb and outstretched pinky finger, and could associate this pose with a telephonic gesture interpretation context. The same device could also recognize a hand pose involving a thumb and forefinger pressed together in a circle, and could associate this pose with a separate navigational gesture interpretation context applicable to mapping applications.
[0037] If this example computing device detected an engagement that included a hand pose involving a user's thumb and outstretched pinky finger, then the device may interpret gestures detected during the gesture detection mode in accordance with a telephonic gesture interpretation context. In this context, if the computing device were to then recognize a left swipe gesture, the device may interpret the gesture as a "redial" command to be executed using a telephone application (e.g., a telephonic software application) provided by the device, for example. On the other hand, in this example, if the computing device recognized an engagement that included a hand pose in which the user's thumb and index finger form a circle (e.g., mimicking the shape of a globe), then the device may interpret gestures detected during the gesture detection mode in accordance with a navigational gesture interpretation context. In this context, if the computing device were to then recognize a left swipe gesture, the device may interpret the gesture as a "scroll map" command to be executed using a satellite navigation application (e.g., a satellite navigation software application) also provided by the device, for example. As suggested by these examples, in at least one embodiment, the computing device may be implemented as and/or in an automobile control system, and these various engagements and gestures may allow the user to control different functionalities of the automobile control system. [0038] FIG. 3 illustrates an example method of performing engagement-dependent gesture recognition in accordance with one or more illustrative aspects of the disclosure. According to one or more aspects, any and/or all of the methods and/or method steps described herein may be implemented by and/or in a computing device, such as computing device 100 and/or the computer system described in greater detail below, for instance. In one embodiment, one or more of the method steps described below with respect to FIG. 3 are implemented by a processor of the device 100. Additionally or alternatively, any and/or all of the methods and/or method steps described herein may be implemented in computer-readable instructions, such as computer-readable instructions stored on a computer-readable medium. Moreover, in accordance with the present disclosure, a device may incorporate other steps, calculations, algorithms, methods, or actions which may be needed for the execution of any of the steps, decisions, determinations and actions depicted in FIG. 3.
[0039] In conjunction with a description of the method of FIG. 3, subsequent paragraphs will refer ahead to FIGs. 5 and 6 to indicate certain components of these figures which may be associated with the method steps. In step 305, a computing device, such as a computing device capable of recognizing one or more gestures as user input (e.g., computing device 500 or 600), may be initialized, and/or one or more settings may be loaded. For example, when the computing device is first powered on, the device (in association with software stored and/or executed thereon, for instance) may load one or more settings, such as user preferences related to gestures. In at least one arrangement, these user preferences may include gesture mapping information in which particular gestures are mapped to particular commands in different gesture interpretation contexts. Additionally or alternatively, such gesture mapping information may specify the engagement inputs and the different gesture interpretation contexts brought about by each such engagement input. Information related to gesture mapping settings or the like may be stored in memory 535 or memory 606, for example.
[0040] In one or more additional and/or alternative arrangements, the settings may specify that engagement inputs operate at a "global" level, such that these engagement inputs correspond to the same gesture interpretation context regardless of the application currently "in focus" or being used. On the other hand, the settings may specify that other engagement inputs operate at an application level, such that these engagement inputs correspond to different gestures at different times, with the correspondence depending on which application is being used. The arrangement of global and application level engagement inputs may depend on the system implementing these concepts and a system may be configured with global and application level engagement inputs as needed to suit the specific system design objectives. The arrangement of global and application level engagement inputs may also be partially or entirely determined based on settings provided by the user.
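As a sketch of how global and application-level engagement inputs might be resolved, a device could consult a global table first and then a per-application table keyed by the application currently in focus. All identifiers below are assumed for illustration; the disclosure does not prescribe any particular mapping.

```python
# Assumed example data: one global engagement input and two application-level ones.
GLOBAL_ENGAGEMENTS = {
    "finger_point": "system_wide_context",
}

APPLICATION_ENGAGEMENTS = {
    "media_player": {"open_palm": "track_level_control"},
    "navigation":   {"open_palm": "map_control"},
}

def resolve_context(engagement_input: str, focused_application: str):
    """Global engagements apply regardless of focus; otherwise fall back to the
    mapping for the application currently in focus."""
    if engagement_input in GLOBAL_ENGAGEMENTS:
        return GLOBAL_ENGAGEMENTS[engagement_input]
    return APPLICATION_ENGAGEMENTS.get(focused_application, {}).get(engagement_input)

print(resolve_context("open_palm", "media_player"))   # track_level_control
print(resolve_context("open_palm", "navigation"))     # map_control
print(resolve_context("finger_point", "navigation"))  # system_wide_context
```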
[0041] For instance, the following table (labeled "Table A" below) illustrates an example of the gesture mapping information that may be used in connection with a system implementing one or more aspects of the disclosure in an automotive setting:
Table A
[Table A is shown as an image in the original publication and is not reproduced here.]
[0042] As another example, the following table (labeled "Table B" below) illustrates an example of the gesture mapping information that may be used in connection with a system implementing one or more aspects of the disclosure in a home entertainment system setting:
Table B
[Table B is shown as an image in the original publication and is not reproduced here.]
[0043] Tables A and B are provided for example purposes only, and alternative or additional mapping arrangements, commands, gestures, etc. may be used in a device employing gesture recognition in accordance with this disclosure.
[0044] Many additional devices and applications may also be configured to use gesture detection and gesture mapping information in which particular gestures are mapped to particular commands in different gesture interpretation contexts. For example, a television application interface may incorporate gesture detection to enable users to control the television. A television application may incorporate gesture interpretation contexts in which a certain engagement input facilitates changing the television channel with subsequent gestures, while a different engagement input facilitates changing the television volume with subsequent gestures.
[0045] As an additional example, a video game application may be controlled by a user through gesture detection. A gesture input interpretation context for the video game may include certain gesture inputs mapped to "pause" or "end" control commands, for example similar to how the video game may be operated at a main menu (i.e. main menu is the focus). A different interpretation context for the video game may include the same or different gesture inputs mapped to live game control commands, such as shooting, running, or jumping commands.
[0046] Moreover, for a device which incorporates more than one user application, a gesture interpretation context may facilitate changing an active application. For example, a gesture interpretation context available during use of a GPS application may contain mapping information tying a certain gesture input to a command for switching to or additionally activating another application, such as a telephone or camera application.
[0047] In step 310, the computing device may process input in the limited detection mode. For example, in step 310, computing device 100 may be in the limited detection mode in which sensor input may be received and/or captured by the device, but processed only for the purpose of detecting engagement inputs. Prior to processing, sensor input may be received by input device 515 or sensor 602. In certain embodiments, while a device operates in the limited detection mode, gestures that correspond to the commands recognized in the full detection mode may be ignored or go undetected. Furthermore, the device may deactivate or reduce power to sensors, sensor components, processor components, or software modules which are not involved in detecting engagement inputs. For example, in a device in which the engagement inputs are engagement poses, the device may reduce power to a touchscreen or audio receiver/detector components while using the camera to detect the engagement pose inputs. As noted above, operating in this manner may be advantageous when computing device 100 is relying on a limited power source, such as a battery, as processing resources (and consequently, power) may be conserved during the limited detection mode.
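A limited detection loop of this kind might be organized as in the sketch below. The sensor and detector interfaces are assumed placeholders, since the disclosure leaves the concrete sensor API open; only the engagement detector runs, and components not needed for engagement detection are powered down.

```python
import time

def limited_detection_loop(engagement_sensor, auxiliary_components,
                           detect_engagement, poll_interval_s: float = 0.1):
    """Hypothetical limited detection mode loop.

    `engagement_sensor` is assumed to expose read_frame(); each item in
    `auxiliary_components` is assumed to expose power_down(); and
    `detect_engagement(frame)` is assumed to return an engagement identifier
    or None. Only engagement inputs are acted upon in this mode.
    """
    for component in auxiliary_components:    # e.g., touchscreen, audio front end
        component.power_down()                # conserve power while in limited mode

    while True:
        frame = engagement_sensor.read_frame()
        engagement = detect_engagement(frame)
        if engagement is not None:
            return engagement                 # caller selects and activates a context
        time.sleep(poll_interval_s)           # intermittent rather than continuous polling
```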
[0048] Subsequently, in step 315, the device may determine whether an engagement input has been provided. This step may involve computing device 100 continuously or periodically analyzing sensor information received during the limited detection mode to determine if an engagement input (such as an engagement pose or audio engagement described above) has been provided. More specifically, this analysis may be performed by a processor such as the processor 510, in conjunction with memory device(s) 525. Alternatively, a processor such as processor 604 may be configurable to perform the analysis in conjunction with module 608. Until the computing device detects an engagement input at step 315, it may remain in the limited detection mode, as depicted by the redirection arrow pointing to step 310, and continue to process input data for the purpose of detecting an engagement input.
[0049] On the other hand, if the computing device detects an engagement input at step 315, the device selects and may activate a gesture input interpretation context based on the engagement input, and may commence a time-out counter, as depicted at 318. More specifically, selection and activation of a gesture interpretation context may be performed by a processor such as the processor 510, in conjunction with memory device(s) 525. Alternatively, a processor such as processor 604 may be configurable to perform the selection and activation, in conjunction with module 610.
[0050] The computing device may be configured to detect several possible engagement inputs at 315. In certain embodiments of the present disclosure, the computing device may be configured to detect one or more engagement inputs associated with gesture input interpretation contexts in which both static poses and dynamic gestures are recognizable and are mapped to control commands. Information depicting each engagement input (e.g. each hand pose, gesture, swipe, movement, etc.) detectable by the computing device may be accessibly stored within the device, as will be explained with reference to subsequent figures. This information may be directly determined from model engagement inputs provided by the user or another person. Additionally or alternatively, the information could be based on mathematical models which quantitatively depict the sensor inputs expected to be generated by each of the engagement inputs. Furthermore, in certain embodiments, the information could be dynamically altered and updated based on an artificial intelligence learning process occurring inside the device or at an external entity in communication with the device.
[0051] Additionally, information depicting the available gesture interpretation contexts may be stored in memory in a manner which associates each interpretation context with at least one engagement input. For example, the device may be configured to generate such associations through the use of one or more lookup tables or other storage mechanisms which facilitate associations within a data storage structure.
[0052] Then, at 320, the device enters the full detection mode and processes sensor information to detect gesture inputs, as indicated at step 320. For example, in step 320, computing device 100 may capture, store, analyze, and/or otherwise process sensor information to detect the gesture inputs relevant within the active gesture interpretation context. In one or more additional and/or alternative arrangements, in response to determining that an engagement has been detected, computing device 100 may further communicate to the user an indication of the gesture inputs available within the active gesture interpretation context and the commands which correspond to each such gesture input.
[0053] Additionally or alternatively, in response to detecting an engagement input, computing device 100 may play a sound and/or otherwise provide audio feedback to indicate activation of the gesture input interpretation context associated with the detected engagement. For example, the device may provide a "telephone dialing" sound effect upon detecting an engagement input associated with a telephonic context or a "twinkling stars" sound effect upon detecting an engagement gesture associated with a satellite navigational gesture input interpretation context.
[0054] Also, a device may be configured to provide a visual output indicating detection of an engagement gesture associated with a gesture input interpretation context. A visual output may be displayed on a screen or through another medium suitable for displaying images or visual feedback. As an example of a visual indication of a gesture interpretation context, a device may show graphical depictions of certain of the hand poses or gestures recognizable in the interpretation context and a description of the commands to which the gestures correspond. [0055] In some embodiments, after an engagement is detected in step 315, a gesture input detection engine may be initialized as part of step 320. Such initialization may be performed, at least in part, by a processor such as processor 604. The initialization of the gesture input detection engine may involve the processor 604 activating a module for detecting gesture inputs such as the one depicted at 612. The initialization may further involve processor 604 accessing information depicting recognizable engagement inputs. Such information may be stored in engagement input library 618, or in any other storage location.
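The initialization of a gesture input detection engine for the active context could, as one hedged illustration, amount to loading only the actionable gestures of that context into the engine. The class, the registration method and the gesture-to-command entries below are assumptions for illustration only.

```python
# Hypothetical gesture detection engine initialized for the active context.
class GestureDetectionEngine:
    def __init__(self) -> None:
        self.registered = {}                 # gesture name -> command

    def register(self, gesture_name: str, command: str) -> None:
        self.registered[gesture_name] = command

ACTIONABLE_GESTURES = {                      # assumed contents drawn from libraries 620/622
    "telephonic": {"left_swipe": "redial", "right_swipe": "call_log"},  # assumed entries
}

def initialize_engine(active_context: str) -> GestureDetectionEngine:
    """Load only the gestures actionable in the active context into the engine."""
    engine = GestureDetectionEngine()
    for gesture, command in ACTIONABLE_GESTURES.get(active_context, {}).items():
        engine.register(gesture, command)
    return engine

engine = initialize_engine("telephonic")
print(sorted(engine.registered))             # ['left_swipe', 'right_swipe']
```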
[0056] In some embodiments, as part of the process of detecting an engagement input at 315, the device may obtain information about the user or the environment surrounding the device. Such information may be saved and subsequently utilized in the full detection mode by the gesture detection engine or during the processing in step 320 and/or in step 325, for example to improve gesture input detection. In some embodiments, when an engagement input involving a hand pose is detected at step 315, the device 100 extracts features or key points of the hand that may be used to subsequently track hand motion in step 320 in order to detect a gesture input in full detection mode.
[0057] At step 325, the computing device 100, now in the full detection mode, determines whether an actionable gesture input has been provided by the user. By way of example, as part of performing step 325, computing device 600 may continuously or periodically analyze sensor data to determine whether a gesture input associated with the active interpretation context has been provided. In the case of computing device 600, such analysis may be performed by processor 604 in conjunction with module 612 and the library of gesture inputs 620.
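The comparison of sensor information against stored gesture descriptions could take many forms. As one hedged illustration, the sketch below treats each stored description as a small feature vector and reports the closest match within the active context when it falls under a threshold; all feature values, names and the threshold are assumptions.

```python
import math

# Assumed entries of the library of gesture inputs 620: context -> {gesture: features}.
# Features here are an illustrative (dx, dy, speed) triple; a real system may differ.
GESTURE_TEMPLATES = {
    "telephonic": {"left_swipe": (-1.0, 0.0, 0.8)},
    "navigational": {"left_swipe": (-1.0, 0.0, 0.8)},
}

def match_gesture(active_context: str, observed, threshold: float = 0.5):
    """Return the best-matching actionable gesture for the active context, or None."""
    best_name, best_distance = None, float("inf")
    for name, template in GESTURE_TEMPLATES.get(active_context, {}).items():
        distance = math.dist(template, observed)
        if distance < best_distance:
            best_name, best_distance = name, distance
    return best_name if best_distance <= threshold else None

print(match_gesture("telephonic", (-0.9, 0.1, 0.7)))  # left_swipe
print(match_gesture("telephonic", (1.0, 0.0, 0.8)))   # None: no sufficiently close template
```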
[0058] In one embodiment of the present disclosure, the full detection mode may only last for a predetermined period of time (e.g., 10 seconds or 10 seconds since the last valid input was detected), such that if an actionable gesture input is not detected within such time, the gesture detection mode "times out" and the device returns to the limited detection mode described above. This "time out" feature is depicted in FIG. 3 at 318, and may be implemented using a time lapse counter which, upon reaching a time limit or expiring, triggers cancellation of the full detection mode and re-initialization of the limited detection mode. When such a counter is used, the counter may be configured to start as soon as a detected engagement input is no longer detected, as shown by step 318. In this way, a user can hold an engagement pose or other such input and then delay in deciding upon an input gesture, without the gesture detection mode timing out before the user provides the gesture input.
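The time-out behaviour described above could be realized with a simple counter object like the following sketch, in which the counter only starts once the engagement input is no longer detected and is restarted whenever a valid input is detected. The class and method names are assumed.

```python
import time

class FullDetectionTimeout:
    """Hypothetical time-out counter for the full detection mode."""

    def __init__(self, limit_seconds: float = 10.0):
        self.limit_seconds = limit_seconds
        self.started_at = None                    # not running while the pose is held

    def engagement_no_longer_detected(self) -> None:
        if self.started_at is None:
            self.started_at = time.monotonic()    # start counting once the pose ends

    def valid_input_detected(self) -> None:
        self.started_at = time.monotonic()        # restart the window after a valid input

    def expired(self) -> bool:
        """When True, the device may cancel the full detection mode and
        re-initialize the limited detection mode."""
        if self.started_at is None:
            return False
        return time.monotonic() - self.started_at >= self.limit_seconds
```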
[0059] In some embodiments, the user may perform a certain gesture or another predefined engagement that "cancels" a previously provided engagement input, thereby allowing the user to reinitialize the time out counter or change gesture interpretation contexts without having to wait for the time out counter to expire.
[0060] As depicted in FIG. 3, if the computing device determines, at step 325, that a gesture has not yet been detected, then in step 330, the computing device determines if the time-out counter has expired. If the counter has expired, the device notifies the user that the full detection mode has timed out (e.g., by displaying a user interface, playing a sound, etc.), and subsequently reenters the limited detection mode, as depicted by the return arrow to step 310. If the time-out counter has not yet expired, the device continues to process sensor information in the full detection mode, as depicted by the return arrow to step 320.
[0061] If, at any time while in the full detection mode, the computing device detects an actionable gesture input (i.e. a gesture input that is part of the active gesture interpretation context) at step 325, then at step 335, the computing device interprets the gesture based on the active gesture input interpretation context. Interpreting the gesture may include determining which command(s) should be executed in response to the gesture in accordance with the active gesture input interpretation context. As discussed above, different contexts (corresponding to different engagements) may allow for control of different functionalities, the use of different gestures, or both. For instance, a navigational context may allow for control of a navigation application, while a telephonic context may allow for control of a telephone application.
[0062] In certain embodiments of the present invention, the detection of engagement inputs in the limited detection mode and/or selection of an interpretation context may be independent of the location at which the user provides the engagement input. In such cases, the device may be configured to activate the full detection mode and a gesture interpretation context regardless of the position relative to the device sensor at which the engagement input is detected. Additionally or alternatively, the device may be configured so that detection of input gestures in the full detection mode is independent of the position at which the gesture input is provided. Further, when elements are being displayed, for example on the screen 120, detection of an engagement input and/or selection of an input interpretation context may be independent of what is being displayed.
[0063] Certain embodiments of the present invention may involve gesture input interpretation contexts having only one layer of one-to-one mapping between inputs and corresponding commands. In such a case, all commands may be available to the user through the execution of only a single gesture input. Additionally or alternatively, the gesture input interpretation contexts used by a device may incorporate nested commands which cannot be executed unless a series of two consecutive gesture inputs are provided by the user. For example, in an example gesture input interpretation context incorporating single layer, one-to-one command mapping, an extended thumb and forefinger gesture input may directly correspond to a command for accessing a telephone application. In an example system in which nested commands are used, a gesture input involving a circular hand pose may directly correspond to a command to initialize a navigation application. Subsequent to the circular hand pose being provided as gesture input, an open palm or closed fist may thereafter correspond to a functional command within the navigation application. In this way, the functional command corresponding to the open palm is a nested command, and an open palm gesture input may not cause the functional command to be executed unless the circular hand pose has been detected first.
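One way to represent such nested commands, sketched below with assumed identifiers, is a mapping in which an entry may carry both a directly executable command and a further layer of gesture-to-command entries that only become reachable after the first gesture input has been detected.

```python
# Hypothetical nested mapping echoing the example above: the circular hand pose
# initializes a navigation application, and only then do the nested gestures apply.
NESTED_COMMANDS = {
    "extended_thumb_and_forefinger": {"command": "access_telephone_application"},
    "circular_hand_pose": {
        "command": "initialize_navigation_application",
        "nested": {
            "open_palm":   {"command": "navigation_function_a"},   # assumed name
            "closed_fist": {"command": "navigation_function_b"},   # assumed name
        },
    },
}

def commands_for(gesture_sequence):
    """Return the commands executed for a sequence of gesture inputs, in order."""
    executed, layer = [], NESTED_COMMANDS
    for gesture in gesture_sequence:
        entry = layer.get(gesture)
        if entry is None:
            break                                  # nested commands are unreachable directly
        executed.append(entry["command"])
        layer = entry.get("nested", {})
    return executed

print(commands_for(["circular_hand_pose", "open_palm"]))  # two commands, the second nested
print(commands_for(["open_palm"]))                        # [] -- not executable on its own
```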
[0064] Additional embodiments may involve the device being configured to operate based on nested engagement inputs. For example, a device using nested engagement inputs may be configured to recognize a first and second engagement input, or any series of engagement inputs. Such a device may be configured so as to not enter the full detection mode until after a complete series of engagement inputs has been detected.
[0065] A device capable of operations based on nesting of engagement inputs may enable a user to provide a first engagement input indicating an application which the user desires to activate. A subsequent engagement input may then specify a desired gesture interpretation context associated with the indicated application. The subsequent engagement input may also trigger the full detection mode, and activation of the indicated application and context. The device may be configured to respond to the second engagement input in a manner dictated by the first detected engagement input. Thus, in certain such device configurations, different engagement input sequences involving identical second engagement inputs may cause the device to activate different gesture input interpretation contexts.
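A series of nested engagement inputs could be tracked as in the sketch below. The pose names, application names and contexts are assumptions, chosen only to show how identical second engagement inputs can yield different results depending on the first engagement input.

```python
# Hypothetical two-stage engagement series: the first engagement indicates the
# application, the second selects the context and triggers the full detection mode.
ENGAGEMENT_SERIES = {
    ("phone_pose", "open_palm"): ("telephone_application", "dialing_context"),
    ("globe_pose", "open_palm"): ("navigation_application", "map_scroll_context"),
}

class EngagementSeriesTracker:
    def __init__(self) -> None:
        self.history = []

    def on_engagement(self, engagement_input: str):
        """Return (application, context) once a complete series has been provided."""
        self.history.append(engagement_input)
        result = ENGAGEMENT_SERIES.get(tuple(self.history[-2:]))
        if result is not None:
            self.history.clear()          # the full detection mode may now be entered
        return result

tracker = EngagementSeriesTracker()
print(tracker.on_engagement("phone_pose"))  # None: series not yet complete
print(tracker.on_engagement("open_palm"))   # ('telephone_application', 'dialing_context')
```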
[0066] At step 340, the device may execute the one or more commands which, in the active gesture input interpretation context, correspond to the previously detected gesture input. As depicted by the return arrow shown after step 340, the device may then return to processing sensor information at step 320, while the active gesture input interpretation context is maintained. In some embodiments, the time-out counter is reset at 340 or 320. Alternatively, the device may return to the limited detection mode or some other mode of operation.
[0067] FIG. 4 illustrates an example table of engagement poses and gestures that may be recognized by a computing device in accordance with one or more illustrative aspects of the disclosure. As seen in FIG. 4, in some existing approaches, a "swipe right" gesture 405 may cause a computing device to execute a "next track" command within a media player application.
[0068] In one or more embodiments, however, by first performing an engagement, such as an "open palm" engagement pose 410 or a "closed fist" engagement pose 420, the same gesture may be mapped to different functions depending on the context set by the engagement pose. As seen in FIG. 4, for instance, if a user performs an "open palm" engagement pose 410 and then performs a "swipe right" gesture 415, the computing device may execute a "next track" command within the media player based on a track-level control context set by the "open palm" engagement pose 410. On the other hand, if a user performs a "closed fist" engagement pose 420 and then performs a "swipe right" gesture 425, the computing device may execute a "next album" command within the media player based on the album-level control context set by the "closed fist" engagement pose 420.
[0069] Having described multiple aspects of engagement-dependent gesture recognition, an example of a computing system in which various aspects of the disclosure may be implemented will now be described with respect to FIG. 5. According to one or more aspects, a computer system as illustrated in FIG. 5 may be incorporated as part of a computing device, which may implement, perform, and/or execute any and/or all of the features, methods, and/or method steps described herein. For example, a handheld device may be entirely or partially composed of a computer system 500. The hand-held device may be any computing device with a sensor capable of sensing user inputs, such as a camera and/or a touchscreen display unit. Examples of a hand-held device include but are not limited to video game consoles, tablets, smart phones, and mobile devices. The system 500 of FIG. 5 is one of many structures which may be used to implement some or all of the features and methods described previously with regards to the device 100.
[0070] In accordance with the present disclosure, the structure depicted in FIG. 5 may be used within a host computer system, a remote kiosk/terminal, a point-of-sale device, a mobile device, a set-top box, or any other type of computer system configured to detect user inputs. FIG. 5 is meant only to provide a generalized illustration of various components, any and/or all of which may be utilized as appropriate. FIG. 5, therefore, broadly illustrates how individual system elements may be implemented and is not intended to depict that all such system elements must be disposed in an integrated manner. In accordance with this disclosure, system components such as the ones shown in FIG. 5 may be incorporated within a common electrical structure, or may be located in separate structures. Although certain of these components are depicted as hardware, the components are not to be so limited, and may be embodied in or exist as software, processor modules, one or more micro-control devices, logical circuitry, algorithms, remote or local data storage, or any other suitable devices, structures, or implementations known in the arts relevant to user input detection systems.
[0071] The computer system 500 is shown comprising hardware elements that can be electrically coupled via a bus 505 (or may otherwise be in communication, as appropriate). The hardware elements may include one or more processors 510, including without limitation one or more general-purpose processors and/or one or more special-purpose processors (such as digital signal processing chips, graphics acceleration processors, and/or the like); one or more input devices 515, which can include without limitation a camera, a mouse, a keyboard and/or the like; and one or more output devices 520, which can include without limitation a display unit, a printer and/or the like. The bus 505 may also provide communication between cores of the processor 510 in some embodiments. [0072] The computer system 500 may further include (and/or be in communication with) one or more non-transitory storage devices 525, which can comprise, without limitation, local and/or network accessible storage, and/or can include, without limitation, a disk drive, a drive array, an optical storage device, a solid-state storage device such as a random access memory ("RAM") and/or a read-only memory ("ROM"), which can be programmable, flash-updateable and/or the like. Such storage devices may be configured to implement any appropriate data storage, including without limitation, various file systems, database structures, and/or the like.
[0073] The computer system 500 might also include a communications subsystem 530, which can include without limitation a modem, a network card (wireless or wired), an infrared communication device, a wireless communication device and/or chipset (such as a Bluetooth® device, an 802.11 device, a WiFi device, a WiMax device, cellular communication facilities, etc.), and/or the like. The communications subsystem 530 may permit data to be exchanged with a network (such as the network described below, to name one example), other computer systems, and/or any other devices described herein. In many embodiments, the computer system 500 will further comprise a non-transitory working memory 535, which can include a RAM or ROM device, as described above.
[0074] The computer system 500 also can comprise software elements, shown as being currently located within the working memory 535, including an operating system 540, device drivers, executable libraries, and/or other code, such as one or more application programs 545, which may comprise computer programs provided by various embodiments, and/or may be designed to implement methods, and/or configure systems, provided by other embodiments, as described herein. Merely by way of example, one or more procedures described with respect to the method(s) discussed above, for example as described with respect to FIG. 3, might be implemented as code and/or instructions executable by a computer (and/or a processor within a computer); in an aspect, then, such code and/or instructions can be used to configure and/or adapt a general purpose computer (or other device) to perform one or more operations in accordance with the described methods. The processor 510, memory 535, operating system 540, and/or application programs 545 may comprise a gesture detection engine, as discussed above, and/or may be used to implement any or all of blocks 305-340 described with respect to FIG. 3.

[0075] A set of these instructions and/or code might be stored on a computer-readable storage medium, such as the storage device(s) 525 described above. In some cases, the storage medium might be incorporated within a computer system, such as computer system 500. In other embodiments, the storage medium might be separate from a computer system (e.g., a removable medium, such as a compact disc), and/or provided in an installation package, such that the storage medium can be used to program, configure and/or adapt a general purpose computer with the instructions/code stored thereon. These instructions might take the form of executable code, which is executable by the computer system 500 and/or might take the form of source and/or installable code, which, upon compilation and/or installation on the computer system 500 (e.g., using any of a variety of generally available compilers, installation programs, compression/decompression utilities, etc.) then takes the form of executable code.
[0076] Substantial variations may be made in accordance with specific requirements. For example, customized hardware might also be used, and/or particular elements might be implemented in hardware, software (including portable software, such as applets, etc.), or both. Further, connection to other computing devices such as network input/output devices may be employed.
[0077] Some embodiments may employ a computer system (such as the computer system 500) to perform methods in accordance with the disclosure. For example, some or all of the procedures of the described methods may be performed by the computer system 500 in response to processor 510 executing one or more sequences of one or more instructions (which might be incorporated into the operating system 540 and/or other code, such as an application program 545) contained in the working memory 535. Such instructions may be read into the working memory 535 from another computer-readable medium, such as one or more of the storage device(s) 525. Merely by way of example, execution of the sequences of instructions contained in the working memory 535 might cause the processor(s) 510 to perform one or more procedures of the methods described herein, for example a method described with respect to FIG. 3.
[0078] The terms "machine-readable medium" and "computer-readable medium," as used herein, refer to any medium that participates in providing data that causes a machine to operate in a specific fashion. In an embodiment implemented using the computer system 500, various computer-readable media might be involved in providing instructions/code to processor(s) 510 for execution and/or might be used to store and/or carry such instructions/code (e.g., as signals). In many implementations, a computer-readable medium is a physical and/or tangible storage medium. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media include, for example, optical and/or magnetic disks, such as the storage device(s) 525. Volatile media include, without limitation, dynamic memory, such as the working memory 535. Transmission media include, without limitation, coaxial cables, copper wire and fiber optics, including the wires that comprise the bus 505, as well as the various components of the communications subsystem 530 (and/or the media by which the communications subsystem 530 provides communication with other devices). Hence, transmission media can also take the form of waves (including without limitation radio, acoustic and/or light waves, such as those generated during radio-wave and infrared data communications).
[0079] Common forms of physical and/or tangible computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read instructions and/or code.
[0080] Various forms of computer-readable media may be involved in carrying one or more sequences of one or more instructions to the processor(s) 510 for execution. Merely by way of example, the instructions may initially be carried on a magnetic disk and/or optical disc of a remote computer. A remote computer might load the instructions into its dynamic memory and send the instructions as signals over a transmission medium to be received and/or executed by the computer system 500. These signals, which might be in the form of electromagnetic signals, acoustic signals, optical signals and/or the like, are all examples of carrier waves on which instructions can be encoded, in accordance with various embodiments of the invention.
[0081] The communications subsystem 530 (and/or components thereof) generally will receive the signals, and the bus 505 then might carry the signals (and/or the data, instructions, etc. carried by the signals) to the working memory 535, from which the processor(s) 510 retrieves and executes the instructions. The instructions received by the working memory 535 may optionally be stored on a non-transitory storage device 525 either before or after execution by the processor(s) 510.
[0082] FIG. 6 depicts a second device which may alternatively be used to implement any of the methods, steps, processes or algorithms previously disclosed herein. FIG. 6 includes one or more sensors 602 which can be used to sense engagement inputs and/or gesture inputs, and which provide sensor information regarding such inputs to a processor 604. The sensor 602 may comprise ultrasound technology (e.g., using microphones and/or ultrasound emitters), image or video capturing technologies such as a camera, IR or UV technology, magnetic field technology, emitted electromagnetic radiation technology, an accelerometer and/or gyroscope, and/or other technologies that may be used to sense an engagement input and/or a gesture input. In some embodiments, the sensor 602 comprises a camera configured to capture two-dimensional images. Such a camera may be included in a cost-efficient system in some embodiments, and the use of input interpretation contexts may expand the number of commands that may be efficiently detected by the camera in some embodiments.
[0083] Processor 604 may store some or all sensor information in memory 606. Furthermore, processor 604 is configured to communicate with a module 608 for detecting engagement inputs, a module 610 for selecting and activating input interpretation contexts, a module 612 for detecting gesture inputs, and a module 614 for determining and executing commands.
[0084] Additionally, each of the modules 608, 610, 612, and 614 may have access to the memory 606. Memory 606 may include or interface with libraries, lists, arrays, databases, or other storage structures used to store sensor data, user preferences and information about input gesture interpretation contexts, actionable gesture inputs for each context, and/or commands corresponding to the different actionable gesture inputs. The memory may also store information about engagement inputs and the gesture input interpretation contexts corresponding to each engagement input. In FIG. 6, the modules 608, 610, 612, and 614 are illustrated separate from the processor 604 and the memory 606. In some embodiments, one or more of the modules 608, 610, 612, and 614 may be implemented by the processor 604 and/or in the memory 606.

[0085] In an example arrangement depicted in FIG. 6, the memory 606 contains an engagement input library 616, an input interpretation context library 618, a library of gesture inputs 620, and a library of commands 622. Each library may contain indices which enable the modules 608, 610, 612, and 614 to identify one or more of its elements as corresponding to one or more elements of another of the libraries 616, 618, 620, and 622. Each of the modules 608, 610, 612, and 614, as well as the processor 604, may have access to the memory 606 and each library therein, and may be capable of writing and reading data to and from the memory 606. Moreover, the processor 604 and the module for determining and executing commands 614 may be configured to access and/or control the output component 624.
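As one way to visualize how the libraries 616, 618, 620, and 622 might be indexed against one another, the sketch below models them as Python dictionaries. The poses, context names, gesture descriptions, and commands are hypothetical placeholders chosen for illustration only; the disclosure does not prescribe any particular data structure or vocabulary.

```python
# Illustrative sketch only: the libraries 616-622 are modeled as plain
# dictionaries whose shared keys act as the cross-library indices.

ENGAGEMENT_INPUT_LIBRARY = {            # library 616: engagement input -> context
    "open_palm_pose": "media_context",
    "closed_fist_pose": "navigation_context",
}

INPUT_INTERPRETATION_CONTEXTS = {       # library 618: context -> actionable gestures
    "media_context": ["swipe_left", "swipe_right", "palm_push"],
    "navigation_context": ["point_up", "point_down", "circle"],
}

GESTURE_INPUT_LIBRARY = {               # library 620: gesture -> stored description
    "swipe_left": {"axis": "x", "direction": -1},
    "swipe_right": {"axis": "x", "direction": 1},
    "palm_push": {"axis": "z", "direction": 1},
    "point_up": {"axis": "y", "direction": 1},
    "point_down": {"axis": "y", "direction": -1},
    "circle": {"axis": "xy", "direction": 0},
}

COMMAND_LIBRARY = {                     # library 622: (context, gesture) -> command
    ("media_context", "swipe_left"): "previous_track",
    ("media_context", "swipe_right"): "next_track",
    ("media_context", "palm_push"): "pause",
    ("navigation_context", "point_up"): "zoom_in",
    ("navigation_context", "point_down"): "zoom_out",
    ("navigation_context", "circle"): "recenter_map",
}
```

Under this hypothetical arrangement, an engagement input keys directly into its interpretation context, and a (context, gesture) pair keys into the command to execute.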
[0086] Libraries 616, 618, 620 and 622 may be hard-coded with information descriptive of the various actionable engagement inputs and their corresponding gesture input interpretation contexts, the gesture inputs associated with each context, and the commands linked to each such gesture input. Additionally, the libraries may be supplemented with information provided by the user based on the user's preferences, or may store information determined by software or another medium executable by the device.
[0087] Certain components depicted in FIG. 6 may be understood as being configurable to perform certain of the steps which may be performed by components depicted in FIG. 5. For example, in certain embodiments of the device of FIG. 6, processor 604, in conjunction with modules 608, 610, 612 and 614, may be configured to perform certain steps described previously with regards to the discussion of processor(s) 510 in FIG. 5. Memory 606 may provide storage functionality similar to storage device(s) 525. Output component 624 may be configured to provide device outputs similar to those of output device(s) 520 in FIG. 5. Additionally, sensor 602 may be configured to enable certain functionality similar to functionality of input device(s) 515.
[0088] FIG. 7 depicts an example detailed algorithmic process which may be used by the device of FIG. 6 to implement certain methods according to the present disclosure. As depicted at 702, while the device 700 is in the limited detection mode, the processor 604 may signal output component 624 to prompt the user for one or more engagement inputs. The processor may interface with memory 606, and specifically, the engagement input library 616, to obtain information descriptive of the one or more engagement inputs for use in the prompt. Subsequently, the device remains in the limited detection mode and processor 604 continuously or intermittently processes sensor information relevant to detecting an engagement input.
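A minimal sketch of the prompting and intermittent polling described at 702 might look like the following. It assumes hypothetical `sensor.read()` and `output.show()` interfaces and relies on the `match_engagement()` helper sketched after paragraph [0089]; none of these names come from the disclosure.

```python
import time

def run_limited_detection_mode(sensor, output, engagement_library, poll_interval_s=0.1):
    # Prompt the user with the engagement inputs known to library 616.
    output.show("Engage with one of: " + ", ".join(engagement_library))

    # Remain in the limited detection mode, intermittently processing only the
    # sensor information relevant to detecting an engagement input.
    while True:
        frame = sensor.read()
        engagement = match_engagement(frame, engagement_library)  # see sketch after [0089]
        if engagement is not None:
            return engagement
        time.sleep(poll_interval_s)
```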
[0089] At 704, sometime after being prompted, the user provides the engagement input. At 706, processor 604 processes sensor information associated with the engagement input. The processor 604 identifies the engagement input by using the module 608 for detecting an engagement input to review the engagement input library 616 and determine that the sensor information matches a descriptive entry in that library. As described generally, at 708, the processor 604 then selects a gesture input interpretation context by using the module 610 for selecting an input interpretation context to scan the input interpretation context library 618 for the gesture input interpretation context entry that corresponds to the detected engagement input. At 709, the processor 604 activates the selected gesture input interpretation context and activates the full detection mode.
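One possible, simplified rendering of the matching at 706 and the context selection at 708 is sketched below. Here `frame.matches_pose()` stands in for whatever pose classification the module 608 actually performs on the sensor information; it, and the library layout, are assumptions of this sketch.

```python
def match_engagement(frame, engagement_library):
    # Return the name of the first engagement input whose stored description
    # the sensor frame matches, or None if no actionable engagement is present.
    for engagement_name in engagement_library:
        if frame.matches_pose(engagement_name):
            return engagement_name
    return None

def select_context(engagement_name, engagement_library):
    # Look up the gesture input interpretation context keyed to the detected
    # engagement input, mirroring the scan of library 618 described at 708.
    return engagement_library[engagement_name]
```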
[0090] At 710, the processor accesses the gesture input library 620 and the library of commands 622 to determine the actionable gesture inputs for the active gesture input interpretation context, as well as the commands corresponding to those gesture inputs. At 711, the processor commands the output component 624 to output a communication informing the user of one or more of the actionable gesture inputs and corresponding commands associated with the active gesture input interpretation context.
[0091] At 712, the processor begins analyzing sensor information to determine whether the user has provided a gesture input. This analysis may involve the processor using the module for detecting gesture inputs 612 to access the library of gesture inputs 620 for the purpose of determining whether an actionable gesture input has been provided. The module for detecting gesture inputs 612 may compare sets of sensor information to descriptions of actionable gesture inputs in the library 620, and may detect a gesture input when a set of sensor information matches one of the stored descriptions.
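The comparison of sensor information sets against stored gesture descriptions could, for example, be sketched as the template check below. The feature encoding and the tolerance value are assumptions made for illustration and are not specified by the disclosure; a real detector would compare richer sensor data.

```python
def detect_gesture(motion_features, active_context, context_library, gesture_library,
                   tolerance=0.2):
    # Compare one set of observed motion features against the stored descriptions
    # of the gestures that are actionable in the active context (library 620),
    # and report the first matching gesture, if any.
    for gesture_name in context_library[active_context]:
        template = gesture_library[gesture_name]
        if (motion_features.get("axis") == template["axis"]
                and abs(motion_features.get("direction", 0) - template["direction"]) <= tolerance):
            return gesture_name
    return None
```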
[0092] Subsequently, while the processor continues to analyze sensor information, the user provides a gesture input at 714. At 716, the processor, in conjunction with the module for detecting gesture inputs 612, detects and identifies the gesture input by determining a match with an actionable gesture input description stored in the library of gesture inputs and associated with the active gesture input interpretation context.

[0093] Subsequently, at 718, the processor activates the module 614 for determining and executing commands. The processor, in conjunction with module 614, for example, may access the library of commands 622 and find the command having an index corresponding to the previously identified gesture input. At 720, the processor executes the determined command.
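The command lookup and execution at 718-720 might be sketched as follows, with `device.perform()` standing in, purely for illustration, for whatever actuation or output the embodiment actually provides.

```python
def determine_and_execute_command(active_context, gesture_name, command_library, device):
    # Find the command indexed by the (context, gesture) pair in library 622
    # and execute it; return the command name, or None if no mapping exists.
    command = command_library.get((active_context, gesture_name))
    if command is not None:
        device.perform(command)
    return command
```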
[0094] FIG. 8 is a flow diagram depicting example operations of a gesture recognition device in accordance with the present disclosure. As depicted, at 802, the device may detect an engagement input, for example using the module 608, the processor 604, data from the sensor 602, and/or the library 618. At 804, the device selects an input interpretation context from amongst a plurality of input interpretation contexts, for example using the module 610, the processor 604, the library 618, and/or the library 616. The selecting is based on the detected engagement input. In some embodiments, the engagement input detected at 802 is one of a plurality of engagement inputs, and each of the plurality of engagement inputs corresponds to a respective one of the plurality of input interpretation contexts. In such embodiments, the selecting at 804 may comprise selecting the input interpretation context corresponding to the detected engagement input. At 806, the device detects a gesture input subsequent to the selecting of an input interpretation context, for example using the module 612, the processor 604, and/or the library 620. In some embodiments, the detection at 806 is based on the input interpretation context selected at 804. For example, one or more parameters associated with the selected input interpretation context may be used to detect the gesture input. Such parameters may be stored in the library 616, for example, or loaded into the library 620 or a gesture detection engine when the input interpretation context is selected. In some embodiments, a gesture detection engine may be initialized or activated, for example to detect motion when the engagement comprises a static pose. In some embodiments, a gesture detection engine is implemented by the module 612 and/or by the processor 604, and/or as described above. Potential gestures available in the selected input interpretation context may be loaded into the gesture detection engine in some embodiments, for example from the library 616 and/or 620. In some embodiments, detectable or available gestures may be linked to functions, for example in a lookup table or the library 622 or another portion of the memory 606. In some embodiments, gestures for an application may be registered with the gesture detection engine, and/or hand or gesture models for certain gestures or poses may be selected or used or loaded based on the selection of the input interpretation context. At 808, the device executes a command based on the detected gesture input and the selected input interpretation context, for example using the module 614, the processor 604, and/or the library 622.
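Tying the preceding sketches together, one hypothetical composition of the operations 802-808 of FIG. 8 is shown below. It restricts gesture matching to the gestures registered for the selected context, which is one way, among the several described above, that the selected input interpretation context can parameterize detection; the helpers, libraries, and `frame.motion_features()` interface are all assumptions carried over from the earlier sketches.

```python
def gesture_recognition_loop(sensor, output, device):
    # Repeatedly perform operations 802-808 using the hypothetical helpers and
    # library dictionaries sketched in the preceding paragraphs.
    while True:
        # 802: stay in the limited detection mode until an engagement input is detected.
        engagement = run_limited_detection_mode(sensor, output, ENGAGEMENT_INPUT_LIBRARY)

        # 804: select the input interpretation context keyed to that engagement input.
        context = select_context(engagement, ENGAGEMENT_INPUT_LIBRARY)
        output.show("Active context: " + context)

        # 806: full detection mode; only the gestures registered for the selected
        # context are compared against incoming sensor information.
        gesture = None
        while gesture is None:
            frame = sensor.read()
            gesture = detect_gesture(frame.motion_features(), context,
                                     INPUT_INTERPRETATION_CONTEXTS, GESTURE_INPUT_LIBRARY)

        # 808: execute the command linked to the detected gesture in this context.
        determine_and_execute_command(context, gesture, COMMAND_LIBRARY, device)
```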
[0095] The methods, systems, and devices discussed above are examples. Various embodiments may omit, substitute, or add various procedures or components as appropriate. For instance, in alternative configurations, the methods described may be performed in an order different from that described, and/or various stages may be added, omitted, and/or combined. Also, features described with respect to certain embodiments may be combined in various other embodiments. Different aspects and elements of the embodiments may be combined in a similar manner. Also, technology evolves and, thus, many of the elements are examples that do not limit the scope of the disclosure to those specific examples.
[0096] Specific details are given in the description to provide a thorough understanding of the embodiments. However, embodiments may be practiced without these specific details. For example, well-known circuits, processes, algorithms, structures, and techniques have been shown without unnecessary detail in order to avoid obscuring the embodiments. This description provides example embodiments only, and is not intended to limit the scope, applicability, or configuration of the invention. Rather, the preceding description of the embodiments will provide those skilled in the art with an enabling description for implementing embodiments of the invention. Various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the invention.
[0097] Also, some embodiments were described as processes depicted as flow diagrams or block diagrams. Although each may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process may have additional steps not included in the figure. Furthermore, embodiments of the methods may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware, or microcode, the program code or code segments to perform the associated tasks may be stored in a computer-readable medium such as a storage medium. Processors may perform the associated tasks.
[0098] Having described several embodiments, various modifications, alternative constructions, and equivalents may be used without departing from the spirit of the disclosure. For example, the above elements may merely be a component of a larger system, wherein other rules may take precedence over or otherwise modify the application of the invention. Also, a number of steps may be undertaken before, during, or after the above elements are considered. Accordingly, the above description does not limit the scope of the disclosure.

Claims

WHAT IS CLAIMED IS:
1. A method comprising:
detecting an engagement input;
selecting an input interpretation context from amongst a plurality of input interpretation contexts, the selecting being done based on the detected engagement input;
detecting a gesture input subsequent to the selecting an input interpretation context; and
executing a command based on the detected gesture input and the selected input interpretation context.
2. The method of claim 1, wherein detecting an engagement input comprises detecting an engagement pose being maintained for a threshold amount of time.
3. The method of claim 2, wherein the engagement pose comprises a hand pose and wherein the hand pose comprises a substantially open palm and outstretched fingers.
4. The method of claim 2, wherein the engagement pose comprises a hand pose and wherein the hand pose comprises a closed fist and an outstretched arm.
5. The method of claim 2, wherein the engagement pose comprises a hand pose, and wherein the selecting is independent of a position of the hand when the hand pose is detected.
6. The method of claim 1, wherein detecting an engagement input comprises detecting a gesture or an occlusion of a sensor.
7. The method of claim 1, wherein detecting an engagement input comprises detecting an audio engagement, the audio engagement comprising a word or phrase spoken by a user.
8. The method of claim 1, wherein the detected engagement input comprises one of a plurality of engagement inputs, each of the plurality of engagement inputs corresponding to a respective one of the plurality of input interpretation contexts, and wherein the selecting comprises selecting the input interpretation context corresponding to the detected engagement input.
9. The method of claim 1, further comprising:
in response to the detecting an engagement input, displaying a user interface which identifies an active input interpretation context.
10. The method of claim 1, further comprising:
providing audio feedback in response to the detecting an engagement input, wherein the audio feedback identifies an active input interpretation context.
11. The method of claim 1, wherein the selected input interpretation context is defined at an application level such that said selected input interpretation context is defined by an application that is in focus.
12. The method of claim 1, further comprising causing one or more elements to be displayed prior to detecting the engagement input, wherein the selecting is independent of the one or more elements being displayed.
13. The method of claim 1, wherein detecting the gesture input comprises detecting the gesture input based on one or more parameters associated with the selected input interpretation context.
14. The method of claim 1, further comprising ignoring sensor input irrelevant to detecting the engagement input, the ignoring done prior to detecting the engagement input.
15. The method of claim 1,
wherein detecting an engagement input comprises:
detecting a first engagement input associated with a first input interpretation context for controlling a first functionality; and
detecting a second engagement input associated with a second input interpretation context for controlling a second functionality different from the first functionality.
16. The method of claim 15, wherein the first functionality is associated with a first type of subsystem within an automobile control system, and wherein the second functionality is associated with a second type of subsystem within the automobile control system.
17. The method of claim 15, wherein the first functionality is associated with a first type of subsystem within a media player application, and wherein the second functionality is associated with a second type of subsystem within the media player application.
18. The method of claim 1, wherein the selected input interpretation context is globally defined.
19. The method of claim 1, wherein detecting the engagement input comprises detecting an initial engagement input and a later engagement input, and wherein detecting the later engagement input comprises using an input interpretation context associated with the initial engagement input.
20. An apparatus comprising:
an engagement detection module configured to detect an engagement input;
a selection module configured to select an input interpretation context from amongst a plurality of input interpretation contexts, the selection module being configured to perform the selecting based on the detected engagement input;
a detection module configured to detect a gesture input subsequent to the selection module selecting the input interpretation context; and
a processor configured to execute a command based on the detected gesture input and the selected input interpretation context.
21. The apparatus of claim 20, wherein the engagement detection module is configured to detect an engagement pose being maintained for a threshold amount of time.
22. The apparatus of claim 21, wherein the engagement pose comprises a hand pose and wherein the selection module is configured to select the input interpretation context independent of a position of the hand when the hand pose is detected.
23. The apparatus of claim 20, further comprising a display screen, wherein the processor is further configured to cause the display screen to display a user interface in response to detecting an engagement input, and wherein the user interface identifies an active input interpretation context.
24. The apparatus of claim 20, further comprising an audio speaker, wherein the processor is further configured to cause the audio speaker to output audio feedback in response to detecting an engagement input, wherein the audio feedback identifies an active input interpretation context.
25. The apparatus of claim 20, wherein the input interpretation context is defined at an application level such that the input interpretation context is defined by an application that is in focus.
26. The apparatus of claim 20, further comprising a camera configured to capture two-dimensional images, wherein the engagement detection module is configured to detect the engagement input based on at least one image captured by the camera, and wherein the detection module is configured to detect the gesture input using at least one other image captured by the camera.
27. The apparatus of claim 20, further comprising a sensor configured to input sensor data to the engagement detection module, and wherein the processor is further configured to cause the apparatus to ignore sensor data irrelevant to detecting an engagement input.
28. The apparatus of claim 20, wherein the engagement detection module is configured to:
detect a first engagement input associated with a first input interpretation context for controlling a first functionality, and
detect a second engagement input associated with a second input interpretation context for controlling a second functionality different from the first functionality, wherein the first functionality is associated with a first subsystem within an automobile control system or media player application, and wherein the second functionality is associated with a second subsystem within the automobile control system or media player application.
29. The apparatus of claim 20, wherein the detected engagement input comprises one of a plurality of engagement inputs, each of the plurality of engagement inputs corresponding to a respective one of the plurality of input interpretation contexts, and wherein the selection module is configured to select the input interpretation context corresponding to the detected engagement input.
30. The apparatus of claim 20, wherein the selected input interpretation context is globally defined.
31. The apparatus of claim 20, wherein detecting an engagement input comprises detecting an initial engagement input and a later engagement input, and wherein the detecting a later engagement input comprises using an input interpretation context selected based on the initial engagement input.
32. An apparatus comprising:
means for detecting an engagement input;
means for selecting an input interpretation context from amongst a plurality of input interpretation contexts, the selecting being based on the detected engagement input;
means for detecting a gesture input subsequent to the selecting an input interpretation context; and
means for executing a command based on the detected gesture input and the selected input interpretation context.
33. The apparatus of claim 32, wherein the means for detecting an engagement input comprise means for detecting an engagement pose being maintained for a threshold amount of time.
34. The apparatus of claim 33, wherein the engagement pose is a hand pose and wherein the means for selecting comprises means for selecting the input interpretation context independent of a position of the hand when the hand pose is detected.
35. The apparatus of claim 32, wherein the means for detecting an engagement input comprises means for detecting at least one of an engagement gesture, an occlusion of a sensor, or an audio engagement.
36. The apparatus of claim 32, further comprising:
means for providing feedback to a user of the apparatus in response to the selecting, wherein the feedback identifies the selected input interpretation context.
37. The apparatus of claim 32, wherein the detected engagement input comprises one of a plurality of engagement inputs, each of the plurality of engagement inputs corresponding to a respective one of the plurality of input interpretation contexts, and wherein the means for selecting comprises means for selecting the input interpretation context corresponding to the detected engagement input.
38. The apparatus of claim 32, wherein the means for selecting an input interpretation context comprise means for selecting an input interpretation context defined at an application level by an application that is in focus.
39. The apparatus of claim 32, wherein the means for detecting a gesture input comprise means for detecting a gesture input based on parameters associated with the selected input interpretation context.
40. The apparatus of claim 32, further comprising means for ignoring input irrelevant to detecting an engagement input prior to the means for detecting an engagement input detecting the engagement input.
41. The apparatus of claim 32,
wherein the means for detecting a gesture input comprises:
means for detecting a first engagement input associated with a first input interpretation context for controlling a first functionality of a system, and
means for detecting a second engagement input associated with a second input interpretation context for controlling a second functionality of the system different from the first functionality.
42. The apparatus of claim 32, wherein the means for selecting comprises means for selecting a globally defined input interpretation context.
43. The apparatus of claim 32, wherein the means for detecting an engagement input comprises means for detecting an initial engagement input and a later engagement input, and wherein means for detecting the later engagement input comprises means for using an input interpretation context associated with the initial engagement input to detect the later engagement input.
44. A non-transitory computer readable medium having instructions stored thereon, the instructions for causing an apparatus to:
detect an engagement input;
select, based on the detected engagement input, an input interpretation context from amongst a plurality of input interpretation contexts;
detect a gesture input subsequent to selection of the input interpretation context; and
execute a command based on the detected gesture input and the selected input interpretation context.
PCT/US2013/025971 2012-02-13 2013-02-13 Engagement-dependent gesture recognition WO2013123077A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP13707952.1A EP2815292A1 (en) 2012-02-13 2013-02-13 Engagement-dependent gesture recognition
CN201380008650.4A CN104115099A (en) 2012-02-13 2013-02-13 Engagement-dependent gesture recognition
JP2014556822A JP2015510197A (en) 2012-02-13 2013-02-13 Engagement-dependent gesture recognition
IN1753MUN2014 IN2014MN01753A (en) 2012-02-13 2014-09-01

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201261598280P 2012-02-13 2012-02-13
US61/598,280 2012-02-13
US13/765,668 2013-02-12
US13/765,668 US20130211843A1 (en) 2012-02-13 2013-02-12 Engagement-dependent gesture recognition

Publications (1)

Publication Number Publication Date
WO2013123077A1 true WO2013123077A1 (en) 2013-08-22

Family

ID=48946381

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2013/025971 WO2013123077A1 (en) 2012-02-13 2013-02-13 Engagement-dependent gesture recognition

Country Status (6)

Country Link
US (1) US20130211843A1 (en)
EP (1) EP2815292A1 (en)
JP (1) JP2015510197A (en)
CN (1) CN104115099A (en)
IN (1) IN2014MN01753A (en)
WO (1) WO2013123077A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10956025B2 (en) 2015-06-10 2021-03-23 Tencent Technology (Shenzhen) Company Limited Gesture control method, gesture control device and gesture control system
US11392213B2 (en) 2018-05-04 2022-07-19 Google Llc Selective detection of visual cues for automated assistants

Families Citing this family (97)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9857868B2 (en) 2011-03-19 2018-01-02 The Board Of Trustees Of The Leland Stanford Junior University Method and system for ergonomic touch-free interface
DE112012004769T5 (en) * 2011-11-16 2014-09-04 Flextronics Ap, Llc Configurable hardware unit for car systems
US9084058B2 (en) 2011-12-29 2015-07-14 Sonos, Inc. Sound field calibration using listener localization
US20140309878A1 (en) 2013-04-15 2014-10-16 Flextronics Ap, Llc Providing gesture control of associated vehicle functions across vehicle zones
WO2012126426A2 (en) * 2012-05-21 2012-09-27 华为技术有限公司 Method and device for contact-free control by hand gesture
US9690271B2 (en) 2012-06-28 2017-06-27 Sonos, Inc. Speaker calibration
US9106192B2 (en) 2012-06-28 2015-08-11 Sonos, Inc. System and method for device playback calibration
US9690539B2 (en) 2012-06-28 2017-06-27 Sonos, Inc. Speaker calibration user interface
US9219460B2 (en) 2014-03-17 2015-12-22 Sonos, Inc. Audio settings based on environment
US9668049B2 (en) 2012-06-28 2017-05-30 Sonos, Inc. Playback device calibration user interfaces
US9706323B2 (en) 2014-09-09 2017-07-11 Sonos, Inc. Playback device calibration
US10585530B2 (en) 2014-09-23 2020-03-10 Neonode Inc. Optical proximity sensor
JP2014086849A (en) * 2012-10-23 2014-05-12 Sony Corp Content acquisition device and program
US20140130116A1 (en) * 2012-11-05 2014-05-08 Microsoft Corporation Symbol gesture controls
US11157436B2 (en) 2012-11-20 2021-10-26 Samsung Electronics Company, Ltd. Services associated with wearable electronic device
US10185416B2 (en) * 2012-11-20 2019-01-22 Samsung Electronics Co., Ltd. User gesture input to wearable electronic device involving movement of device
US8994827B2 (en) 2012-11-20 2015-03-31 Samsung Electronics Co., Ltd Wearable electronic device
US10551928B2 (en) 2012-11-20 2020-02-04 Samsung Electronics Company, Ltd. GUI transitions on wearable electronic device
US10423214B2 (en) 2012-11-20 2019-09-24 Samsung Electronics Company, Ltd Delegating processing from wearable electronic device
US9477313B2 (en) 2012-11-20 2016-10-25 Samsung Electronics Co., Ltd. User gesture input to wearable electronic device involving outward-facing sensor of device
US11372536B2 (en) 2012-11-20 2022-06-28 Samsung Electronics Company, Ltd. Transition and interaction model for wearable electronic device
US11237719B2 (en) 2012-11-20 2022-02-01 Samsung Electronics Company, Ltd. Controlling remote electronic device with wearable electronic device
US9092665B2 (en) * 2013-01-30 2015-07-28 Aquifi, Inc Systems and methods for initializing motion tracking of human hands
US9129155B2 (en) 2013-01-30 2015-09-08 Aquifi, Inc. Systems and methods for initializing motion tracking of human hands using template matching within bounded regions determined using a depth map
US20170371492A1 (en) * 2013-03-14 2017-12-28 Rich IP Technology Inc. Software-defined sensing system capable of responding to cpu commands
WO2014145746A1 (en) 2013-03-15 2014-09-18 Sonos, Inc. Media playback system controller having multiple graphical interfaces
US9298266B2 (en) 2013-04-02 2016-03-29 Aquifi, Inc. Systems and methods for implementing three-dimensional (3D) gesture based graphical user interfaces (GUI) that incorporate gesture reactive interface objects
US10295338B2 (en) 2013-07-12 2019-05-21 Magic Leap, Inc. Method and system for generating map data from an image
US20150052430A1 (en) * 2013-08-13 2015-02-19 Dropbox, Inc. Gestures for selecting a subset of content items
US9804712B2 (en) 2013-08-23 2017-10-31 Blackberry Limited Contact-free interaction with an electronic device
US9582737B2 (en) * 2013-09-13 2017-02-28 Qualcomm Incorporated Context-sensitive gesture classification
KR20150087544A (en) * 2014-01-22 2015-07-30 엘지이노텍 주식회사 Gesture device, operating method thereof and vehicle having the same
US10691332B2 (en) 2014-02-28 2020-06-23 Samsung Electronics Company, Ltd. Text input on an interactive display
US9652044B2 (en) * 2014-03-04 2017-05-16 Microsoft Technology Licensing, Llc Proximity sensor-based interactions
US9264839B2 (en) 2014-03-17 2016-02-16 Sonos, Inc. Playback device configuration based on proximity detection
US9891794B2 (en) 2014-04-25 2018-02-13 Dropbox, Inc. Browsing and selecting content items based on user gestures
US10089346B2 (en) 2014-04-25 2018-10-02 Dropbox, Inc. Techniques for collapsing views of content items in a graphical user interface
US9519413B2 (en) 2014-07-01 2016-12-13 Sonos, Inc. Lock screen media playback control
GB201412268D0 (en) * 2014-07-10 2014-08-27 Elliptic Laboratories As Gesture control
US9910634B2 (en) 2014-09-09 2018-03-06 Sonos, Inc. Microphone calibration
US9891881B2 (en) 2014-09-09 2018-02-13 Sonos, Inc. Audio processing algorithm database
US9952825B2 (en) 2014-09-09 2018-04-24 Sonos, Inc. Audio processing algorithms
US10127006B2 (en) 2014-09-09 2018-11-13 Sonos, Inc. Facilitating calibration of an audio playback device
US10002005B2 (en) 2014-09-30 2018-06-19 Sonos, Inc. Displaying data related to media content
CN104281265B (en) * 2014-10-14 2017-06-16 京东方科技集团股份有限公司 A kind of control method of application program, device and electronic equipment
US20160156992A1 (en) 2014-12-01 2016-06-02 Sonos, Inc. Providing Information Associated with a Media Item
SG11201705579QA (en) * 2015-01-09 2017-08-30 Razer (Asia-Pacific) Pte Ltd Gesture recognition devices and gesture recognition methods
TWI552892B (en) * 2015-04-14 2016-10-11 鴻海精密工業股份有限公司 Control system and control method for vehicle
WO2016172593A1 (en) 2015-04-24 2016-10-27 Sonos, Inc. Playback device calibration user interfaces
US10664224B2 (en) 2015-04-24 2020-05-26 Sonos, Inc. Speaker calibration user interface
US9538305B2 (en) 2015-07-28 2017-01-03 Sonos, Inc. Calibration error conditions
US10488937B2 (en) * 2015-08-27 2019-11-26 Verily Life Sciences, LLC Doppler ultrasound probe for noninvasive tracking of tendon motion
US9693165B2 (en) 2015-09-17 2017-06-27 Sonos, Inc. Validation of audio calibration using multi-dimensional motion check
JP6437695B2 (en) 2015-09-17 2018-12-12 ソノズ インコーポレイテッド How to facilitate calibration of audio playback devices
US20180356945A1 (en) * 2015-11-24 2018-12-13 California Labs, Inc. Counter-top device and services for displaying, navigating, and sharing collections of media
US9743207B1 (en) 2016-01-18 2017-08-22 Sonos, Inc. Calibration using multiple recording devices
US11106423B2 (en) 2016-01-25 2021-08-31 Sonos, Inc. Evaluating calibration of a playback device
US10003899B2 (en) 2016-01-25 2018-06-19 Sonos, Inc. Calibration with particular locations
US9864574B2 (en) 2016-04-01 2018-01-09 Sonos, Inc. Playback device calibration based on representation spectral characteristics
US9860662B2 (en) 2016-04-01 2018-01-02 Sonos, Inc. Updating playback device configuration information based on calibration data
US9763018B1 (en) 2016-04-12 2017-09-12 Sonos, Inc. Calibration of audio playback devices
US20170351336A1 (en) * 2016-06-07 2017-12-07 Stmicroelectronics, Inc. Time of flight based gesture control devices, systems and methods
US10754161B2 (en) * 2016-07-12 2020-08-25 Mitsubishi Electric Corporation Apparatus control system
US9794710B1 (en) 2016-07-15 2017-10-17 Sonos, Inc. Spatial audio correction
US9860670B1 (en) 2016-07-15 2018-01-02 Sonos, Inc. Spectral correction using spatial calibration
US10372406B2 (en) 2016-07-22 2019-08-06 Sonos, Inc. Calibration interface
US10459684B2 (en) 2016-08-05 2019-10-29 Sonos, Inc. Calibration of a playback device based on an estimated frequency response
DE102016221564A1 (en) * 2016-10-13 2018-04-19 Bayerische Motoren Werke Aktiengesellschaft Multimodal dialogue in a motor vehicle
US10296586B2 (en) * 2016-12-23 2019-05-21 Soundhound, Inc. Predicting human behavior by machine learning of natural language interpretations
US10468022B2 (en) * 2017-04-03 2019-11-05 Motorola Mobility Llc Multi mode voice assistant for the hearing disabled
CN107422856A (en) * 2017-07-10 2017-12-01 上海小蚁科技有限公司 Method, apparatus and storage medium for machine processing user command
US10299061B1 (en) 2018-08-28 2019-05-21 Sonos, Inc. Playback device calibration
US11206484B2 (en) 2018-08-28 2021-12-21 Sonos, Inc. Passive speaker authentication
CN110297545B (en) * 2019-07-01 2021-02-05 京东方科技集团股份有限公司 Gesture control method, gesture control device and system, and storage medium
US10684686B1 (en) * 2019-07-01 2020-06-16 INTREEG, Inc. Dynamic command remapping for human-computer interface
US11868537B2 (en) * 2019-07-26 2024-01-09 Google Llc Robust radar-based gesture-recognition by user equipment
US10734965B1 (en) 2019-08-12 2020-08-04 Sonos, Inc. Audio calibration of a portable playback device
US11409364B2 (en) * 2019-09-13 2022-08-09 Facebook Technologies, Llc Interaction with artificial reality based on physical objects
KR20210034843A (en) * 2019-09-23 2021-03-31 삼성전자주식회사 Apparatus and method for controlling a vehicle
US11175730B2 (en) 2019-12-06 2021-11-16 Facebook Technologies, Llc Posture-based virtual space configurations
US11257280B1 (en) 2020-05-28 2022-02-22 Facebook Technologies, Llc Element-based switching of ray casting rules
US11418863B2 (en) 2020-06-25 2022-08-16 Damian A Lynch Combination shower rod and entertainment system
US11256336B2 (en) * 2020-06-29 2022-02-22 Facebook Technologies, Llc Integration of artificial reality interaction modes
US11178376B1 (en) 2020-09-04 2021-11-16 Facebook Technologies, Llc Metering for display modes in artificial reality
US11921931B2 (en) * 2020-12-17 2024-03-05 Huawei Technologies Co., Ltd. Methods and systems for multi-precision discrete control of a user interface control element of a gesture-controlled device
US20220229524A1 (en) * 2021-01-20 2022-07-21 Apple Inc. Methods for interacting with objects in an environment
US11294475B1 (en) 2021-02-08 2022-04-05 Facebook Technologies, Llc Artificial reality multi-modal input switching model
TWI773134B (en) * 2021-02-09 2022-08-01 圓展科技股份有限公司 Document image capturing device and control method thereof
US12112009B2 (en) 2021-04-13 2024-10-08 Apple Inc. Methods for providing an immersive experience in an environment
US11966515B2 (en) * 2021-12-23 2024-04-23 Verizon Patent And Licensing Inc. Gesture recognition systems and methods for facilitating touchless user interaction with a user interface of a computer system
US20230315208A1 (en) * 2022-04-04 2023-10-05 Snap Inc. Gesture-based application invocation
WO2024014182A1 (en) * 2022-07-13 2024-01-18 株式会社アイシン Vehicular gesture detection device and vehicular gesture detection method
US12112011B2 (en) 2022-09-16 2024-10-08 Apple Inc. System and method of application-based three-dimensional refinement in multi-user communication sessions
US12099653B2 (en) 2022-09-22 2024-09-24 Apple Inc. User interface response based on gaze-holding event assessment
US12108012B2 (en) 2023-02-27 2024-10-01 Apple Inc. System and method of managing spatial states and display modes in multi-user communication sessions
US12118200B1 (en) 2023-06-02 2024-10-15 Apple Inc. Fuzzy hit testing
US12099695B1 (en) 2023-06-04 2024-09-24 Apple Inc. Systems and methods of managing spatial groups in multi-user communication sessions

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110173574A1 (en) * 2010-01-08 2011-07-14 Microsoft Corporation In application gesture interpretation
US20110221666A1 (en) * 2009-11-24 2011-09-15 Not Yet Assigned Methods and Apparatus For Gesture Recognition Mode Control
US20110313768A1 (en) * 2010-06-18 2011-12-22 Christian Klein Compound gesture-speech commands

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5442376A (en) * 1992-10-26 1995-08-15 International Business Machines Corporation Handling multiple command recognition inputs in a multi-tasking graphical environment
US6438523B1 (en) * 1998-05-20 2002-08-20 John A. Oberteuffer Processing handwritten and hand-drawn input and speech input
JP2001216069A (en) * 2000-02-01 2001-08-10 Toshiba Corp Operation inputting device and direction detecting method
US8972902B2 (en) * 2008-08-22 2015-03-03 Northrop Grumman Systems Corporation Compound gesture recognition
JP2008146243A (en) * 2006-12-07 2008-06-26 Toshiba Corp Information processor, information processing method and program
US20090265671A1 (en) * 2008-04-21 2009-10-22 Invensense Mobile devices with motion gesture recognition
WO2009016607A2 (en) * 2007-08-01 2009-02-05 Nokia Corporation Apparatus, methods, and computer program products providing context-dependent gesture recognition
US9261979B2 (en) * 2007-08-20 2016-02-16 Qualcomm Incorporated Gesture-based mobile interaction
US9772689B2 (en) * 2008-03-04 2017-09-26 Qualcomm Incorporated Enhanced gesture-based image manipulation
CN102112945B (en) * 2008-06-18 2016-08-10 奥布隆工业有限公司 Control system based on attitude for vehicle interface
US7996793B2 (en) * 2009-01-30 2011-08-09 Microsoft Corporation Gesture recognizer system architecture
WO2010147600A2 (en) * 2009-06-19 2010-12-23 Hewlett-Packard Development Company, L, P. Qualified command
US8334842B2 (en) * 2010-01-15 2012-12-18 Microsoft Corporation Recognizing user intent in motion capture system
US9009594B2 (en) * 2010-06-10 2015-04-14 Microsoft Technology Licensing, Llc Content gestures
JP5685837B2 (en) * 2010-06-15 2015-03-18 ソニー株式会社 Gesture recognition device, gesture recognition method and program
WO2013022218A2 (en) * 2011-08-05 2013-02-14 Samsung Electronics Co., Ltd. Electronic apparatus and method for providing user interface thereof
US20130155237A1 (en) * 2011-12-16 2013-06-20 Microsoft Corporation Interacting with a mobile device within a vehicle using gestures
WO2013170383A1 (en) * 2012-05-16 2013-11-21 Xtreme Interactions Inc. System, device and method for processing interlaced multimodal user input


Also Published As

Publication number Publication date
CN104115099A (en) 2014-10-22
IN2014MN01753A (en) 2015-07-03
JP2015510197A (en) 2015-04-02
EP2815292A1 (en) 2014-12-24
US20130211843A1 (en) 2013-08-15

Similar Documents

Publication Publication Date Title
US20130211843A1 (en) Engagement-dependent gesture recognition
KR102230630B1 (en) Rapid gesture re-engagement
JP6158913B2 (en) Interact with devices using gestures
EP2766790B1 (en) Authenticated gesture recognition
US9773158B2 (en) Mobile device having face recognition function using additional component and method for controlling the mobile device
US9646200B2 (en) Fast pose detector
US10599823B2 (en) Systems and methods for coordinating applications with a user interface
US9377860B1 (en) Enabling gesture input for controlling a presentation of content
US20130144629A1 (en) System and method for continuous multimodal speech and gesture interaction
US20110221666A1 (en) Methods and Apparatus For Gesture Recognition Mode Control
KR20200075909A (en) Operation method and apparatus using fingerprint identification, and mobile terminal
JP2003131785A (en) Interface device, operation control method and program product
US20150077381A1 (en) Method and apparatus for controlling display of region in mobile device
KR101119896B1 (en) Mobile terminal and method for displaying object using distance and eyes sensing
US9405375B2 (en) Translation and scale invariant features for gesture recognition
CN112534390B (en) Electronic device for providing virtual input tool and method thereof
US11199906B1 (en) Global user input management
US20220350997A1 (en) Pointer-based content recognition using a head-mounted device
KR20140034666A (en) Control device based on non-motion signal and motion signal, and device control method thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13707952

Country of ref document: EP

Kind code of ref document: A1

DPE2 Request for preliminary examination filed before expiration of 19th month from priority date (pct application filed from 20040101)
ENP Entry into the national phase

Ref document number: 2014556822

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2013707952

Country of ref document: EP