US20120016641A1 - Efficient gesture processing - Google Patents

Efficient gesture processing

Info

Publication number
US20120016641A1
US20120016641A1 (application US12/835,079)
Authority
US
United States
Prior art keywords
gesture
data
algorithms
algorithm
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/835,079
Inventor
Giuseppe Raffa
Lama Nachman
Jinwon Lee
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tahoe Research Ltd
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to US12/835,079 priority Critical patent/US20120016641A1/en
Assigned to INTEL CORPORATION reassignment INTEL CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LEE, JINWON, NACHMAN, LAMA, RAFFA, GIUSEPPE
Priority to PCT/US2011/043319 priority patent/WO2012009214A2/en
Priority to CN201180034400.9A priority patent/CN102985897B/en
Priority to TW100124609A priority patent/TWI467418B/en
Publication of US20120016641A1 publication Critical patent/US20120016641A1/en
Priority to US14/205,210 priority patent/US9535506B2/en
Priority to US15/397,511 priority patent/US10353476B2/en
Assigned to TAHOE RESEARCH, LTD. reassignment TAHOE RESEARCH, LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: INTEL CORPORATION

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1633Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1684Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
    • G06F1/1694Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being a single or a set of motion sensors for pointer control or gesture input obtained by sensing movements of the portable computer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C19/00Gyroscopes; Turn-sensitive devices using vibrating masses; Turn-sensitive devices without moving masses; Measuring angular rate using gyroscopic effects
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01PMEASURING LINEAR OR ANGULAR SPEED, ACCELERATION, DECELERATION, OR SHOCK; INDICATING PRESENCE, ABSENCE, OR DIRECTION, OF MOVEMENT
    • G01P15/00Measuring acceleration; Measuring deceleration; Measuring shock, i.e. sudden change of acceleration
    • G01P15/18Measuring acceleration; Measuring deceleration; Measuring shock, i.e. sudden change of acceleration in two or more dimensions
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2250/00Details of telephonic subscriber devices
    • H04M2250/12Details of telephonic subscriber devices including a sensor for measuring a physical value, e.g. temperature or motion

Definitions

  • Embodiments of the invention generally pertain to electronic devices, and more particularly, to gesture recognition systems.
  • Gesture interfaces based on inertial sensors such as accelerometers and gyroscopes embedded in small form factor devices are becoming increasingly common in user devices such as smart phones, remote controllers and game consoles.
  • gesture interaction is an attractive alternative to traditional interfaces because it is not constrained by the shrinking form factors of traditional input devices such as keyboards, mice and screens.
  • gesture interaction is more supportive of mobility, as users can easily do subtle gestures as they walk around or drive.
  • “Dynamic 3D gestures” are based on atomic movements of a user using inertial sensors such as micro-electromechanical system (MEMS) based accelerometers and gyroscopes.
  • Statistical recognition algorithms such as Hidden Markov Model algorithms (HMM) are widely used for gesture and speech recognition and many other machine learning tasks. Research has shown HMM to be extremely effective for recognizing complex gestures and enabling rich gesture input vocabularies.
  • HMM is computationally demanding (e.g., O(num_of_samples * HMM_num_states^2)). Furthermore, to obtain highly accurate results, continuous Gaussian Mixtures are usually employed in HMM's output probabilities, whose probability density function evaluation is computationally expensive. Matching an incoming signal with several models (typically one per trained gesture) for finding the best match (e.g. using Viterbi decoding in HMM) is also computationally intensive.
  • gestures should be easy to use. Common techniques based on push/release buttons for gesture spotting should be avoided. Inexact interaction based only on shake/whack gestures limits the user experience. Finally, using a simple and easily recognizable gesture to trigger gesture recognition would be cumbersome in complex and sustained gesture-based user interactions.
  • gesture interfaces also typically choose one single algorithm to recognize all the gestures, based on the type of expected user gestures. For example, dynamic movement tracking is typically employed by smart-phone applications, while continuous tracking may be used in motion detection gaming consoles. Thus, gesture recognition devices are typically configured to recognize and process only a specific type of gesture.
  • FIG. 1A is a flow diagram of a process utilizing an embodiment of the invention.
  • FIG. 1B is an example sensor data stream.
  • FIG. 2 is a block diagram of an embodiment of the invention.
  • FIG. 3 is a flow diagram describing an embodiment of the invention.
  • FIG. 4 is a diagram of time-domain signal characteristics that may be used by an embodiment of the invention.
  • FIG. 5 is a high level architecture of a system according to one embodiment of the invention.
  • Embodiments of the invention describe a system to efficiently execute gesture recognition algorithms. Embodiments of the invention further describe a system to accommodate many types of algorithms depending on the type of gesture that is needed in any particular situation. Examples of recognition algorithms include but are not limited to, HMM for complex dynamic gestures (e.g. write a number in the air), Decision Trees (DT) for static poses, peak detection for coarse shake/whack gestures or inertial methods (INS) for pitch/roll detection.
  • embodiments of the invention describe a power efficient staged gesture recognition pipeline including multimodal interaction detection, context based optimized recognition, and context based optimized training and continuous learning.
  • low-accuracy low-computation stages are executed via a low-power sensing unit (LPSU) continuously analyzing a device sensor's data stream.
  • LPSU may be physically attached to a main mobile device (e.g. a sensor subsystem) or included in a peripheral device (e.g. a wrist watch) and wirelessly connected.
  • When a possible gesture-like signal is coarsely recognized, an event can wake up a main processor unit (MPU) to perform computationally intensive stages (e.g. feature extraction, normalization and statistical analysis of the data stream using HMM).
  • Embodiments of the invention may further reduce unnecessary invocations of gesture recognition algorithms by leveraging user context as well as simple/easy-to-detect gestures to determine time periods in which gesture interaction may be performed by the user. For example, if a phone call comes in to a mobile device utilizing an embodiment of the invention, specific gestures may be enabled to “reject”, “answer”, or “transfer” the call. In another embodiment, if the user is in physical proximity of a friend, gestures will be enabled to “send” and “receive” contact information. Simple/easy-to-detect gestures (such as a “shake”) may also be used as a signaling mechanism for starting gesture recognition of enabled gestures.
  • gesture recognition models may be loaded based only on enabled gestures. It is to be understood that selectively loading specific gesture recognition models diminishes false positives, as it enables only a subset of the available gestures and not an entire input vocabulary.
  • a filler model for rejecting spurious gestures may be constructed and based on the gestures not used, enhancing the precision of the system. Real time requirements may not allow a filler model to be generated on the fly, thus the needed filler model may be pre-compiled in advance according to the possible contexts of use. As the number of gestures is finite, all the possible combinations of gestures may be potentially pre-compiled as filler models. If only a subset of combinations is used for specific context-based interactions (e.g. two specific sets of gestures for phone calls and social interactions), only those specific combinations will be used to pre-compile the needed filler models.
  • a gesture recognition system implementing an embodiment of the invention may further utilize context and activity information, if available in the system, to optimize training and recognition.
  • Algorithms such as HMM typically rely on annotated training samples in order to generate the models with well-known algorithms (such as Baum-Welch). Gestures are heavily dependent on several factors such as user posture, movement noise and physical activity. Differences in those factors are hard to eliminate by using only mathematical or statistical tools.
  • embodiments of the invention may further utilize a “tag” for each gesture's training sample. These tags may identify not only the type of gesture (e.g. “EarTouch”) but also the activity during which it was performed (e.g. “in train” or “walking”). In this way, the training procedure will produce a separate model for each gesture/activity pair instead of each gesture.
  • the context information will be used to choose the correct gesture/activity models in the same way as in training mode.
  • an easy-to-use continuous learning module is used to collect enough data in order to make a system's HMM models reliable and to account for a user's gesture changes over time.
  • the continuous learning module may employ a two-gestures confirm/ignore notification. For example, right after a gesture is performed, the user may indicate that the gesture is suitable to be included in the training set (or not) by performing simple always detectable gestures (e.g. two poses of the hand or whack gestures). Hence the new training sample data along with the detected activity are used to create new gesture/activity models or enhance existing ones.
  • gesture recognition may be performed with a high degree of accuracy in a power efficient manner.
  • FIG. 1A is a flow diagram of a process utilizing an embodiment of the invention.
  • Flow diagrams as illustrated herein provide examples of sequences of various process actions. Although shown in a particular sequence or order, unless otherwise specified, the order of the actions can be modified. Thus, the illustrated implementations should be understood only as examples, and the illustrated processes can be performed in a different order, and some actions may be performed in parallel. Additionally, one or more actions can be omitted in various embodiments of the invention; thus, not all actions are required in every implementation. Other process flows are possible.
  • Data is collected from at least one sensor (e.g., a 3D accelerometer or gyroscope), 100 .
  • the sensor is separate from a mobile processing device, and communicates the data via wireless protocols known in the art (e.g., WiFi, Bluetooth).
  • the sensor is included in the mobile processing device.
  • the data from the sensor indicates a motion from a user.
  • User context may also be retrieved from the mobile device, 105 .
  • User context may identify, for example, an application the mobile device is running or location of the device/user.
  • the user device may then access a database that associates context and activity information inputs with the gestures that may be allowed at any point in time and which algorithms may be used to detect these gestures.
  • user context is used as a filter for enabling a subset of gestures (e.g. “Eartouch” when a mobile device is executing a phone application).
  • the user activity may further enable the choice of the right models during recognition (e.g. the “Eartouch” model that is tagged with “walking” as activity).
  • the frequency of context and activity updates may be relatively low, as it corresponds with the user's context change events in daily life.
  • the entire gesture recognition processing pipeline may be enabled, 112, when it is determined that one or more gestures may be performed given the user context (e.g., the user is using the mobile device as a phone, and thus gestures are enabled), and/or a simple/easy-to-detect gesture (e.g. a shake of a wristwatch or a whack gesture on the device) has been performed by the user, 110. Otherwise, sensor data is discarded, 111.
  • a Finite State Automata can be programmed with the desired behavior.
  • Embodiments of the invention may further perform a segmentation of the sensor data in intervals based on the energy levels of the data, 115 .
  • This segmentation may be “button-less” in that no user input is required to segment the sensor data into a “movement window.”
  • Proper hysteresis may be used to smooth out high frequency variations of the energy value.
  • energy may be measured by evaluating a sensor's standard deviation over a moving window. Data occurring outside the “movement window” is discarded, 111, while data occurring within the movement window is subsequently processed.
  • a low-computation Template Matching is executed by comparing characteristics of the current stream to be analyzed (e.g. signal duration, overall energy, minimum and maximum values for signal duration and energy levels) to a single template obtained from all training samples of “allowed gestures”, 120 . In this way, for example, abnormally long or low-energy gestures will be discarded in the beginning of the pipeline without running computationally expensive HMM algorithms on an MPU.
  • “allowed gestures” are further based on training samples and “tags” for each training sample identifying the appropriate user context for the gesture. For example, a user may be executing an application (e.g., a game) that only enables specific “shake” type gestures. Therefore, movements that do not exhibit similar signal characteristics (i.e., high maximum energy values) are discarded, as these movements are not enabled given the user context.
  • decisions 110, 115 and 120 may be determined via low-complexity algorithms as described in the examples above, and operations 100-120 may be performed by a low power processing unit.
  • embodiments of the invention may enable continuous sensor data processing while duty-cycling the main processor. If the current signal matches at least one of the templates then the gesture's signal is “passed” to the main processing unit (waking up the main processor if necessary), 125 . Otherwise, the signal is discarded, 111 .
  • the workload associated with gesture recognition processing is balanced between a low power processing unit and main processor.
  • FIG. 1B is an example sensor data stream. Assuming user context allows for gestures (as described in operation 110 ), interaction 1000 is segmented into three data segments (as described in operation 115 )—potential gestures 1100 , 1200 and 1300 . In this example, potential gesture 1200 is abnormally long and thus discarded (as described in operation 120 ). Potential gestures 1100 and 1300 are passed to the MPU providing they match an allowed gesture template (as described in operation 125 ).
  • Normalization and Feature extraction may be performed on the passed gesture signal, if needed by the appropriate gesture algorithm (e.g., HMM) 130. In another embodiment, this operation may also be performed via an LPSU if the computation requirements allow. Normalization procedures may include, for example, re-sampling, amplitude normalization and average removal (for accelerometer) for tilt correction. Filtering may include, for example, an Exponential Moving Average low-pass filter.
  • Embodiments of the invention may further take as input the user context from 105 and produce as output a model gesture data set, 135 .
  • one model for each allowed gesture plus a Filler model for filtering out spurious gestures not in the input vocabulary may be provided. If context is not available, all the gestures will be allowed in the current HMM “grammar”.
  • Filler models may be constructed utilizing the entire sample set or “garbage” gestures that are not in the set of recognized gestures.
  • An embodiment may utilize only the “not allowed” gestures (that is, the entire gesture vocabulary minus the allowed gestures) to create a Filler model that is optimized for a particular situation (it is optimized because it does not contain the allowed gestures). For example, if the entire gesture set is A-Z gestures and one particular interaction allows only A-D gestures, then E-Z gestures will be used to build the Filler model. Training a Filler model in real time may not be feasible if a system has a low latency requirement, hence the set of possible contexts may be enumerated and the associated Filler models pre-computed and stored. If not possible (e.g. all gestures are possible), a default Filler model may be used.
  • a gesture recognition result is produced from the sensor data using the model gesture and Filler algorithms, 140 .
  • Template Matching may further be performed in order to further alleviate false positives on gestures that are performed by the user but are not in the current input vocabulary of allowed gestures, 145.
  • processing will be executed to match the recognized gesture's data stream measurements (e.g. duration, energy) against the stored Template of the candidate gesture (obtained from training data) and not against the entire set of allowed gestures as in operation 120. If the candidate gesture's measurements match the Template, a gesture event is triggered to an upper layer system (e.g., an Operating System (OS)), 150. Otherwise, the gesture is discarded, 155.
  • it is assumed that a rejection during this portion of processing indicates the user was in fact attempting to gesture an input command to the system; therefore, the user is notified of the rejection of said gesture.
  • Embodiments of the invention may further enable support of multiple gesture detection algorithms.
  • Systems may require support for multiple gesture algorithms because a single gesture recognition algorithm may not be adequately accurate across different types of gestures.
  • gestures may be clustered into multiple types including dynamic gestures (e.g. write a letter in the air), static poses (e.g. hold your hand face up) and shake/whack gestures.
  • For each of these gesture types, there are specific recognition algorithms that work best for that type. Thus, a mechanism is needed to select the appropriate algorithm. Running all algorithms in parallel and selecting the “best output” based on some metric is clearly not computationally efficient, especially with algorithms like HMM, which tend to be computationally intensive.
  • embodiments of the invention may incorporate a selector system to preselect an appropriate gesture recognition algorithm in real-time based on features of the sensor data and the user's context.
  • the selector system may include a two-stage recognizer selector that decides which algorithm may run at any given time based on signal characteristics.
  • the first stage may perform a best-effort selection of one or more algorithms based on signal characteristics that can be measured before the complete gesture's raw data segment is available. For example it can base its selection on the instantaneous energy magnitude, spikes in the signal or time duration of the signal.
  • the first stage may compare these features against a template matching database and enable the algorithms whose training gestures' signal characteristics match the input signal's characteristics.
  • each algorithm When enabled, each algorithm identifies candidate gestures in the raw data stream.
  • a gesture's data stream is shorter than the entire period of time the algorithm has been enabled; furthermore, the algorithm may identify multiple gestures (i.e. multiple “shakes” gestures or a series of poses) in the entire time window.
  • Each enabled algorithm may perform an internal segmentation of the raw data stream by determining gestures' end points (e.g. HMM) or finding specific patterns in the signal (e.g. peak detection). Therefore some signal characteristics (such as its spectral characteristic or total energy content) may be analyzed only after a gesture has been tentatively recognized and its associated data stream is available.
  • the second stage may analyze the data streams associated with each candidate gesture, compare calculated features (e.g., spectral content, energy content) against a Template Matching database and choose the best match among the algorithms, providing as output the recognized gesture.
  • FIG. 2 is a block diagram of an embodiment of the invention.
  • RSPre 210, upstream with respect to the Gesture Recognizers 220-250, is fed in real time by a raw data stream of sensor 290.
  • RSPre enables one or more of Gesture Recognizers 220 - 250 based on measures obtained from the raw signals of sensor 290 and allowed algorithms based on the user context.
  • User Context Filter (UCF) 200 retrieves algorithms mapped to context via database 205 . Templates of signals for any algorithm may be obtained from Template Matching Database 215 and a Template Matching procedure may be performed; hence only the subset of Gesture Recognizers 220 - 250 that match the signal characteristics coming in will be enabled.
  • a template matching operation will produce a similarity measure for each algorithm and the first N-best algorithms will be chosen and activated if the similarity satisfies a predefined Similarity Threshold.
  • User Context Filter (UCF) 200 keeps track of current user context such as location, social context and physical activity, system and applications events (e.g. a phone call comes in). UCF 200 keeps track of allowed gestures given the context and updates RSPre 210 in real time with the algorithms needed to recognize the allowed Gesture Recognizers. UCF 200 uses a Gestures-to-Algorithms Mapping database 205 that contains the unique mapping from each gesture ID to the Algorithm used. For example, gestures “0” to “9” (waving the hand in the air) may be statically mapped in database 205 to HMM (used by recognizer 220 ) while poses such as “hand palm down/up” may be mapped to Decision Tree (used by recognizer 230 ).
  • UCF 200 is fed by external applications that inform which gestures are currently meaningful for the actual user context. For example, if a phone application is active, “0” to “9” gestures will be activated and UCF 200 will activate only HMM.
  • the output of UCF 200 (algorithms allowed) is used by RSPre 210 . This filter reduces false positives when a gesture “out of context” is being made by the user and detected by sensor 290 .
  • RSPre 210 provides appropriate hysteresis mechanisms in order to segment the data stream from sensor 290 in meaningful segments, for example using a Finite State Automata with transitions based on the similarity of thresholds between data from sensor 290 and the templates of database 215 .
  • RSPost 260 is downstream to Gesture Recognizers 220 - 250 and is fed in real time by the recognized gesture events plus the raw data stream from sensor 290 . In case more than one gesture is recognized as candidate in the same time interval, RSPost 260 will perform a Template Matching (accessing templates in database 265 ) and will output the most probable recognized gesture. RSPost 260 provides appropriate heuristics mechanisms in order to choose a single gesture if the Template Matching outputs more than one gesture ID. For example, a similarity measure may be generated from the Template Matching algorithm for each matching algorithm and the best match will be chosen.
  • Database 265 contains the signal “templates” (e.g. min-max values of energy level or signal Fast Fourier Transformation (FFT) characteristics) for each of Gesture Recognizers 220 - 250 .
  • FIG. 3 is a flow diagram describing an embodiment of the invention.
  • there are four algorithms present in a system (HMM 310, Decision Trees 320, Peak Detection 330 and Pitch/Roll Inertial 340).
  • User context is analyzed to determine suitable algorithms to consider for sensor data, 350 .
  • user context eliminates Pitch/Roll Inertial 340 from being a suitable algorithm to process any incoming signal from system sensors.
  • the incoming signal is analyzed (via RSPre) to enable some of the remaining algorithms present in the system, 360 .
  • RSPre enables HMM 310 and Peak Detection 330 to run. These two algorithms run in parallel and the results are analyzed, via RSPost, to determine the proper algorithm to use (if more than one is enabled via RSPre) and the gesture from the incoming signal, 370.
  • RSPost chooses HMM 310 along with the gesture recognized by HMM.
  • Template Matching algorithms used by RSPre and RSPost may utilize, for example, time duration, energy magnitude and frequency spectrum characteristics of sensor data.
  • RSPre analyzes the incoming signal using time duration or energy magnitude characteristics of the incoming signal, while RSPost analyzes the incoming signal using frequency spectrum characteristics of the incoming signal.
  • FIG. 4 is a diagram of time-domain signal characteristics that may be used in the Template Matching algorithms used in RSPre and RSPost, such as a running average of movement energy (here represented by the Standard Deviation of the signal) or magnitude, to decide whether, for example, a pose (segments 410, 430 and 450), a dynamic gesture (segment 420) or a shake (segment 440) is being performed, accordingly segmenting the data stream into stationary, dynamic or high-energy intervals.
  • a Decision Tree algorithm will be enabled as the algorithm of choice for static “poses” 410 , 430 and 450 .
  • a statistical HMM algorithm will be enabled, as the state-of-the-art algorithm for dynamic gestures.
  • a Peak Detection algorithm will be enabled.
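  • A minimal sketch of this kind of energy-based algorithm selection is shown below; it is illustrative only, and the two thresholds are invented values rather than parameters from the disclosure:

```python
# Illustrative sketch: the running standard deviation of a signal window
# classifies an interval as stationary, dynamic or high-energy, which in turn
# selects Decision Tree, HMM or Peak Detection. Thresholds are made up.
from statistics import pstdev

STATIONARY_THR, HIGH_ENERGY_THR = 0.1, 1.2   # hypothetical values

def select_algorithm(window):
    energy = pstdev(window)
    if energy < STATIONARY_THR:
        return "DecisionTree"      # static pose
    if energy > HIGH_ENERGY_THR:
        return "PeakDetection"     # shake/whack gesture
    return "HMM"                   # dynamic gesture

print(select_algorithm([0.02, 0.03, 0.01, 0.02]))    # DecisionTree
print(select_algorithm([0.5, -0.4, 0.6, -0.5]))      # HMM
print(select_algorithm([2.0, -2.0, 2.2, -1.9]))      # PeakDetection
```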
  • Template Matching algorithms used by RSPre and RSPost may rely, for example, on min-max comparison of features, calculated over a sliding window of the signal(s), such as mean, standard deviation and spectral components energy.
  • the Template Matching algorithms may be applied to each signal separately or to a combined measure derived from the signals.
  • a “movement magnitude” measure may be derived from a 3D accelerometer.
  • the templates may be generated using the training data.
  • all the training data for HMM-based gestures may provide the min-max values and spectral content for the HMM algorithm for the X, Y and Z axes and overall magnitude, if an accelerometer is used to recognize gestures.
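  • One way such min-max templates might be derived from training data is sketched below; the feature choice, the invented accelerometer samples and the magnitude-only simplification (the same idea extends to per-axis signals) are assumptions for illustration:

```python
# Illustrative sketch: templates generated from training data as min-max bounds
# of simple features (mean, standard deviation) computed on the overall
# movement magnitude derived from 3D accelerometer readings.
from math import sqrt
from statistics import pstdev

def magnitude(sample):               # sample: list of (x, y, z) readings
    return [sqrt(x * x + y * y + z * z) for x, y, z in sample]

def feature(signal):
    return {"mean": sum(signal) / len(signal), "std": pstdev(signal)}

def build_template(training_samples):
    """Min-max bounds of per-sample features across all training samples."""
    feats = [feature(magnitude(s)) for s in training_samples]
    return {k: (min(f[k] for f in feats), max(f[k] for f in feats))
            for k in ("mean", "std")}

train = [[(0.1, 0.2, 9.8), (0.4, 0.1, 9.6)], [(0.2, 0.3, 9.7), (0.5, 0.2, 9.5)]]
print(build_template(train))
```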
  • Context may also be used to constrain the choice to a subset of the possible gestures and algorithms by indicating either allowed or disallowed gestures.
  • an application may define two different gestures recognized by two different algorithms for rejecting or accepting a call: a pose “hand palm up” for rejecting the incoming call and a movement towards the user's ear for accepting the call.
  • UCF will enable the Decision Tree and the HMM as the only two algorithms needed for recognizing the allowed gestures. Accordingly, RSPre and RSPost will compute Template Matching only on this subset of algorithms.
  • FIG. 5 shows a high level architecture of a system according to one embodiment of the invention.
  • System 500 is a scalable and generic system, able to discriminate from dynamic “high energy” gestures down to static poses. Gesture processing as described above is performed in real-time, depending on signal characteristics and user context.
  • System 500 includes sensors 550, communication unit 530, memory 520 and processing unit 510, each of which is operatively coupled via system bus 540. It is to be understood that each component of system 500 may be included in a single device or in multiple devices.
  • system 500 utilizes gesture processing modules 525 that include the functionality described above.
  • Gesture processing modules 525 are included in a storage area of memory 520 , and are executed via processing unit 510 .
  • processing unit 510 includes a low-processing sub-unit and a main processing sub-unit, each to execute specific gesture processing modules as described above.
  • Sensors 550 may communicate data to gesture processing modules 525 via the communications unit 530 in a wired and/or wireless manner.
  • wired communication means may include, without limitation, a wire, cable, bus, printed circuit board (PCB), Ethernet connection, backplane, switch fabric, semiconductor material, twisted-pair wire, co-axial cable, fiber optic connection, and so forth.
  • wireless communication means may include, without limitation, a radio channel, satellite channel, television channel, broadcast channel, infrared channel, radio-frequency (RF) channel, Wireless Fidelity (WiFi) channel, a portion of the RF spectrum, and/or one or more licensed or license-free frequency bands.
  • Sensors 550 may include any device that provides three dimensional readings (along the x, y, and z axes) for measuring linear acceleration and sensor orientation (e.g., an accelerometer).
  • Each component described herein includes software or hardware, or a combination of these.
  • the components can be implemented as software modules, hardware modules, special-purpose hardware (e.g., application specific hardware, ASICs, DSPs, etc.), embedded controllers, hardwired circuitry, etc.
  • Software content (e.g., data, instructions, and configuration) may be provided via a computer readable storage medium; when executed, the content may result in a computer performing various functions/operations described herein.
  • a computer readable storage medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form accessible by a computer (e.g., computing device, electronic system, etc.), such as recordable/non-recordable media (e.g., read only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, etc.).
  • the content may be directly executable (“object” or “executable” form), source code, or difference code (“delta” or “patch” code).
  • a computer readable storage medium may also include a storage or database from which content can be downloaded.
  • a computer readable medium may also include a device or product having content stored thereon at a time of sale or delivery. Thus, delivering a device with stored content, or offering content for download over a communication medium may be understood as providing an article of manufacture with such content described herein.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the invention describe a system to efficiently execute gesture recognition algorithms. Embodiments of the invention describe a power efficient staged gesture recognition pipeline including multimodal interaction detection, context based optimized recognition, and context based optimized training and continuous learning. Embodiments of the invention further describe a system to accommodate many types of algorithms depending on the type of gesture that is needed in any particular situation. Examples of recognition algorithms include but are not limited to, HMM for complex dynamic gestures (e.g. write a number in the air), Decision Trees (DT) for static poses, peak detection for coarse shake/whack gestures or inertial methods (INS) for pitch/roll detection.

Description

    FIELD
  • Embodiments of the invention generally pertain to electronic devices, and more particularly, to gesture recognition systems.
  • BACKGROUND
  • Gesture interfaces based on inertial sensors such as accelerometers and gyroscopes embedded in small form factor devices (e.g. a sensor-enabled handheld device or wrist-watch) are becoming increasingly common in user devices such as smart phones, remote controllers and game consoles.
  • In the mobile space, gesture interaction is an attractive alternative to traditional interfaces because it is not constrained by the shrinking form factors of traditional input devices such as keyboards, mice and screens. In addition, gesture interaction is more supportive of mobility, as users can easily do subtle gestures as they walk around or drive.
  • “Dynamic 3D gestures” are based on atomic movements of a user using inertial sensors such as micro-electromechanical system (MEMS) based accelerometers and gyroscopes. Statistical recognition algorithms, such as Hidden Markov Model algorithms (HMM), are widely used for gesture and speech recognition and many other machine learning tasks. Research has shown HMM to be extremely effective for recognizing complex gestures and enabling rich gesture input vocabularies.
  • Several challenges arise when using HMM for gesture recognition in mobile devices. HMM is computationally demanding (e.g., O(num_of_samples * HMM_num_states^2)). Furthermore, to obtain highly accurate results, continuous Gaussian Mixtures are usually employed in HMM's output probabilities, whose probability density function evaluation is computationally expensive. Matching an incoming signal with several models (typically one per trained gesture) for finding the best match (e.g. using Viterbi decoding in HMM) is also computationally intensive.
  • Low latency requirements of mobile devices pose a problem in real time gesture recognition on resource constrained devices, especially when using techniques for improving accuracy, e.g. changing gesture “grammar” or statistical models on the fly.
  • Additionally, for a high level of usability, gestures should be easy to use. Common techniques based on push/release buttons for gesture spotting should be avoided. Inexact interaction based only on shake/whack gestures limits the user experience. Finally, using a simple and easily recognizable gesture to trigger gesture recognition would be cumbersome in complex and sustained gesture-based user interactions.
  • A straightforward approach to mitigate these issues would be to run continuous HMM (CHMM) for gesture spotting and recognition. However, this will trigger many false positives and is not efficient with regard to power consumption and processing.
  • Current gesture interfaces also typically choose one single algorithm to recognize all the gestures, based on the type of expected user gestures. For example, dynamic movement tracking is typically employed by smart-phone applications, while continuous tracking may be used in motion detection gaming consoles. Thus, gesture recognition devices are typically configured to recognize and process only a specific type of gesture.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The following description includes discussion of figures having illustrations given by way of example of implementations of embodiments of the invention. The drawings should be understood by way of example, and not by way of limitation. As used herein, references to one or more “embodiments” are to be understood as describing a particular feature, structure, or characteristic included in at least one implementation of the invention. Thus, phrases such as “in one embodiment” or “in an alternate embodiment” appearing herein describe various embodiments and implementations of the invention, and do not necessarily all refer to the same embodiment. However, they are also not necessarily mutually exclusive.
  • FIG. 1A is a flow diagram of a process utilizing an embodiment of the invention.
  • FIG. 1B is an example sensor data stream.
  • FIG. 2 is a block diagram of an embodiment of the invention.
  • FIG. 3 is a flow diagram describing an embodiment of the invention.
  • FIG. 4 is a diagram of time-domain signal characteristics that may be used by an embodiment of the invention.
  • FIG. 5 is a high level architecture of a system according to one embodiment of the invention.
  • Descriptions of certain details and implementations follow, including a description of the figures, which may depict some or all of the embodiments described below, as well as discussing other potential embodiments or implementations of the inventive concepts presented herein. An overview of embodiments of the invention is provided below, followed by a more detailed description with reference to the drawings.
  • DETAILED DESCRIPTION
  • Embodiments of the invention describe a system to efficiently execute gesture recognition algorithms. Embodiments of the invention further describe a system to accommodate many types of algorithms depending on the type of gesture that is needed in any particular situation. Examples of recognition algorithms include but are not limited to, HMM for complex dynamic gestures (e.g. write a number in the air), Decision Trees (DT) for static poses, peak detection for coarse shake/whack gestures or inertial methods (INS) for pitch/roll detection.
  • Statistical recognition algorithms, such as Hidden Markov Model algorithms (HMM), are widely used for gesture and speech recognition and many other machine learning tasks. These algorithms tend to be resource (e.g., computational resources, bandwidth) intensive. Continuously running HMM algorithms is inefficient in most gesture recognition scenarios, where significant portions of sensor data captured are not related to gesture movements. Furthermore, continuously running gesture recognition algorithms may trigger false positives for non-gesture movements made while using a device (e.g., a user's hand movements while having a conversation are typically not done to signal a device to execute a command).
  • Solutions to reduce the resource use of gesture recognition algorithms include scaling down the implementation of these algorithms; however this also leads to a reduction in gesture recognition accuracy and thus eliminates the possibility of allowing the user to employ a rich gesture vocabulary with a device.
  • Other solutions allow processing for a static (i.e., pre-determined) set of gestures that are used as vocabulary during gesture training and recognition. This solution eliminates the possibility of a rich mobile experience by not allowing the use of different gestures at different times (e.g. in different contexts or locations or activities).
  • To provide for efficient gesture recognition in devices, without the effect of limiting possible gesture inputs, embodiments of the invention describe a power efficient staged gesture recognition pipeline including multimodal interaction detection, context based optimized recognition, and context based optimized training and continuous learning.
  • It is to be understood that designing a gesture recognition system using a pipeline of computational stages, each stage of increasing complexity, improves the computation and power efficiency of the system. In one embodiment, low-accuracy low-computation stages are executed via a low-power sensing unit (LPSU) continuously analyzing a device sensor's data stream. LPSU may be physically attached to a main mobile device (e.g. a sensor subsystem) or included in a peripheral device (e.g. a wrist watch) and wirelessly connected. When a possible gesture-like signal is coarsely recognized, an event can wake up a main processor unit (MPU) to perform computationally intensive stages (e.g. feature extraction, normalization and statistical analysis of the data stream using HMM).
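  • A minimal sketch of this staged gating is given below; it is illustrative only, and the threshold value, window contents and stand-in recognizer are assumptions rather than anything specified in the disclosure:

```python
# Illustrative sketch (not from the patent text): a low-power stage gates the
# sensor stream on context and signal energy, and only then "wakes" the main
# stage, which here stands in for the expensive recognizer (e.g. HMM decoding).
from statistics import pstdev

ENERGY_THRESHOLD = 0.5   # hypothetical activity threshold on windowed std-dev

def lpsu_gate(window, gestures_enabled):
    """Cheap always-on check: context gate plus energy gate."""
    if not gestures_enabled:
        return False
    return pstdev(window) >= ENERGY_THRESHOLD   # crude movement-energy test

def mpu_recognize(window):
    """Placeholder for the computationally expensive recognition stage."""
    return "gesture-candidate" if max(window) - min(window) > 1.0 else None

def process(window, gestures_enabled):
    if not lpsu_gate(window, gestures_enabled):
        return None                 # data discarded without waking the MPU
    return mpu_recognize(window)    # MPU involved only for plausible segments

# Example: a flat window is rejected cheaply; an energetic one reaches the MPU.
print(process([0.0, 0.01, 0.0, 0.02], gestures_enabled=True))   # None
print(process([0.0, 1.5, -1.2, 0.8], gestures_enabled=True))    # "gesture-candidate"
```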
  • Embodiments of the invention may further reduce unnecessary invocations of gesture recognition algorithms by leveraging user context as well as simple/easy-to-detect gestures to determine time periods in which gesture interaction may be performed by the user. For example, if a phone call comes in to a mobile device utilizing an embodiment of the invention, specific gestures may be enabled to “reject”, “answer”, or “transfer” the call. In another embodiment, if the user is in physical proximity of a friend, gestures will be enabled to “send” and “receive” contact information. Simple/easy-to-detect gestures (such as a “shake”) may also be used as a signaling mechanism for starting gesture recognition of enabled gestures.
  • In one embodiment, as gesture interaction is confirmed and relative context is detected, gesture recognition models may be loaded based only on enabled gestures. It is to be understood that selectively loading specific gesture recognition models diminishes false positives, as it enables only a subset of the available gestures and not an entire input vocabulary. In addition, a filler model for rejecting spurious gestures may be constructed and based on the gestures not used, enhancing the precision of the system. Real time requirements may not allow a filler model to be generated on the fly, thus the needed filler model may be pre-compiled in advance according to the possible contexts of use. As the number of gestures is finite, all the possible combinations of gestures may be potentially pre-compiled as filler models. If only a subset of combinations is used for specific context-based interactions (e.g. two specific sets of gestures for phone calls and social interactions), only those specific combinations will be used to pre-compile the needed filler models.
  • A gesture recognition system implementing an embodiment of the invention may further utilize context and activity information, if available in the system, to optimize training and recognition. Algorithms such as HMM typically rely on annotated training samples in order to generate the models with well-known algorithms (such as Baum-Welch). Gestures are heavily dependent on several factors such as user posture, movement noise and physical activity. Differences in those factors are hard to eliminate by using only mathematical or statistical tools. Thus, to improve the performance of gesture recognition algorithms, embodiments of the invention may further utilize a “tag” for each gesture's training sample. These tags may identify not only the type of gesture (e.g. “EarTouch”) but also the activity during which it was performed (e.g. “in train” or “walking”). In this way, the training procedure will produce a separate model for each gesture/activity pair instead of each gesture. During the recognition phase, the context information will be used to choose the correct gesture/activity models in the same way as in training mode.
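  • A minimal sketch of how models might be keyed by gesture/activity pairs is shown below; the data layout, sample values and the stubbed-out training step are assumptions for illustration only:

```python
# Illustrative sketch: training samples tagged with both gesture and activity
# produce one model per (gesture, activity) pair; recognition then selects the
# model set matching the current activity. Real model training is stubbed out.
from collections import defaultdict

def train_model(samples):
    """Stand-in for real model training (e.g. Baum-Welch for an HMM)."""
    return {"num_samples": len(samples)}

def build_models(tagged_samples):
    """tagged_samples: list of (gesture, activity, sample) tuples."""
    grouped = defaultdict(list)
    for gesture, activity, sample in tagged_samples:
        grouped[(gesture, activity)].append(sample)
    return {key: train_model(samples) for key, samples in grouped.items()}

def models_for_activity(models, activity):
    """Pick the gesture models tagged with the user's current activity."""
    return {g: m for (g, a), m in models.items() if a == activity}

models = build_models([
    ("EarTouch", "walking",  [0.1, 0.4]),
    ("EarTouch", "in_train", [0.2, 0.3]),
    ("Shake",    "walking",  [1.5, -1.2]),
])
print(models_for_activity(models, "walking"))   # EarTouch and Shake "walking" models
```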
  • In another embodiment of the invention, an easy-to-use continuous learning module is used to collect enough data in order to make a system's HMM models reliable and to account for a user's gesture changes over time. The continuous learning module may employ a two-gestures confirm/ignore notification. For example, right after a gesture is performed, the user may indicate that the gesture is suitable to be included in the training set (or not) by performing simple always detectable gestures (e.g. two poses of the hand or whack gestures). Hence the new training sample data along with the detected activity are used to create new gesture/activity models or enhance existing ones.
  • Thus, by employing a staged pipeline gesture recognition process, and leveraging user context, gesture recognition may be performed with a high degree of accuracy in a power efficient manner.
  • FIG. 1A is a flow diagram of a process utilizing an embodiment of the invention. Flow diagrams as illustrated herein provide examples of sequences of various process actions. Although shown in a particular sequence or order, unless otherwise specified, the order of the actions can be modified. Thus, the illustrated implementations should be understood only as examples, and the illustrated processes can be performed in a different order, and some actions may be performed in parallel. Additionally, one or more actions can be omitted in various embodiments of the invention; thus, not all actions are required in every implementation. Other process flows are possible.
  • Data is collected from at least one sensor (e.g., a 3D accelerometer or gyroscope), 100. In one embodiment, the sensor is separate from a mobile processing device, and communicates the data via wireless protocols known in the art (e.g., WiFi, Bluetooth). In another embodiment, the sensor is included in the mobile processing device. In this embodiment, the data from the sensor indicates a motion from a user.
  • User context may also be retrieved from the mobile device, 105. User context may identify, for example, an application the mobile device is running or location of the device/user. The user device may then access a database that associates context and activity information inputs with the gestures that may be allowed at any point in time and which algorithms may be used to detect these gestures. Thus, user context is used as a filter for enabling a subset of gestures (e.g. “Eartouch” when a mobile device is executing a phone application). The user activity may further enable the choice of the right models during recognition (e.g. the “Eartouch” model that is tagged with “walking” as activity). The frequency of context and activity updates may be relatively low, as it corresponds with the user's context change events in daily life.
  • The entire gesture recognition processing pipeline may be enabled, 112, when it is determined that one or more gestures may be performed given the user context (e.g., the user is using the mobile device as a phone, and thus gestures are enabled), and/or a simple/easy-to-detect gesture (e.g. a shake of a wristwatch or a whack gesture on the device) has been performed by the user, 110. Otherwise, sensor data is discarded, 111. A Finite State Automata can be programmed with the desired behavior.
  • Embodiments of the invention may further perform a segmentation of the sensor data in intervals based on the energy levels of the data, 115. This segmentation may be “button-less” in that no user input is required to segment the sensor data into a “movement window.” Proper hysteresis may be used to smooth out high frequency variations of the energy value. As an example, energy may be measured by evaluating a sensor's standard deviation over a moving window. Data occurring outside the “movement window” is discarded, 111, while data occurring within the movement window is subsequently processed.
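  • A minimal sketch of such button-less, energy-based segmentation with hysteresis is given below; the window length and both thresholds are invented values, not parameters taken from the disclosure:

```python
# Illustrative sketch of button-less segmentation: movement "energy" is taken
# as the standard deviation over a short moving window, and hysteresis (a high
# start threshold, lower stop threshold) smooths out brief energy fluctuations.
from statistics import pstdev

def segment(stream, win=8, start_thr=0.5, stop_thr=0.2):
    """Return (start, end) index pairs of detected movement windows."""
    segments, in_movement, start = [], False, 0
    for i in range(win, len(stream) + 1):
        energy = pstdev(stream[i - win:i])
        if not in_movement and energy > start_thr:
            in_movement, start = True, i - win
        elif in_movement and energy < stop_thr:
            segments.append((start, i))
            in_movement = False
    if in_movement:
        segments.append((start, len(stream)))
    return segments

stream = [0.0] * 20 + [1.0, -1.0] * 10 + [0.0] * 20   # quiet, shake, quiet
print(segment(stream))   # one movement window covering the shaky region
```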
  • An entire data segment may be subsequently processed. In one embodiment, a low-computation Template Matching is executed by comparing characteristics of the current stream to be analyzed (e.g. signal duration, overall energy, minimum and maximum values for signal duration and energy levels) to a single template obtained from all training samples of “allowed gestures”, 120. In this way, for example, abnormally long or low-energy gestures will be discarded in the beginning of the pipeline without running computationally expensive HMM algorithms on an MPU.
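  • The coarse Template Matching gate might be sketched as follows, assuming (purely for illustration) that the pooled template stores min-max bounds on segment duration and windowed standard deviation:

```python
# Illustrative sketch: a single template of coarse signal characteristics
# (duration and energy ranges pooled from training samples of the allowed
# gestures) is used to cheaply discard implausible segments. Values are made up.
from statistics import pstdev

TEMPLATE = {                      # hypothetical pooled min/max bounds
    "duration": (10, 120),        # samples
    "energy":   (0.3, 3.0),       # windowed standard deviation
}

def matches_template(segment, template=TEMPLATE):
    duration, energy = len(segment), pstdev(segment)
    lo_d, hi_d = template["duration"]
    lo_e, hi_e = template["energy"]
    return lo_d <= duration <= hi_d and lo_e <= energy <= hi_e

print(matches_template([0.8, -0.7] * 20))   # True: plausible gesture segment
print(matches_template([0.01] * 500))       # False: too long and too weak
```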
  • In one embodiment, “allowed gestures” are further based on training samples and “tags” for each training sample identifying the appropriate user context for the gesture. For example, a user may be executing an application (e.g., a game) that only enables specific “shake” type gestures. Therefore, movements that do not exhibit similar signal characteristics (i.e., high maximum energy values) are discarded, as these movements are not enabled given the user context.
  • It is to be understood that decisions 110, 115 and 120 may be determined via low-complexity algorithms as described in the examples above, and that operations 100-120 may be performed by a low power processing unit. Thus, embodiments of the invention may enable continuous sensor data processing while duty-cycling the main processor. If the current signal matches at least one of the templates then the gesture's signal is “passed” to the main processing unit (waking up the main processor if necessary), 125. Otherwise, the signal is discarded, 111. Thus, the workload associated with gesture recognition processing is balanced between a low power processing unit and main processor.
  • FIG. 1B is an example sensor data stream. Assuming user context allows for gestures (as described in operation 110), interaction 1000 is segmented into three data segments (as described in operation 115)— potential gestures 1100, 1200 and 1300. In this example, potential gesture 1200 is abnormally long and thus discarded (as described in operation 120). Potential gestures 1100 and 1300 are passed to the MPU providing they match an allowed gesture template (as described in operation 125).
  • Returning to FIG. 1A, Normalization and Feature extraction may be performed on the passed gesture signal, if needed by the appropriate gesture algorithm (e.g., HMM) 130. In another embodiment, this operation may also be performed via an LPSU if the computation requirements allow. Normalization procedures may include, for example, re-sampling, amplitude normalization and average removal (for accelerometer) for tilt correction. Filtering may include, for example, an Exponential Moving Average low-pass filter.
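  • A minimal sketch of these normalization and filtering steps is given below; the smoothing factor and the particular sequence of operations are illustrative assumptions:

```python
# Illustrative sketch of the normalization step: mean removal (a crude tilt /
# gravity correction for an accelerometer axis), amplitude normalization and
# an Exponential Moving Average low-pass filter. Alpha is a made-up value.
def normalize(signal):
    mean = sum(signal) / len(signal)
    centered = [x - mean for x in signal]              # average removal
    peak = max(abs(x) for x in centered) or 1.0
    return [x / peak for x in centered]                # amplitude normalization

def ema_lowpass(signal, alpha=0.3):
    out, prev = [], signal[0]
    for x in signal:
        prev = alpha * x + (1 - alpha) * prev          # EMA smoothing
        out.append(prev)
    return out

raw = [0.2, 0.9, -0.7, 1.3, -1.1, 0.4]
print(ema_lowpass(normalize(raw)))
```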
  • Embodiments of the invention may further take as input the user context from 105 and produce as output a model gesture data set, 135. For example, to enable button-less interaction, one model for each allowed gesture plus a Filler model for filtering out spurious gestures not in the input vocabulary may be provided. If context is not available, all the gestures will be allowed in the current HMM “grammar”.
  • Similarly to speech recognition, Filler models may be constructed utilizing the entire sample set or “garbage” gestures that are not in the set of recognized gestures. An embodiment may utilize only the “not allowed” gestures (that is, the entire gesture vocabulary minus the allowed gestures) to create a Filler model that is optimized for a particular situation (it is optimized because it does not contain the allowed gestures). For example, if the entire gesture set is A-Z gestures and one particular interaction allows only A-D gestures, then E-Z gestures will be used to build the Filler model. Training a Filler model in real time may not be feasible if a system has a low latency requirement, hence the set of possible contexts may be enumerated and the associated Filler models pre-computed and stored. If not possible (e.g. all gestures are possible), a default Filler model may be used.
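  • One way such per-context Filler gesture sets could be enumerated and pre-computed is sketched below; the vocabulary and the context-to-gestures mapping are invented examples:

```python
# Illustrative sketch: the Filler model for a context is built from the gestures
# that are NOT allowed in that context, and the per-context sets are enumerated
# up front so filler models can be pre-computed. Gesture names are examples.
VOCABULARY = {chr(c) for c in range(ord("A"), ord("Z") + 1)}   # gestures "A".."Z"

CONTEXT_ALLOWED = {                     # hypothetical context-to-gesture sets
    "phone_call": {"A", "B", "C", "D"},
    "social":     {"S", "R"},
}

def filler_set(context):
    """Gestures used to train the Filler model for this context."""
    allowed = CONTEXT_ALLOWED.get(context, VOCABULARY)
    return (VOCABULARY - allowed) or VOCABULARY   # fall back to full vocabulary

# Pre-compute filler gesture sets for every known context ahead of time.
PRECOMPUTED_FILLERS = {ctx: filler_set(ctx) for ctx in CONTEXT_ALLOWED}
print(sorted(PRECOMPUTED_FILLERS["phone_call"]))   # ['E', 'F', ..., 'Z']
```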
  • In one embodiment of the invention, a gesture recognition result is produced from the sensor data using the model gesture and Filler algorithms, 140. Template Matching may further be performed in order to further alleviate false positives on gestures that are performed by the user but are not in the current input vocabulary of allowed gestures, 145. Similar to operation 120, processing will be executed to match the recognized gesture's data stream measurements (e.g. duration, energy) against the stored Template of the candidate gesture (obtained from training data) and not against the entire set of allowed gestures as in operation 120. If the candidate gesture's measurements match the Template, a gesture event is triggered to an upper layer system (e.g., an Operating System (OS)), 150. Otherwise, the gesture is discarded, 155. In one embodiment, it is assumed that a rejection during this portion of processing (i.e., MPU processing) indicates the user was in fact attempting to gesture an input command to the system; therefore, the user is notified of the rejection of said gesture.
  • Embodiments of the invention may further enable support of multiple gesture detection algorithms. Systems may require support for multiple gesture algorithms because a single gesture recognition algorithm may not be adequately accurate across different types of gestures. For example, gestures may be clustered into multiple types including dynamic gestures (e.g. write a letter in the air), static poses (e.g. hold your hand face up) and shake/whack gestures. For each of these gesture types, there are specific recognition algorithms that work best for that type. Thus, a mechanism is needed to select the appropriate algorithm. Running all algorithms in parallel and selecting the “best output” based on some metric is clearly not computationally efficient, especially with algorithms like HMM, which tend to be computationally intensive. Therefore, embodiments of the invention may incorporate a selector system to preselect an appropriate gesture recognition algorithm in real-time based on features of the sensor data and the user's context. The selector system may include a two-stage recognizer selector that decides which algorithm may run at any given time based on signal characteristics.
  • The first stage may perform a best-effort selection of one or more algorithms based on signal characteristics that can be measured before the complete gesture's raw data segment is available. For example it can base its selection on the instantaneous energy magnitude, spikes in the signal or time duration of the signal. The first stage may compare these features against a template matching database and enable the algorithms whose training gestures' signal characteristics match the input signal's characteristics.
  • When enabled, each algorithm identifies candidate gestures in the raw data stream. In general, a gesture's data stream is shorter than the entire period of time the algorithm has been enabled; furthermore, the algorithm may identify multiple gestures (e.g., multiple “shake” gestures or a series of poses) in the entire time window. Each enabled algorithm may perform an internal segmentation of the raw data stream by determining gestures' end points (e.g., HMM) or by finding specific patterns in the signal (e.g., peak detection). Therefore, some signal characteristics (such as the spectral characteristics or total energy content of a gesture) may be analyzed only after the gesture has been tentatively recognized and its associated data stream is available.
  • In subsequent processing, the second stage may analyze the data streams associated with each candidate gesture, compare calculated features (e.g., spectral content, energy content) against a Template Matching database and choose the best match among the algorithms, providing as output the recognized gesture.
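The following sketch illustrates one possible form of the second-stage comparison: spectral and energy features are computed for each candidate segment, and the algorithm whose template they fit best is kept. The scoring scheme, feature names and data layout are assumptions, not the patent's exact design.

```python
# Sketch of a second-stage selector: score each algorithm's candidate gesture
# by how well the segment's features fit that algorithm's template, keep the best.
import numpy as np

def segment_features(segment, sample_rate):
    """segment: 1-D numpy array of motion magnitude samples."""
    spectrum = np.abs(np.fft.rfft(segment))
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / sample_rate)
    dominant = freqs[int(np.argmax(spectrum[1:])) + 1]     # skip the DC bin
    return {"energy": float(np.sum(segment ** 2)), "dominant_freq": float(dominant)}

def second_stage_select(candidates, templates, sample_rate):
    """candidates: algo_name -> (gesture_id, segment);
    templates: algo_name -> {feature: (lo, hi)}.
    Returns (algo_name, gesture_id) with the smallest out-of-range error."""
    def error(feats, ranges):
        # Zero when every feature lies inside its template range.
        return sum(max(lo - feats[f], 0.0, feats[f] - hi)
                   for f, (lo, hi) in ranges.items() if f in feats)
    best = min(candidates.items(),
               key=lambda kv: error(segment_features(kv[1][1], sample_rate),
                                    templates[kv[0]]))
    return best[0], best[1][0]
```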
  • FIG. 2 is a block diagram of an embodiment of the invention. RSPre 210, upstream with respect to the Gesture Recognizers 220-250, is fed in real time by the raw data stream of sensor 290. RSPre enables one or more of Gesture Recognizers 220-250 based on measures obtained from the raw signals of sensor 290 and on the algorithms allowed by the user context. In one embodiment, User Context Filter (UCF) 200 retrieves the algorithms mapped to context via database 205. Templates of signals for any algorithm may be obtained from Template Matching Database 215 and a Template Matching procedure may be performed; hence only the subset of Gesture Recognizers 220-250 that match the characteristics of the incoming signal will be enabled. In one embodiment, a Template Matching operation will produce a similarity measure for each algorithm, and the first N-best algorithms will be chosen and activated if the similarity satisfies a predefined Similarity Threshold.
  • User Context Filter (UCF) 200 keeps track of current user context such as location, social context, physical activity, and system and application events (e.g., a phone call comes in). UCF 200 keeps track of the gestures allowed given the context and updates RSPre 210 in real time with the algorithms needed to recognize the allowed gestures. UCF 200 uses a Gestures-to-Algorithms Mapping database 205 that contains the unique mapping from each gesture ID to the algorithm used. For example, gestures “0” to “9” (waving the hand in the air) may be statically mapped in database 205 to HMM (used by recognizer 220), while poses such as “hand palm down/up” may be mapped to Decision Tree (used by recognizer 230). UCF 200 is fed by external applications that inform it which gestures are currently meaningful for the actual user context. For example, if a phone application is active, the “0” to “9” gestures will be activated and UCF 200 will activate only HMM. The output of UCF 200 (the allowed algorithms) is used by RSPre 210. This filter reduces false positives when a gesture “out of context” is being made by the user and detected by sensor 290.
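A minimal sketch of the Gestures-to-Algorithms bookkeeping just described, using the digit/pose example from this paragraph; the identifiers are illustrative only.

```python
# Sketch of the User Context Filter bookkeeping, assuming a static
# Gestures-to-Algorithms mapping like the example above (digits -> HMM,
# poses -> Decision Tree). Names are illustrative.

GESTURE_TO_ALGORITHM = {**{str(d): "hmm" for d in range(10)},
                        "palm_up": "decision_tree",
                        "palm_down": "decision_tree"}

def allowed_algorithms(allowed_gestures, mapping=GESTURE_TO_ALGORITHM):
    """Returns the set of algorithms RSPre is permitted to enable."""
    return {mapping[g] for g in allowed_gestures if g in mapping}

# A phone application becomes active: only digit gestures are meaningful,
# so only the HMM recognizer stays enabled.
print(allowed_algorithms([str(d) for d in range(10)]))   # {'hmm'}
```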
  • RSPre 210 provides appropriate hysteresis mechanisms in order to segment the data stream from sensor 290 into meaningful segments, for example using a Finite State Automaton whose transitions are based on similarity thresholds between the data from sensor 290 and the templates of database 215.
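One way such a hysteresis mechanism could look is sketched below: a segment opens when the similarity to the templates crosses an "enter" threshold and closes only when it falls below a lower "exit" threshold, so brief dips do not split a gesture. The two-state automaton and the threshold values are assumptions.

```python
# Sketch of a hysteresis-based segmenter in the spirit of RSPre's
# Finite State Automaton. Thresholds are assumed values.

def segment_stream(similarities, enter_th=0.7, exit_th=0.4):
    """similarities: per-sample similarity scores against the templates.
    Yields (start_index, end_index) pairs for tentative gesture segments."""
    state, start = "IDLE", None
    for i, s in enumerate(similarities):
        if state == "IDLE" and s >= enter_th:
            state, start = "ACTIVE", i          # segment opens
        elif state == "ACTIVE" and s < exit_th:
            yield (start, i)                    # segment closes
            state, start = "IDLE", None
    if state == "ACTIVE":
        yield (start, len(similarities))

print(list(segment_stream([0.1, 0.8, 0.9, 0.6, 0.5, 0.3, 0.1, 0.8, 0.2])))
# [(1, 5), (7, 8)]
```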
  • RSPost 260 is downstream of Gesture Recognizers 220-250 and is fed in real time by the recognized gesture events plus the raw data stream from sensor 290. If more than one gesture is recognized as a candidate in the same time interval, RSPost 260 will perform a Template Matching (accessing templates in database 265) and will output the most probable recognized gesture. RSPost 260 provides appropriate heuristic mechanisms in order to choose a single gesture if the Template Matching outputs more than one gesture ID. For example, a similarity measure may be generated by the Template Matching algorithm for each matching algorithm, and the best match will be chosen.
  • Database 265 contains the signal “templates” (e.g., min-max values of energy level or signal Fast Fourier Transform (FFT) characteristics) for each of Gesture Recognizers 220-250. For example, for dynamic movements recognized by HMM, the average gesture_energy may satisfy
      • Energy_Threshold_min < gesture_energy < Energy_Threshold_max
        and its FFT may have components at frequencies ~20 Hz. Shake gestures may be detected if the energy satisfies
      • gesture_energy > Energy_Threshold_max
        and its FFT has significant components at high frequencies. Signal templates may be automatically obtained from training gestures.
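For illustration, here is a sketch of this kind of energy/FFT template test with assumed threshold values; it routes a segment toward the dynamic (HMM) recognizer or the shake (peak-detection) recognizer. Thresholds, cutoff frequency and feature definitions are assumptions.

```python
# Sketch of an energy/FFT template test with assumed thresholds.
import numpy as np

ENERGY_THRESHOLD_MIN, ENERGY_THRESHOLD_MAX = 0.5, 5.0   # assumed values
HIGH_FREQ_CUTOFF_HZ = 10.0                               # assumed value

def match_energy_template(segment, sample_rate):
    """segment: 1-D numpy array of motion magnitude samples."""
    energy = float(np.mean(segment ** 2))
    spectrum = np.abs(np.fft.rfft(segment - np.mean(segment)))
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / sample_rate)
    high_freq_ratio = spectrum[freqs > HIGH_FREQ_CUTOFF_HZ].sum() / (spectrum.sum() + 1e-9)
    if ENERGY_THRESHOLD_MIN < energy < ENERGY_THRESHOLD_MAX:
        return "dynamic"                                  # route to the HMM recognizer
    if energy > ENERGY_THRESHOLD_MAX and high_freq_ratio > 0.5:
        return "shake"                                    # route to peak detection
    return "none"
```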
  • FIG. 3 is a flow diagram describing an embodiment of the invention. In this example, there are four algorithms present in a system (HMM 310, Decision Trees 320, Peak Detection 330 and Pitch/Roll Inertial 340). User context is analyzed to determine suitable algorithms to consider for sensor data, 350. In this example, user context eliminates Pitch/Roll Inertial 340 from being a suitable algorithm to process any incoming signal from system sensors.
  • The incoming signal is analyzed (via RSPre) to enable some of the remaining algorithms present in the system, 360. In this example, RSPre enables HMM 310 and Peak Detection 330 to run. These two algorithms run in parallel and the results are analyzed, via RSPost, to determine the proper algorithm to use (if more than one is enabled via RSPre) and the gesture from the incoming signal, 370. In this example, RSPost chooses HMM 310 along with the gesture recognized by HMM. The Template Matching algorithms used by RSPre and RSPost may utilize, for example, the time duration, energy magnitude and frequency spectrum characteristics of the sensor data. In one embodiment, RSPre analyzes the incoming signal using time duration or energy magnitude characteristics of the incoming signal, while RSPost analyzes the incoming signal using frequency spectrum characteristics of the incoming signal.
  • FIG. 4 is a diagram of time-domain signal characteristics that may be used by the Template Matching algorithms in RSPre and RSPost, such as a running average of movement energy (here represented by the standard deviation of the signal) or its magnitude, to decide whether, for example, a pose (segments 410, 430 and 450), a dynamic gesture (segment 420) or a shake (segment 440) is being performed, and to segment the data stream accordingly into stationary, dynamic or high-energy intervals. In stationary intervals, for example, a Decision Tree algorithm will be enabled as the algorithm of choice for static “poses” 410, 430 and 450. For time intervals where the amplitude of the motion is above a certain threshold but less than “high energy” (e.g., segment 420), a statistical HMM algorithm will be enabled, as the state-of-the-art algorithm for dynamic gestures. For time intervals where the amplitude of the motion is “high energy” (e.g., segment 440), a Peak Detection algorithm will be enabled.
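A short sketch of this running standard-deviation segmentation: each window is labeled stationary (pose, Decision Tree), dynamic (HMM) or high energy (Peak Detection). Window size and thresholds are assumed values.

```python
# Sketch of time-domain segmentation by running standard deviation,
# in the spirit of FIG. 4. Window size and thresholds are assumed.
import numpy as np

def label_windows(signal, window=32, low=0.05, high=0.8):
    labels = []
    for start in range(0, len(signal) - window + 1, window):
        sd = float(np.std(signal[start:start + window]))
        if sd < low:
            labels.append("stationary")    # static pose -> Decision Tree
        elif sd < high:
            labels.append("dynamic")       # dynamic gesture -> HMM
        else:
            labels.append("high_energy")   # shake / whack -> Peak Detection
    return labels
```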
  • The Template Matching algorithms used by RSPre and RSPost may rely, for example, on a min-max comparison of features calculated over a sliding window of the signal(s), such as the mean, standard deviation and spectral component energy.
  • The Template Matching algorithms may be applied to each signal separately or to a combined measure derived from the signals. For example, a “movement magnitude” measure may be derived from a 3D accelerometer.
  • The templates may be generated using the training data. For example, if an accelerometer is used to recognize gestures, all of the training data for HMM-based gestures may provide the min-max values and spectral content for the HMM algorithm for the X, Y and Z axes and for the overall magnitude.
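The sketch below illustrates this kind of template generation from accelerometer training data, producing per-axis and overall-magnitude min/max values (the combined "movement magnitude" mentioned earlier). The data layout is assumed.

```python
# Sketch of template generation from training data for an accelerometer-based
# recognizer (e.g., the gestures handled by HMM). Data layout is assumed.
import numpy as np

def build_template(training_samples):
    """training_samples: list of (N, 3) numpy arrays of X, Y, Z acceleration."""
    template = {}
    for axis, name in enumerate("xyz"):
        values = np.concatenate([s[:, axis] for s in training_samples])
        template[name] = (float(values.min()), float(values.max()))
    magnitudes = np.concatenate(
        [np.sqrt((s ** 2).sum(axis=1)) for s in training_samples])   # movement magnitude
    template["magnitude"] = (float(magnitudes.min()), float(magnitudes.max()))
    return template
```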
  • Context may also be used to constrain the choice to a subset of the possible gestures and algorithms by indicating either allowed or disallowed gestures. For example, an application may define two different gestures, recognized by two different algorithms, for rejecting or accepting a call: a “hand palm up” pose for rejecting the incoming call and a movement towards the user's ear for accepting the call. In this specific case, UCF will enable only Decision Tree and HMM, the two algorithms needed for recognizing the allowed gestures. Accordingly, RSPre and RSPost will compute Template Matching only on this subset of algorithms.
  • FIG. 5 shows a high level architecture of a system according to one embodiment of the invention. System 500 is a scalable and generic system, able to discriminate gestures ranging from dynamic “high energy” gestures down to static poses. Gesture processing as described above is performed in real time, depending on signal characteristics and user context. System 500 includes sensors 550, communication unit 530, memory 520 and processing unit 510, each of which is operatively coupled via system bus 540. It is to be understood that each component of system 500 may be included in a single device or distributed across multiple devices.
  • In one embodiment, system 500 utilizes gesture processing modules 525 that include the functionality described above. Gesture processing modules 525 are included in a storage area of memory 520, and are executed via processing unit 510. In one embodiment, processing unit 510 includes a low-power processing sub-unit and a main processing sub-unit, each to execute specific gesture processing modules as described above.
  • Sensors 550 may communicate data to gesture processing modules 525 via communication unit 530 in a wired and/or wireless manner. Examples of wired communication means may include, without limitation, a wire, cable, bus, printed circuit board (PCB), Ethernet connection, backplane, switch fabric, semiconductor material, twisted-pair wire, co-axial cable, fiber optic connection, and so forth. Examples of wireless communication means may include, without limitation, a radio channel, satellite channel, television channel, broadcast channel, infrared channel, radio-frequency (RF) channel, Wireless Fidelity (WiFi) channel, a portion of the RF spectrum, and/or one or more licensed or license-free frequency bands. Sensors 550 may include any device that provides three-dimensional readings (along the x, y, and z axes) for measuring linear acceleration and sensor orientation (e.g., an accelerometer).
  • Various components referred to above as processes, servers, or tools described herein may be a means for performing the functions described. Each component described herein includes software or hardware, or a combination of these. The components can be implemented as software modules, hardware modules, special-purpose hardware (e.g., application-specific hardware, ASICs, DSPs, etc.), embedded controllers, hardwired circuitry, etc. Software content (e.g., data, instructions, and configuration) may be provided via an article of manufacture including a computer readable storage medium, which provides content that represents instructions that can be executed. The content may result in a computer performing various functions/operations described herein. A computer readable storage medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form accessible by a computer (e.g., computing device, electronic system, etc.), such as recordable/non-recordable media (e.g., read only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, etc.). The content may be directly executable (“object” or “executable” form), source code, or difference code (“delta” or “patch” code). A computer readable storage medium may also include a storage or database from which content can be downloaded. A computer readable medium may also include a device or product having content stored thereon at a time of sale or delivery. Thus, delivering a device with stored content, or offering content for download over a communication medium, may be understood as providing an article of manufacture with such content described herein.

Claims (24)

1. An article of manufacture comprising a machine-readable storage medium that provides instructions that, if executed by a machine, will cause the machine to perform operations comprising:
receiving data from a sensor indicating a motion, the sensor having an accelerometer;
determining, via a first set of one or more algorithms, whether the motion is a gestural motion based on at least one of time duration of the data and an energy level of the data; and
determining, via a second set of one or more algorithms, a candidate gesture based on the data in response to determining the motion is a gestural motion, the second set of algorithm(s) to include a gesture recognition algorithm.
2. The article of manufacture of claim 1, the operations further comprising:
discarding the data in response to determining the motion is not a gestural motion.
3. The article of manufacture of claim 1, wherein the first set of algorithm(s) includes one or more low-complexity algorithms and the machine includes a low-power processor and a main processing unit, the first set of algorithm(s) to be executed via the low-power processor and the second set of algorithm(s) to be executed via the main processing unit.
4. The article of manufacture of claim 1, wherein the gesture recognition algorithm is based on a Hidden Markov Model (HMM).
5. The article of manufacture of claim 4, wherein determining a candidate gesture comprises:
using context of a system user to select a subset of one or more allowed gestures; and
restricting gesture models loaded by the HMM algorithm to the subset of allowed gesture(s).
6. The article of manufacture of claim 4, wherein determining a candidate gesture comprises:
using context of a system user to reject a subset of one or more disallowed gestures; and
selecting an HMM Filler model that discards the subset of disallowed gesture(s).
7. The article of manufacture of claim 4, wherein an HMM training set and one or more gesture models is based on physical activity of a user of the machine.
8. The article of manufacture of claim 4, the gesture rejection algorithm to validate a gesture recognized by the HMM algorithm by comparing duration and energy of the gestural motion with one or more of a minimum and a maximum value of duration and energy obtained from a database of training data.
9. The article of manufacture of claim 1, wherein the sensor is included in the machine, the machine comprises a mobile device, and determining, via the first algorithm(s), whether the motion is a gestural motion is further based on at least one of a user context of the mobile device, and an explicit action from the user to indicate a period of gesture commands.
10. The article of manufacture of claim 1, wherein determining a candidate gesture based on the data comprises
accessing a database of one or more example gesture inputs, the example gesture input(s) to include a minimum and a maximum time duration; and
verifying the time duration of the gestural motion is within the minimum and maximum time durations of an example gesture input.
11. The article of manufacture of claim 1, wherein the data from the sensor is included in a series of data segments, one or more segments to indicate a motion defined by an energy threshold, and receiving the data from the sensor is in response to the data exceeding the energy threshold.
12. An article of manufacture comprising a machine-readable storage medium that provides instructions that, if executed by a machine, will cause the machine to perform operations comprising:
receiving data from a sensor indicating a motion, the sensor having an accelerometer;
determining a subset of one or more gesture recognition algorithms from a plurality of gesture recognition algorithms based, at least in part, on one or more signal characteristics of the data; and
determining a gesture from the data from the sensor based, at least in part, on applying the subset of gesture recognition algorithm(s) to the data.
13. The article of manufacture of claim 12, wherein the signal characteristic(s) of the data comprise an energy magnitude of the data.
14. The article of manufacture of claim 13, wherein determining the subset of gesture recognition algorithms is based, at least in part, on comparing the energy magnitude of the data with one or more magnitude values associated with one of the plurality of gesture algorithms.
15. The article of manufacture of claim 12, wherein the signal characteristic(s) of the data comprise a time duration of the data.
16. The article of manufacture of claim 15, wherein determining the subset of gesture recognition algorithms is based, at least in part, on comparing the time duration of the data with one or more time values associated with one of the plurality of gesture algorithms.
17. The article of manufacture of claim 12, wherein the signal characteristic(s) of the data comprise a frequency spectrum of the data.
18. The article of manufacture of claim 17, wherein determining the subset of gesture recognition algorithms is based, at least in part, on comparing the frequency spectrum of the data with one or more spectrum patterns stored associated with one of the plurality of gesture algorithms.
19. A method comprising:
receiving data from a sensor indicating a motion, the sensor having an accelerometer;
determining, via a first set of one or more algorithms, whether the motion is a gestural motion based on at least one of time duration of the data and an energy level of the data; and
determining, via a second set of one or more algorithms, a candidate gesture based on the data in response to determining the motion is a gestural motion, the second set of algorithm(s) to include a gesture recognition algorithm.
20. The method of claim 19, the first set of algorithm(s) to comprise one or more low-complexity algorithms to be executed via a low-power processor and the second set of algorithm(s) to be executed via a main processing unit.
21. The method of claim 19, wherein the second set of algorithm(s) includes a Hidden Markov Model (HMM) gesture rejection algorithm and determining a candidate gesture comprises:
using context of a system user to select a subset of one or more allowed gestures; and
restricting gestures used by the HMM algorithm to the subset of allowed gesture(s).
22. A method comprising:
receiving data from a sensor indicating a motion, the sensor having an accelerometer;
determining a subset of one or more gesture recognition algorithms from a plurality of gesture recognition algorithms based, at least in part, on one or more signal characteristics of the data; and
determining a gesture from the data from the sensor based, at least in part, on applying the subset of gesture recognition algorithms to the data.
23. The method of claim 22, wherein the one or more signal characteristic(s) of the data comprise at least one of
an energy magnitude of the data,
a time duration of the data, and
a frequency spectrum of the data.
24. The method of claim 23, wherein determining the subset of gesture recognition algorithms is based, at least in part, on at least one of
comparing the energy magnitude of the data with one or more magnitude values associated with one of the plurality of gesture algorithms if the signal characteristic(s) of the data comprise an energy magnitude of the data,
comparing the time duration of the data with time values associated with one of the plurality of gesture algorithms if the signal characteristic(s) of the data comprise a time duration of the data, and
comparing the frequency spectrum of the data with spectrum patterns associated with one of the plurality of gesture algorithms if the signal characteristic(s) of the data comprise a frequency spectrum of the data.
US12/835,079 2010-07-13 2010-07-13 Efficient gesture processing Abandoned US20120016641A1 (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
US12/835,079 US20120016641A1 (en) 2010-07-13 2010-07-13 Efficient gesture processing
PCT/US2011/043319 WO2012009214A2 (en) 2010-07-13 2011-07-08 Efficient gesture processing
CN201180034400.9A CN102985897B (en) 2010-07-13 2011-07-08 Efficient gesture processes
TW100124609A TWI467418B (en) 2010-07-13 2011-07-12 Method for efficient gesture processing and computer program product
US14/205,210 US9535506B2 (en) 2010-07-13 2014-03-11 Efficient gesture processing
US15/397,511 US10353476B2 (en) 2010-07-13 2017-01-03 Efficient gesture processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US12/835,079 US20120016641A1 (en) 2010-07-13 2010-07-13 Efficient gesture processing

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/205,210 Division US9535506B2 (en) 2010-07-13 2014-03-11 Efficient gesture processing

Publications (1)

Publication Number Publication Date
US20120016641A1 true US20120016641A1 (en) 2012-01-19

Family

ID=45467621

Family Applications (3)

Application Number Title Priority Date Filing Date
US12/835,079 Abandoned US20120016641A1 (en) 2010-07-13 2010-07-13 Efficient gesture processing
US14/205,210 Active 2031-09-10 US9535506B2 (en) 2010-07-13 2014-03-11 Efficient gesture processing
US15/397,511 Active US10353476B2 (en) 2010-07-13 2017-01-03 Efficient gesture processing

Family Applications After (2)

Application Number Title Priority Date Filing Date
US14/205,210 Active 2031-09-10 US9535506B2 (en) 2010-07-13 2014-03-11 Efficient gesture processing
US15/397,511 Active US10353476B2 (en) 2010-07-13 2017-01-03 Efficient gesture processing

Country Status (4)

Country Link
US (3) US20120016641A1 (en)
CN (1) CN102985897B (en)
TW (1) TWI467418B (en)
WO (1) WO2012009214A2 (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103685208B (en) * 2012-09-25 2017-07-14 华为技术有限公司 User data mask method, terminal device and server
TWI497424B (en) * 2012-12-14 2015-08-21 Univ Tajen Real time human body poses identification system
US9110561B2 (en) * 2013-08-12 2015-08-18 Apple Inc. Context sensitive actions
CN103941856B (en) * 2014-02-19 2017-12-26 联想(北京)有限公司 A kind of information processing method and electronic equipment
US10203762B2 (en) * 2014-03-11 2019-02-12 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
US9639231B2 (en) * 2014-03-17 2017-05-02 Google Inc. Adjusting information depth based on user's attention
US10852838B2 (en) 2014-06-14 2020-12-01 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
US9746929B2 (en) 2014-10-29 2017-08-29 Qualcomm Incorporated Gesture recognition using gesture elements
JP2018506773A (en) 2014-12-16 2018-03-08 ソマティクス, インコーポレイテッド Method and system for monitoring and influencing gesture-based behavior
US9804679B2 (en) 2015-07-03 2017-10-31 Google Inc. Touchless user interface navigation using gestures
CN106610716B (en) 2015-10-21 2019-08-27 华为技术有限公司 A kind of gesture identification method and device
DE112015007219T5 (en) * 2015-12-23 2021-09-09 Intel Corporation Touch gesture recognition assessment
US10429935B2 (en) 2016-02-08 2019-10-01 Comcast Cable Communications, Llc Tremor correction for gesture recognition
US10824955B2 (en) 2016-04-06 2020-11-03 International Business Machines Corporation Adaptive window size segmentation for activity recognition
CN107346207B (en) * 2017-06-30 2019-12-20 广州幻境科技有限公司 Dynamic gesture segmentation recognition method based on hidden Markov model
US10984341B2 (en) * 2017-09-27 2021-04-20 International Business Machines Corporation Detecting complex user activities using ensemble machine learning over inertial sensors data
CN108537147B (en) * 2018-03-22 2021-12-10 东华大学 Gesture recognition method based on deep learning
CN109100537B (en) * 2018-07-19 2021-04-20 百度在线网络技术(北京)有限公司 Motion detection method, apparatus, device, and medium
CN109460260B (en) * 2018-10-24 2021-07-09 瑞芯微电子股份有限公司 Method and device for quickly starting up
CN110336768B (en) * 2019-01-22 2021-07-20 西北大学 Situation prediction method based on combined hidden Markov model and genetic algorithm
CN110032932B (en) * 2019-03-07 2021-09-21 哈尔滨理工大学 Human body posture identification method based on video processing and decision tree set threshold
IT201900019037A1 (en) * 2019-10-16 2021-04-16 St Microelectronics Srl PERFECTED METHOD FOR DETECTING A WRIST TILT GESTURE AND ELECTRONIC UNIT AND WEARABLE ELECTRONIC DEVICE IMPLEMENTING THE SAME
DE102021208686A1 (en) * 2020-09-23 2022-03-24 Robert Bosch Engineering And Business Solutions Private Limited CONTROL AND METHOD FOR GESTURE RECOGNITION AND GESTURE RECOGNITION DEVICE
CN111964674B (en) * 2020-10-23 2021-01-15 四川写正智能科技有限公司 Method for judging read-write state by combining acceleration sensor and mobile terminal
CN113918019A (en) * 2021-10-19 2022-01-11 亿慧云智能科技(深圳)股份有限公司 Gesture recognition control method and device for terminal equipment, terminal equipment and medium

Family Cites Families (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE69032645T2 (en) * 1990-04-02 1999-04-08 Koninkl Philips Electronics Nv Data processing system with input data based on gestures
US6304674B1 (en) * 1998-08-03 2001-10-16 Xerox Corporation System and method for recognizing user-specified pen-based gestures using hidden markov models
US6466198B1 (en) 1999-11-05 2002-10-15 Innoventions, Inc. View navigation and magnification of a hand-held device with a display
AU2003275134A1 (en) * 2002-09-19 2004-04-08 The Penn State Research Foundation Prosody based audio/visual co-analysis for co-verbal gesture recognition
EP1408443B1 (en) * 2002-10-07 2006-10-18 Sony France S.A. Method and apparatus for analysing gestures produced by a human, e.g. for commanding apparatus by gesture recognition
FI20022282A0 (en) 2002-12-30 2002-12-30 Nokia Corp Method for enabling interaction in an electronic device and an electronic device
US8745541B2 (en) * 2003-03-25 2014-06-03 Microsoft Corporation Architecture for controlling a computer using hand gestures
FI117308B (en) * 2004-02-06 2006-08-31 Nokia Corp gesture Control
US7627142B2 (en) * 2004-04-02 2009-12-01 K-Nfb Reading Technology, Inc. Gesture processing with low resolution images with high resolution processing for optical character recognition for a reading machine
GB2419433A (en) * 2004-10-20 2006-04-26 Glasgow School Of Art Automated Gesture Recognition
KR100597798B1 (en) 2005-05-12 2006-07-10 삼성전자주식회사 Method for offering to user motion recognition information in portable terminal
US9910497B2 (en) * 2006-02-08 2018-03-06 Oblong Industries, Inc. Gestural control of autonomous and semi-autonomous systems
US8537112B2 (en) * 2006-02-08 2013-09-17 Oblong Industries, Inc. Control system for navigating a principal dimension of a data space
US20090265671A1 (en) * 2008-04-21 2009-10-22 Invensense Mobile devices with motion gesture recognition
US8253770B2 (en) * 2007-05-31 2012-08-28 Eastman Kodak Company Residential video communication system
US9198621B2 (en) * 2007-06-18 2015-12-01 University of Pittsburgh—of the Commonwealth System of Higher Education Method, apparatus and system for food intake and physical activity assessment
US9261979B2 (en) * 2007-08-20 2016-02-16 Qualcomm Incorporated Gesture-based mobile interaction
US8325978B2 (en) * 2008-10-30 2012-12-04 Nokia Corporation Method, apparatus and computer program product for providing adaptive gesture analysis
TWM361902U (en) * 2009-01-22 2009-08-01 Univ Lunghwa Sci & Technology Framework with human body posture identification function
JP2010262557A (en) 2009-05-11 2010-11-18 Sony Corp Information processing apparatus and method
US8145594B2 (en) * 2009-05-29 2012-03-27 Microsoft Corporation Localized gesture aggregation
KR100981200B1 (en) 2009-06-02 2010-09-14 엘지전자 주식회사 A mobile terminal with motion sensor and a controlling method thereof
US8907897B2 (en) 2009-06-16 2014-12-09 Intel Corporation Optical capacitive thumb control with pressure sensor
US8674951B2 (en) 2009-06-16 2014-03-18 Intel Corporation Contoured thumb touch sensor apparatus
WO2011005865A2 (en) * 2009-07-07 2011-01-13 The Johns Hopkins University A system and method for automated disease assessment in capsule endoscopy
KR20110069476A (en) 2009-12-17 2011-06-23 주식회사 아이리버 Hand hrld electronic device to reflecting grip of user and method thereof
US8390648B2 (en) * 2009-12-29 2013-03-05 Eastman Kodak Company Display system for personalized consumer goods
US8432368B2 (en) * 2010-01-06 2013-04-30 Qualcomm Incorporated User interface methods and systems for providing force-sensitive input
US9477324B2 (en) * 2010-03-29 2016-10-25 Hewlett-Packard Development Company, L.P. Gesture processing
US20120016641A1 (en) 2010-07-13 2012-01-19 Giuseppe Raffa Efficient gesture processing
US8929600B2 (en) * 2012-12-19 2015-01-06 Microsoft Corporation Action recognition based on depth maps

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6072494A (en) * 1997-10-15 2000-06-06 Electric Planet, Inc. Method and apparatus for real-time gesture recognition
US6990639B2 (en) * 2002-02-07 2006-01-24 Microsoft Corporation System and process for controlling electronic components in a ubiquitous computing environment using multimodal integration
US20050278559A1 (en) * 2004-06-10 2005-12-15 Marvell World Trade Ltd. Low power computer with main and auxiliary processors
US20080259042A1 (en) * 2007-04-17 2008-10-23 Sony Ericsson Mobile Communications Ab Using touches to transfer information between devices

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Gesture spotting with body worn internal sensors to detect user activities, by Junker et al., published June 2008. *
Online, Interactive Learning of Gestures for Human/Robot Interfaces, by Lee et al., published 04-1996 *

Cited By (113)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110025901A1 (en) * 2009-07-29 2011-02-03 Canon Kabushiki Kaisha Movement detection apparatus and movement detection method
US8610785B2 (en) * 2009-07-29 2013-12-17 Canon Kabushiki Kaisha Movement detection apparatus and movement detection method
US10353476B2 (en) * 2010-07-13 2019-07-16 Intel Corporation Efficient gesture processing
US9535506B2 (en) 2010-07-13 2017-01-03 Intel Corporation Efficient gesture processing
US20120038555A1 (en) * 2010-08-12 2012-02-16 Research In Motion Limited Method and Electronic Device With Motion Compensation
US20130268900A1 (en) * 2010-12-22 2013-10-10 Bran Ferren Touch sensor gesture recognition for operation of mobile devices
US20180011619A1 (en) * 2010-12-27 2018-01-11 Sling Media Inc. Systems and methods for adaptive gesture recognition
US20130077820A1 (en) * 2011-09-26 2013-03-28 Microsoft Corporation Machine learning gesture detection
US9811255B2 (en) 2011-09-30 2017-11-07 Intel Corporation Detection of gesture data segmentation in mobile devices
US20130147704A1 (en) * 2011-12-09 2013-06-13 Hsuan-Hao Kuo Electronic device providing shake gesture identification and method
US9189062B2 (en) 2012-03-07 2015-11-17 Google Technology Holdings LLC Portable electronic device and method for controlling operation thereof based on user motion
TWI578179B (en) * 2012-03-07 2017-04-11 谷歌科技控股有限責任公司 Portable electronic device and method for controlling operation thereof based on user motion
WO2013133923A1 (en) * 2012-03-07 2013-09-12 Motorola Mobility Llc Portable electronic device and method for controlling operation thereof based on user motion
CN104160686A (en) * 2012-03-07 2014-11-19 摩托罗拉移动有限责任公司 Portable electronic device and method for controlling operation thereof based on user motion
US9671566B2 (en) 2012-06-11 2017-06-06 Magic Leap, Inc. Planar waveguide apparatus with diffraction element(s) and system employing same
CN103513894A (en) * 2012-06-20 2014-01-15 三星电子株式会社 Display apparatus, remote controlling apparatus and control method thereof
EP2677759A1 (en) * 2012-06-20 2013-12-25 Samsung Electronics Co., Ltd. Display apparatus, remote controlling apparatus and control method thereof
US9223416B2 (en) * 2012-06-20 2015-12-29 Samsung Electronics Co., Ltd. Display apparatus, remote controlling apparatus and control method thereof
US8988342B2 (en) 2012-06-20 2015-03-24 Samsung Electronics Co., Ltd. Display apparatus, remote controlling apparatus and control method thereof
US9147057B2 (en) 2012-06-28 2015-09-29 Intel Corporation Techniques for device connections using touch gestures
US20140009389A1 (en) * 2012-07-06 2014-01-09 Funai Electric Co., Ltd. Electronic Information Terminal and Display Method of Electronic Information Terminal
US9367236B2 (en) 2012-10-05 2016-06-14 Google Inc. System and method for processing touch actions
US9952761B1 (en) 2012-10-05 2018-04-24 Google Llc System and method for processing touch actions
US9746926B2 (en) 2012-12-26 2017-08-29 Intel Corporation Techniques for gesture-based initiation of inter-device wireless connections
US9110663B2 (en) 2013-01-22 2015-08-18 Google Technology Holdings LLC Initialize a computing device to perform an action
US10078373B2 (en) * 2013-02-15 2018-09-18 Orange Method of temporal segmentation of an instrumented gesture, associated device and terminal
US20140232642A1 (en) * 2013-02-15 2014-08-21 Orange Method of Temporal Segmentation of an Instrumented Gesture, Associated Device and Terminal
US9442570B2 (en) * 2013-03-13 2016-09-13 Google Technology Holdings LLC Method and system for gesture recognition
US20140282270A1 (en) * 2013-03-13 2014-09-18 Motorola Mobility Llc Method and System for Gesture Recognition
WO2014143032A1 (en) * 2013-03-15 2014-09-18 Intel Corporation Continuous interaction learning and detection in real-time
US10366345B2 (en) 2013-03-15 2019-07-30 Intel Corporation Continuous interaction learning and detection in real-time
US9390380B2 (en) 2013-03-15 2016-07-12 Intel Corporation Continuous interaction learning and detection in real-time
US20140282235A1 (en) * 2013-03-18 2014-09-18 Fujitsu Limited Information processing device
US9507426B2 (en) * 2013-03-27 2016-11-29 Google Inc. Using the Z-axis in user interfaces for head mountable displays
US9213403B1 (en) 2013-03-27 2015-12-15 Google Inc. Methods to pan, zoom, crop, and proportionally move on a head mountable display
US9811154B2 (en) 2013-03-27 2017-11-07 Google Inc. Methods to pan, zoom, crop, and proportionally move on a head mountable display
US10216403B2 (en) 2013-03-29 2019-02-26 Orange Method to unlock a screen using a touch input
US9612403B2 (en) 2013-06-11 2017-04-04 Magic Leap, Inc. Planar waveguide apparatus with diffraction element(s) and system employing same
EP2821893A2 (en) 2013-06-27 2015-01-07 Orange Method for recognising an gesture and a context, related device, user terminal and computer program
US20150002389A1 (en) * 2013-06-27 2015-01-01 Orange Method for Recognizing a Performed Gesture, Device, User Terminal and Associated Computer Program
US10402060B2 (en) 2013-06-28 2019-09-03 Orange System and method for gesture disambiguation
EP2818978A1 (en) * 2013-06-28 2014-12-31 Orange System and method for gesture disambiguation
US10473459B2 (en) 2013-07-12 2019-11-12 Magic Leap, Inc. Method and system for determining user input based on totem
US10295338B2 (en) 2013-07-12 2019-05-21 Magic Leap, Inc. Method and system for generating map data from an image
US10288419B2 (en) 2013-07-12 2019-05-14 Magic Leap, Inc. Method and system for generating a virtual user interface related to a totem
US10641603B2 (en) 2013-07-12 2020-05-05 Magic Leap, Inc. Method and system for updating a virtual world
US10228242B2 (en) * 2013-07-12 2019-03-12 Magic Leap, Inc. Method and system for determining user input based on gesture
US10352693B2 (en) 2013-07-12 2019-07-16 Magic Leap, Inc. Method and system for obtaining texture data of a space
US9651368B2 (en) 2013-07-12 2017-05-16 Magic Leap, Inc. Planar waveguide apparatus configured to return light therethrough
US10408613B2 (en) 2013-07-12 2019-09-10 Magic Leap, Inc. Method and system for rendering virtual content
US10495453B2 (en) 2013-07-12 2019-12-03 Magic Leap, Inc. Augmented reality system totems and methods of using same
US10533850B2 (en) 2013-07-12 2020-01-14 Magic Leap, Inc. Method and system for inserting recognized object data into a virtual world
US11221213B2 (en) 2013-07-12 2022-01-11 Magic Leap, Inc. Method and system for generating a retail experience using an augmented reality system
US11060858B2 (en) 2013-07-12 2021-07-13 Magic Leap, Inc. Method and system for generating a virtual user interface related to a totem
US10571263B2 (en) 2013-07-12 2020-02-25 Magic Leap, Inc. User and object interaction with an augmented reality scenario
US11029147B2 (en) 2013-07-12 2021-06-08 Magic Leap, Inc. Method and system for facilitating surgery using an augmented reality system
US20150234477A1 (en) * 2013-07-12 2015-08-20 Magic Leap, Inc. Method and system for determining user input based on gesture
US10866093B2 (en) 2013-07-12 2020-12-15 Magic Leap, Inc. Method and system for retrieving data in response to user input
US10767986B2 (en) 2013-07-12 2020-09-08 Magic Leap, Inc. Method and system for interacting with user interfaces
US9857170B2 (en) 2013-07-12 2018-01-02 Magic Leap, Inc. Planar waveguide apparatus having a plurality of diffractive optical elements
US11656677B2 (en) 2013-07-12 2023-05-23 Magic Leap, Inc. Planar waveguide apparatus with diffraction element(s) and system employing same
US10591286B2 (en) 2013-07-12 2020-03-17 Magic Leap, Inc. Method and system for generating virtual rooms
US9952042B2 (en) 2013-07-12 2018-04-24 Magic Leap, Inc. Method and system for identifying a user location
US9791939B2 (en) 2013-07-16 2017-10-17 Google Technology Holdings LLC Method and apparatus for selecting between multiple gesture recognition systems
US9939916B2 (en) 2013-07-16 2018-04-10 Google Technology Holdings LLC Method and apparatus for selecting between multiple gesture recognition systems
US10331223B2 (en) 2013-07-16 2019-06-25 Google Technology Holdings LLC Method and apparatus for selecting between multiple gesture recognition systems
US9477314B2 (en) 2013-07-16 2016-10-25 Google Technology Holdings LLC Method and apparatus for selecting between multiple gesture recognition systems
US11249554B2 (en) 2013-07-16 2022-02-15 Google Technology Holdings LLC Method and apparatus for selecting between multiple gesture recognition systems
CN103345627A (en) * 2013-07-23 2013-10-09 清华大学 Action recognition method and device
US11513610B2 (en) 2013-08-07 2022-11-29 Nike, Inc. Gesture recognition
US20150046886A1 (en) * 2013-08-07 2015-02-12 Nike, Inc. Gesture recognition
US11861073B2 (en) 2013-08-07 2024-01-02 Nike, Inc. Gesture recognition
US11243611B2 (en) * 2013-08-07 2022-02-08 Nike, Inc. Gesture recognition
JP2016530660A (en) * 2013-09-13 2016-09-29 クアルコム,インコーポレイテッド Context-sensitive gesture classification
WO2015038866A1 (en) * 2013-09-13 2015-03-19 Qualcomm Incorporated Context-sensitive gesture classification
US9582737B2 (en) 2013-09-13 2017-02-28 Qualcomm Incorporated Context-sensitive gesture classification
EP2881841A1 (en) * 2013-12-03 2015-06-10 Movea Method for continuously recognising gestures by a user of a hand-held mobile terminal provided with a motion sensor unit, and associated device
US9665180B2 (en) 2013-12-03 2017-05-30 Movea Method for continuous recognition of gestures of a user of a handheld mobile terminal fitted with a motion sensor assembly, and related device
FR3014216A1 (en) * 2013-12-03 2015-06-05 Movea METHOD FOR CONTINUOUSLY RECOGNIZING GESTURES OF A USER OF A PREHENSIBLE MOBILE TERMINAL HAVING A MOTION SENSOR ASSEMBLY, AND DEVICE THEREFOR
US20150177841A1 (en) * 2013-12-20 2015-06-25 Lenovo (Singapore) Pte, Ltd. Enabling device features according to gesture input
US9971412B2 (en) * 2013-12-20 2018-05-15 Lenovo (Singapore) Pte. Ltd. Enabling device features according to gesture input
US20150185837A1 (en) * 2013-12-27 2015-07-02 Kofi C. Whitney Gesture-based waking and control system for wearable devices
US9513703B2 (en) * 2013-12-27 2016-12-06 Intel Corporation Gesture-based waking and control system for wearable devices
US20150230734A1 (en) * 2014-02-17 2015-08-20 Hong Kong Baptist University Gait measurement with 3-axes accelerometer/gyro in mobile devices
US10307086B2 (en) * 2014-02-17 2019-06-04 Hong Kong Baptist University Gait measurement with 3-axes accelerometer/gyro in mobile devices
US9811311B2 (en) 2014-03-17 2017-11-07 Google Inc. Using ultrasound to improve IMU-based gesture detection
US9791940B1 (en) 2014-03-18 2017-10-17 Google Inc. Gesture onset detection on multiple devices
US9417704B1 (en) * 2014-03-18 2016-08-16 Google Inc. Gesture onset detection on multiple devices
US9563280B1 (en) 2014-03-18 2017-02-07 Google Inc. Gesture onset detection on multiple devices
US10048770B1 (en) 2014-03-18 2018-08-14 Google Inc. Gesture onset detection on multiple devices
US10228768B2 (en) 2014-03-25 2019-03-12 Analog Devices, Inc. Optical user interface
US10127370B2 (en) 2014-08-13 2018-11-13 Google Technology Holdings LLC Computing device chording authentication and control
US9740839B2 (en) 2014-08-13 2017-08-22 Google Technology Holdings LLC Computing device chording authentication and control
US9996109B2 (en) 2014-08-16 2018-06-12 Google Llc Identifying gestures using motion data
US10660039B1 (en) 2014-09-02 2020-05-19 Google Llc Adaptive output of indications of notification data
US11126270B2 (en) 2015-01-28 2021-09-21 Medtronic, Inc. Systems and methods for mitigating gesture input error
US10613637B2 (en) * 2015-01-28 2020-04-07 Medtronic, Inc. Systems and methods for mitigating gesture input error
US20160216769A1 (en) * 2015-01-28 2016-07-28 Medtronic, Inc. Systems and methods for mitigating gesture input error
US11347316B2 (en) 2015-01-28 2022-05-31 Medtronic, Inc. Systems and methods for mitigating gesture input error
EP3281089A4 (en) * 2015-07-31 2018-05-02 Samsung Electronics Co., Ltd. Smart device and method of operating the same
US10339078B2 (en) 2015-07-31 2019-07-02 Samsung Electronics Co., Ltd. Smart device and method of operating the same
US20170060966A1 (en) * 2015-08-26 2017-03-02 Quixey, Inc. Action Recommendation System For Focused Objects
EP3139248B1 (en) * 2015-09-04 2024-08-14 ams AG Method for gesture based human-machine interaction, portable electronic device and gesture based human-machine interface system
US10320977B2 (en) 2015-10-26 2019-06-11 At&T Intellectual Property I, L.P. Telephone user interface providing enhanced call blocking
US9654629B1 (en) * 2015-10-26 2017-05-16 At&T Intellectual Property I, L.P. Telephone user interface providing enhanced call blocking
US10802572B2 (en) 2017-02-02 2020-10-13 Stmicroelectronics, Inc. System and method of determining whether an electronic device is in contact with a human body
EP3358440A1 (en) * 2017-02-02 2018-08-08 STMicroelectronics Inc System and method of determining whether an electronic device is in contact with a human body
CN108415588A (en) * 2017-02-02 2018-08-17 意法半导体公司 Determine electronic equipment whether the system and method with human contact
CN108700938A (en) * 2017-02-16 2018-10-23 华为技术有限公司 It is a kind of detection electronic equipment close to human body method, apparatus and equipment
US11360585B2 (en) 2019-07-31 2022-06-14 Stmicroelectronics S.R.L. Gesture recognition system and method for a digital-pen-like device and corresponding digital-pen-like device
US11429604B2 (en) 2019-09-10 2022-08-30 Oracle International Corporation Techniques of heterogeneous hardware execution for SQL analytic queries for high volume data processing
US11348442B2 (en) * 2020-10-14 2022-05-31 12180502 Canada Inc. Hygiene detection devices and methods
US20220114878A1 (en) * 2020-10-14 2022-04-14 12180502 Canada Inc. Hygiene detection devices and methods

Also Published As

Publication number Publication date
CN102985897A (en) 2013-03-20
WO2012009214A2 (en) 2012-01-19
CN102985897B (en) 2016-10-05
TW201218023A (en) 2012-05-01
US20170220122A1 (en) 2017-08-03
WO2012009214A3 (en) 2012-03-29
US10353476B2 (en) 2019-07-16
US9535506B2 (en) 2017-01-03
TWI467418B (en) 2015-01-01
US20140191955A1 (en) 2014-07-10

Similar Documents

Publication Publication Date Title
US10353476B2 (en) Efficient gesture processing
CN107622770B (en) Voice wake-up method and device
CN108735209B (en) Wake-up word binding method, intelligent device and storage medium
US9443536B2 (en) Apparatus and method for detecting voice based on motion information
KR102405793B1 (en) Method for recognizing voice signal and electronic device supporting the same
US9268399B2 (en) Adaptive sensor sampling for power efficient context aware inferences
EP3109797B1 (en) Method for recognising handwriting on a physical surface
KR20220123747A (en) Joint audio-video facial animation system
US20140050354A1 (en) Automatic Gesture Recognition For A Sensor System
Raffa et al. Don't slow me down: Bringing energy efficiency to continuous gesture recognition
EP2699983A2 (en) Methods and apparatuses for facilitating gesture recognition
EP2815292A1 (en) Engagement-dependent gesture recognition
US20140195232A1 (en) Methods, systems, and circuits for text independent speaker recognition with automatic learning features
KR20150087253A (en) Sequential feature computation for power efficient classification
EP3092547A1 (en) System and method for controlling playback of media using gestures
Lu et al. Gesture on: Enabling always-on touch gestures for fast mobile access from the device standby mode
WO2018036023A1 (en) Text input method and device for smart watch
EP3147831A1 (en) Information processing device and information processing method
CN108319960A (en) Activity recognition system and method, equipment and storage medium based on probability graph model
CN111627422B (en) Voice acceleration detection method, device and equipment and readable storage medium
CN112632222B (en) Terminal equipment and method for determining data belonging field
Zhou et al. Pre-classification based hidden Markov model for quick and accurate gesture recognition using a finger-worn device
US20200396531A1 (en) System and method based in artificial intelligence to detect user interface control command of true wireless sound earbuds system on chip,and thereof
CN115344111A (en) Gesture interaction method, system and device
Marasović et al. User-dependent gesture recognition on Android handheld devices

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTEL CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:RAFFA, GIUSEPPE;NACHMAN, LAMA;LEE, JINWON;SIGNING DATES FROM 20100708 TO 20100709;REEL/FRAME:024712/0854

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: TAHOE RESEARCH, LTD., IRELAND

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTEL CORPORATION;REEL/FRAME:061175/0176

Effective date: 20220718