US20150012248A1 - Selecting Feature Types to Extract Based on Pre-Classification of Sensor Measurements - Google Patents

Selecting Feature Types to Extract Based on Pre-Classification of Sensor Measurements

Info

Publication number
US20150012248A1
Authority
US
United States
Prior art keywords
state
transition
features
stable
classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/321,707
Inventor
Deborah Meduna
Tom Waite
Dev RAJNARAYAN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sensor Platforms Inc
Original Assignee
Sensor Platforms Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sensor Platforms Inc filed Critical Sensor Platforms Inc
Priority to US14/321,707 priority Critical patent/US20150012248A1/en
Assigned to SENSOR PLATFORMS, INC. reassignment SENSOR PLATFORMS, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: WAITE, Tom, MEDUNA, Deborah, RAJNARAYAN, Dev
Publication of US20150012248A1 publication Critical patent/US20150012248A1/en
Assigned to SENSOR PLATFORMS, INC. reassignment SENSOR PLATFORMS, INC. CORRECTIVE ASSIGNMENT TO CORRECT THE ASSIGNOR'S NAME PREVIOUSLY RECORDED ON REEL 034089 FRAME 0748. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT OF ASSIGNOR'S INTEREST. Assignors: WAITE, Tom, RAJNARAYAN, Dev, VITUS (MEDUNA), DEBORAH

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1123 Discriminating type of movement, e.g. walking or running
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training

Definitions

  • the disclosed embodiments relate generally to determining a state associated with a device in accordance with a classification of sensor measurements associated with the device.
  • Devices have access to sensor measurements from one or more sensors. These sensor measurements can be used to determine information about states associated with the device such as a coupling state of the device to one or more entities, a state of one or more entities physically associated with the device and/or a state of an environment in which the device is located.
  • One approach to improving the efficiency of determining a state associated with a device is to use one or more pre-classifications of sensor measurements to determine which features to extract from sensor measurements available to the device.
  • In this way, the device can forgo extracting features that are not likely to be useful for determining a current state of the device, while extracting features that are likely to be useful for determining the current state of the device.
  • Some embodiments provide a method for determining a state associated with a device at a processing apparatus having one or more processors and memory storing one or more programs that, when executed by the one or more processors, cause the respective processing apparatus to perform the method.
  • the method includes receiving sensor measurements generated by one or more sensors of one or more devices, pre-classifying the sensor measurements as belonging to one of a plurality of pre-classifications, and selecting one or more feature types to extract from the sensor measurements based at least in part on the pre-classification of the sensor measurements.
  • the method further includes extracting features of the one or more selected feature types from the sensor measurements and determining a state of a respective device of the one or more devices in accordance with a classification of the sensor measurements determined based on the one or more features extracted from the sensor measurements.
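The following minimal Python sketch wires these claimed steps together (receive measurements, pre-classify, select feature types, extract, classify). The feature-type registry and the pre_classify/extractors/classify callables are hypothetical illustrations, not names from the patent:

```python
from typing import Callable, Dict, List, Sequence

# Hypothetical mapping from a pre-classification to the feature types worth
# extracting for it (e.g., stable-state vs. state-transition features).
FEATURE_TYPES: Dict[str, List[str]] = {
    "stable_state": ["spectral_energy", "hjorth_purity"],
    "state_transition": ["max_energy", "tilt_variation"],
}

def determine_state(
    samples: Sequence[float],
    pre_classify: Callable[[Sequence[float]], str],
    extractors: Dict[str, Callable[[Sequence[float]], float]],
    classify: Callable[[Dict[str, float]], str],
) -> str:
    """Receive measurements, pre-classify them, select feature types,
    extract only those features, and classify the device state."""
    pre_classification = pre_classify(samples)          # e.g., "stable_state"
    selected_types = FEATURE_TYPES[pre_classification]  # select feature types
    features = {t: extractors[t](samples) for t in selected_types}
    return classify(features)                           # state of the device
```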
  • a computer system (e.g., a navigation sensing device or a host computer system) includes one or more processors, memory, and one or more programs; the one or more programs are stored in the memory and configured to be executed by the one or more processors and the one or more programs include instructions for performing the operations of any of the methods described above.
  • In some embodiments, a non-transitory computer readable storage medium (e.g., for use by a navigation sensing device or a host computer system) stores one or more programs that include instructions for performing the operations of any of the methods described above.
  • FIG. 1 illustrates a system for using a navigation sensing device, according to some embodiments.
  • FIG. 2 is a block diagram illustrating an example navigation sensing device and auxiliary device, according to some embodiments.
  • FIGS. 3A-3E are block diagrams illustrating configurations of various components of the system including a navigation sensing device, according to some embodiments.
  • FIGS. 4A-4D are diagrams illustrating an example of determining a state associated with a device, according to some embodiments.
  • FIGS. 5A-5I are flow diagrams of a method for determining a state of a device, according to some embodiments.
  • FIG. 6 presents a block diagram of an example navigation sensing device, according to some embodiments.
  • FIG. 7 presents a block diagram of an example host computer system, according to some embodiments.
  • Navigation sensing devices (e.g., human interface devices or motion tracking devices) have a determinable multi-dimensional navigational state (e.g., one or more dimensions of displacement and/or one or more dimensions of rotation or attitude) and are used in a variety of applications, several of which are described below.
  • a navigation sensing device may be used as a motion tracking device to track changes in position and/or orientation of the device over time. These tracked changes can be used to map movements and/or provide other navigational state dependent services (e.g., location or orientation based alerts, etc.).
  • In a pedestrian dead reckoning (PDR) application, the navigation sensing device uses sensor measurements to determine both changes in the physical coupling between the navigation sensing device and the entity (e.g., a “device-to-entity orientation”) and changes in direction of motion of the entity.
  • such a navigation sensing device may be used as a multi-dimensional pointer to control a pointer (e.g., a cursor) on a display of a personal computer, television, gaming system, etc.
  • a navigation sensing device may be used to provide augmented reality views (e.g., by overlaying computer generated elements over a display of a view of the real world) that change in accordance with the navigational state of the navigation sensing device so as to match up with a view of the real world that is detected on a camera attached to the navigation sensing device.
  • such a navigation sensing device may be used to provide views of a virtual world (e.g., views of portions of a video game, computer generated simulation, etc.) that change in accordance with the navigational state of the navigation sensing device so as to match up with a virtual viewpoint of the user based on the orientation of the device.
  • orientation, attitude and rotation are used interchangeably to refer to the orientation of a device or object with respect to a frame of reference.
  • a single navigation sensing device is optionally capable of performing multiple different navigation sensing tasks described above either simultaneously or in sequence (e.g., switching between a multi-dimensional pointer mode and a pedestrian dead reckoning mode based on user input).
  • these applications rely on sensors that determine accurate estimates of the current state(s) associated with the device (e.g., a navigational state of the device, a user-device coupling state, a state of a user physically associated with the device and/or a state of an environment of the device).
  • FIG. 1 illustrates an example system 100 for using a navigation sensing device (e.g., a human interface device such as a multi-dimensional pointer) to manipulate a user interface.
  • In FIG. 1, an example Navigation Sensing Device 102 (hereinafter “Device 102”) is coupled to a Host Computer System 101 (hereinafter “Host 101”) through a wireless interface, according to some embodiments.
  • a User 103 moves Device 102 . These movements are detected by sensors in Device 102 , as described in greater detail below with reference to FIG. 2 .
  • Device 102 or Host 101 generates a navigational state of Device 102 based on sensor measurements from the sensors and transmits the navigational state to Host 101.
  • Device 102 generates sensor measurements and transmits the sensor measurements to Host 101 , for use in estimating a navigational state of Device 102 .
  • Host 101 generates current user interface data based on the navigational state of Device 102 and transmits the current user interface data to Display 104 (e.g., a display or a projector), which generates display data that is displayed to the user as the currently displayed User Interface 105 . While Device 102 , Host 101 and Display 104 are shown in FIG. 1 as being separate, in some embodiments the functions of one or more of these elements are combined or rearranged, as described in greater detail below with reference to FIGS. 3A-3E .
  • an Auxiliary Device 106 also generates sensor measurements from one or more sensors and transmits information based on the sensor measurements (e.g., raw sensor measurements, filtered signals generated based on the sensor measurements or other device state information such as a coupling state of Auxiliary Device 106 or a navigational state of Auxiliary Device 106 ) to Device 102 and/or Host 101 via wired or wireless interface, for use in determining a state of Device 102 .
  • Auxiliary Device 106 optionally has one or more of the features, components, or functions of Navigation Sensing Device 102 , but those details are not repeated here for brevity.
  • the user can use Device 102 to issue commands for modifying the user interface, control objects in the user interface, and/or position objects in the user interface by moving Device 102 so as to change its navigational state.
  • Device 102 is sensitive to six degrees of freedom: displacement along the x-axis, displacement along the y-axis, displacement along the z-axis, yaw, pitch, and roll.
  • Device 102 is a navigational state tracking device (e.g., a motion tracking device) that tracks changes in the navigational state of Device 102 over time but does not use these changes to directly update a user interface that is displayed to the user.
  • the updates in the navigational state can be recorded for later use by the user or transmitted to another user or can be used to track movement of the device and provide feedback to the user concerning their movement (e.g., directions to a particular location near the user based on an estimated location of the user).
  • Motion tracking devices that track changes in position without relying on external location information (e.g., Global Positioning System signals) are also sometimes referred to as pedestrian dead reckoning devices.
  • the wireless interface is selected from the group consisting of: a Wi-Fi interface, a Bluetooth interface, an infrared interface, an audio interface, a visible light interface, a radio frequency (RF) interface, and any combination of the aforementioned wireless interfaces.
  • the wireless interface is a unidirectional wireless interface from Device 102 to Host 101 .
  • the wireless interface is a bidirectional wireless interface.
  • bidirectional communication is used to perform handshaking and pairing operations.
  • a wired interface is used instead of or in addition to a wireless interface. As with the wireless interface, the wired interface is, optionally, a unidirectional or bidirectional wired interface.
  • data corresponding to a navigational state of Device 102 is transmitted from Device 102 and received and processed on Host 101 (e.g., by a host side device driver).
  • Host 101 uses this data to generate current user interface data (e.g., specifying a position of a cursor and/or other objects in a user interface) or tracking information.
  • Device 102 includes one or more Sensors 220 which produce corresponding sensor outputs, which can be used to determine a state associated with Device 102 (e.g., a navigational state of the device, a user-device coupling state, a state of a user physically associated with the device and/or a state of an environment of the device).
  • Sensor 220 - 1 is a multi-dimensional magnetometer generating multi-dimensional magnetometer measurements (e.g., a rotation measurement)
  • Sensor 220 - 2 is a multi-dimensional accelerometer generating multi-dimensional accelerometer measurements (e.g., a rotation and translation measurement)
  • Sensor 220 - 3 is a gyroscope generating measurements (e.g., either a rotational vector measurement or rotational rate vector measurement) corresponding to changes in orientation of the device.
  • Sensors 220 include one or more of gyroscopes, beacon sensors, inertial measurement units, temperature sensors, barometers, proximity sensors, single-dimensional accelerometers and multi-dimensional accelerometers instead of or in addition to the multi-dimensional magnetometer and multi-dimensional accelerometer and gyroscope described above.
  • Auxiliary Device 106 includes one or more Sensors 230 which produce corresponding sensor outputs, which can be used to determine a state associated with Auxiliary Device 106 (e.g., a navigational state of the device, a user-device coupling state, a state of a user physically associated with the device and/or a state of an environment of the device).
  • information corresponding to the sensor outputs of Sensors 230 of Auxiliary Device 106 is transmitted to Device 102 for use in determining a state of Device 102 .
  • information corresponding to the sensor outputs of Sensors 220 of Device 102 is transmitted to Auxiliary Device 106 for use in determining a state of Auxiliary Device 106 .
  • Device 102 is a phone and Auxiliary Device 106 is a Bluetooth headset that is paired with the phone, and the phone and the Bluetooth headset share information based on sensor measurements to more accurately determine a state of Device 102 and/or Auxiliary Device 106 .
  • two mobile phones near each other can be configured to share information about their environmental context and/or their position.
  • a wrist watch with an accelerometer can be configured to share accelerometer measurements and/or derived posture information with a mobile phone held by the user to improve posture estimates for the user.
  • Device 102 also includes one or more of: Buttons 207 , Power Supply/Battery 208 , Camera 214 and/or Display 216 (e.g., a display or projector).
  • Device 102 also includes one or more of the following additional user interface components: one or more processors, memory, a keypad, one or more thumb wheels, one or more light-emitting diodes (LEDs), an audio speaker, an audio microphone, a liquid crystal display (LCD), etc.
  • In some embodiments, the various components of Device 102 (e.g., Sensors 220, Buttons 207, Power Supply 208, Camera 214 and Display 216) are all enclosed in Housing 209 of Device 102.
  • Device 102 can use Sensors 220 to generate tracking information corresponding to changes in the navigational state of Device 102 and transmit the tracking information to Host 101 wirelessly, or store the tracking information for later transmission (e.g., via a wired or wireless data connection) to Host 101.
  • one or more processors of Device 102 perform one or more of the following operations: sampling Sensor Measurements 222 , at a respective sampling rate, produced by Sensors 220 ; processing sampled data to determine displacement; transmitting displacement information to Host 101 ; monitoring the battery voltage and alerting Host 101 when the charge of Battery 208 is low; monitoring other user input devices (e.g., keypads, buttons, etc.), if any, on Device 102 and, as appropriate, transmitting information identifying user input device events (e.g., button presses) to Host 101 ; continuously or periodically running background processes to maintain or update calibration of Sensors 220 ; providing feedback to the user as needed on the remote (e.g., via LEDs, etc.); and recognizing gestures performed by user movement of Device 102 .
  • FIGS. 3A-3E illustrate configurations of various components of the system for generating navigational state estimates for a navigation sensing device.
  • These components include Sensors 220, which provide sensor measurements that are used to determine a navigational state of Device 102; Measurement Processing Module 322 (e.g., a processing apparatus including one or more processors and memory), which processes the sensor measurements; and Display 104, which displays the currently displayed user interface to the user of Device 102 and/or information corresponding to movement of Device 102 over time.
  • these components can be distributed among any number of different devices.
  • Measurement Processing Module 322 (e.g., a processing apparatus including one or more processors and memory) is a component of the device including Sensors 220 . In some embodiments, Measurement Processing Module 322 (e.g., a processing apparatus including one or more processors and memory) is a component of a computer system that is distinct from the device including Sensors 220 .
  • a first portion of the functions of Measurement Processing Module 322 are performed by a first device (e.g., raw sensor data is converted into processed sensor data at Device 102 ) and a second portion of the functions of Measurement Processing Module 322 are performed by a second device (e.g., processed sensor data is used to generate a navigational state estimate for Device 102 at Host 101 ).
  • Sensors 220 , Measurement Processing Module 322 and Display 104 are distributed between three different devices (e.g., a navigation sensing device such as a multi-dimensional pointer, a set top box, and a television, respectively; or a motion tracking device, a backend motion processing server and a motion tracking client).
  • Sensors 220 are included in a first device (e.g., a multi-dimensional pointer or a pedestrian dead reckoning device), while the Measurement Processing Module 322 and Display 104 are included in a second device (e.g., a host with an integrated display).
  • Sensors 220 and Measurement Processing Module 322 are included in a first device, while Display 104 is included in a second device (e.g., a “smart” multi-dimensional pointer and a television respectively; or a motion tracking device such as a pedestrian dead reckoning device and a display for displaying information corresponding to changes in the movement of the motion tracking device over time, respectively).
  • Sensors 220 , Measurement Processing Module 322 and Display 104 are included in a single device (e.g., a mobile computing device, such as a smart phone, personal digital assistant, tablet computer, pedestrian dead reckoning device etc.).
  • In FIG. 3E, Sensors 220 and Display 104 are included in a first device (e.g., a game controller with a display/projector), while Measurement Processing Module 322 is included in a second device (e.g., a game console/server).
  • It should be understood that in the example shown in FIG. 3E, the first device will typically be a portable device (e.g., a smart phone or a pointing device) with limited processing power, while the second device is a device (e.g., a host computer system) with the capability to perform more complex processing operations, or to perform processing operations at greater speed; the computationally intensive calculations are thus offloaded from the portable device to a host device with greater processing power.
  • FIGS. 4A-4D include block diagrams illustrating an example of determining a state associated with a device, in accordance with some embodiments.
  • The implementation of determining a state associated with a device described below with reference to FIGS. 4A-4D is explained with reference to a particular example of determining a coupling state of a device (e.g., determining if and how a device is associated with an entity). However, it should be understood that the general principles described below are applicable to a variety of different states associated with a device (e.g., a navigational state of the device, a state of a user physically associated with the device and/or a state of an environment of the device).
  • FIG. 4A illustrates an overview of a method of determining probabilities of a state associated with the device based on raw sensor data.
  • raw sensor data is converted into filtered signals by one or more Sensor Data Filters 402 .
  • During a pre-classification stage, a pre-classification of the state is determined by Pre-Classifier 404 based on the filtered signals, so as to determine whether to pass the filtered signals to a set of stable-state modules or to a set of state-transition modules.
  • After the state has been pre-classified, if the pre-classification indicates that the device is likely in a stable state, Stable-State Feature Generator 406 generates a stable-state feature vector from the filtered signals and passes the stable-state feature vector to one or more Stable-State Classifiers 408 (described in greater detail below with reference to FIG. 4C), which provide estimations of a probability that the device is associated with different states in Markov Model 410 (described in greater detail below with reference to FIG. 4B).
  • Markov Model 410 combines the estimations from Stable-State Classifiers 408 with historical probabilities based on prior state estimates and probabilities of transitions between the states of Markov Model 410 . Markov Model 410 is subsequently used to produce state probabilities corresponding to a probability for a state associated with the device (e.g., whether the device is in a user's pocket or on a table).
  • After the state has been pre-classified, if the pre-classification indicates that the device is likely in a state transition, State-Transition Feature Generator 412 generates a state-transition feature vector from the filtered signals and passes the state-transition feature vector to one or more State-Transition Classifiers 414 (described in greater detail below with reference to FIG. 4D), which provide estimations of a probability of transitions between various states in Markov Model 410 (described in greater detail below with reference to FIG. 4B). Markov Model 410 uses the estimations from State-Transition Classifiers 414 to determine/adjust model transition probabilities for Markov Model 410. Markov Model 410 is subsequently used to produce state probabilities corresponding to a probability for a state associated with the device (e.g., assigning a probability that the device is in a user's pocket and a probability that the device is on a table).
  • In some embodiments, only one feature vector (e.g., a stable-state feature vector or a state-transition feature vector) is generated; in other embodiments, multiple feature vectors (e.g., multiple sets of stable-state features or multiple sets of state-transition features) are generated.
  • Pre-Classifier 404 selects multiple different stable-state feature vectors to be generated by Stable-State Classifiers 408 (e.g., Pre-Classifier 404 selects generation of a first set of stable state features to produce a first stable-state feature vector for use in identifying stable states of the respective device under “Walking” conditions and a second set of stable state features to produce a second stable-state feature vector for use in identifying stable states of the respective device under “Stationary” conditions).
  • the feature vectors are used by corresponding classifiers (e.g., stable-state classifiers or state-transition classifiers) to generate stable-state measurements or state-transition measurements that are provided to Markov Model 410 , which is used to produce state probabilities corresponding to a probability for a state associated with the device (e.g., whether the device is in a user's pocket, in the user's hand or on a table).
  • Markov Model 410 optionally provides this information to Pre-Classifier 404, and Pre-Classifier 404 uses this information to reduce the frequency with which measurement epochs (e.g., cycles of pre-classification, feature extraction and classification) are performed.
  • Information about a device coupling state can be used for a variety of purposes at the device. For example, an estimate of a device coupling state can improve power management (e.g., by enabling the device to enter a lower-power state when the user is not interacting with the device). As another example, an estimate of a device coupling state can enable the device to turn on/off other algorithms (e.g., if the device is off Body, and thus not physically associated with the user, it would be a waste of energy for the device to perform step counting for the user). In some embodiments, the classification of device coupling includes whether the device is on Body or off Body, as well as the specific location of the device in the case that it is physically associated with the user (e.g., in a pocket, in a bag, or in the user's hand).
  • Determinations about device coupling can be made by the device based on signatures present in small amplitude body motion as well as complex muscle tremor features that are distributed across X, Y and Z acceleration signals measured by the device. In some implementations, these signals are acquired at sampling rates of 40 Hz or greater.
  • Sensor Data Filters 402 take in three axes of raw acceleration data and generate filtered versions of the acceleration data to be used in both Pre-Classifier 404 and either Stable-State Feature Generator 406 or State-Transition Feature Generator 412 .
  • filtered signals used for user-device coupling are described in Table 1 below.
  • Table 1: Filter types and specifics.
    Low Pass: 0-2.5 Hz band. Uses a 51 tap FIR (finite impulse response) filter.
    High Pass: 1.5-20 Hz band. Uses a 51 tap FIR filter.
    Derivative Low Pass: Central difference derivative of the low pass signal.
    Envelope Derivative Low Pass: Uses a 31 tap Hilbert transform. The Hilbert transform produces a complex analytic signal, and taking the magnitude of the analytic signal produces the envelope.
    Envelope High Pass: Low pass filters the high pass signal using an 11 tap tent FIR filter.
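As a rough illustration, the Table 1 filter bank can be approximated with SciPy. The tap counts and bands below follow the table; the sampling rate (fs = 50 Hz) and the FFT-based Hilbert envelope (standing in for the 31 tap Hilbert FIR) are assumptions:

```python
import numpy as np
from scipy.signal import firwin, lfilter, hilbert

fs = 50.0  # Hz; the signals are described as sampled at 40 Hz or greater

def filter_bank(accel: np.ndarray) -> dict:
    tent = np.bartlett(11) / np.bartlett(11).sum()    # 11 tap tent FIR kernel
    low = lfilter(firwin(51, 2.5, fs=fs), 1.0, accel)                  # 0-2.5 Hz
    high = lfilter(firwin(51, [1.5, 20.0], fs=fs, pass_zero=False),
                   1.0, accel)                                         # 1.5-20 Hz
    d_low = np.gradient(low) * fs             # central difference derivative
    env_d_low = np.abs(hilbert(d_low))        # magnitude of the analytic signal
    env_high = lfilter(tent, 1.0, np.abs(high))  # rectify, then tent-smooth (assumed)
    return {"low": low, "high": high, "d_low": d_low,
            "env_d_low": env_d_low, "env_high": env_high}
```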
  • Pre-Classifier 404 is responsible for determining which types of features to generate (e.g., stable-state features or state-transition features), and passing an appropriate segment of sensor data (e.g., at least a subset of the filtered signals) to these feature generators (e.g., Stable-State Feature Generator 406 or State-Transition Feature Generator 412 ).
  • the determination of segment type is performed based on a combination of device motion context as well as based on features of the filtered signals generated by Sensor Data Filters 402 .
  • Pre-Classifier 404 serves as a resource allocation manager. For example, Pre-Classifier 404 allocates resources by specifying that one type of feature set is produced at a time (e.g., either producing stable-state features or state-transition features but not both). Additionally, in a situation where Pre-Classifier 404 determines that the device is in a stable-state (e.g., based on information from Markov Model 410 ), Pre-Classifier 404 manages the rate at which the device iterates through measurement epochs (e.g., a rate at which sets of filtered signals are sent to Stable-State Feature Generator 406 ).
  • If the model state is stable and known with high confidence, the rate of the measurement epochs is decreased.
  • If a transition just occurred or if the model state is uncertain (e.g., the most likely model state has less than a predefined amount of certainty, or the difference between the probabilities of the two most likely model states is below a predefined threshold), the rate of the measurement epochs is increased.
  • The provision of filtered signals to one of the feature generators (e.g., Stable-State Feature Generator 406 or State-Transition Feature Generator 412) determines whether or not the device is working to generate features from the filtered signals.
  • Pre-Classifier 404 determines whether to provide the filtered signals to Stable-State Feature Generator 406 or State-Transition Feature Generator 412 based on finding corresponding peaks in the low and high pass envelope signals indicative of sudden and/or sustained changes in motion of the device.
  • The classifiers (e.g., Stable-State Classifiers 408 and/or State-Transition Classifiers 414) receive signal features. These features are extracted from either a state-transition or stable-state segment of low and high pass filtered signals (e.g., the filtered signals generated by Sensor Data Filters 402) provided by Pre-Classifier 404.
  • The features used by Stable-State Classifiers 408 for stable-state classification differ from the features used by State-Transition Classifiers 414 for state-transition classification; however, both use the same underlying filtered signals produced by Sensor Data Filter(s) 402.
  • Stable-State Classifiers 408 use one or more of the Stable-State Features described in Tables 2a-2c, below, while State-Transition Classifiers 414 use one or more of the State-Transition Features described in Table 3, below. It should be understood that the features described in Tables 2a-2c and 3 are not an exhaustive list but are merely examples of features that are used in some embodiments.
  • Hjorth Purity (high pass): Calculated from the square of the power of the first derivative of the high pass signal, scaled by the product of the variance of the high pass signal and the power of the second derivative of the high pass signal.
  • Spectral Energy (high pass): Normalized power of the spectrum of the high pass signal in normalized spectral bins, 4 Hz in width, between 0 and 20 Hz. Normalization of the power is based on the training subject distribution, and bin normalization is based on the power of the subject's given time segment.
  • Walk/Stride Frequency: Walk and stride frequencies are estimated by finding two harmonic peaks in the spectrum of the vertical signal within a predefined frequency range (below 2.25 Hz).
  • Range and Variance in the User's Vertical and Horizontal Directions: Range and variance of signals transformed into a pseudo-user frame over a single stride.
  • the term “Hjorth mobility” used in Table 2b corresponds to the square root of a comparison between (1) the variance of the rate of change of movement in a respective direction (e.g., the y direction) and (2) the variance of the amount of movement in the respective direction (e.g., using Equation 1, below)
  • the term “Hjorth purity” used in Table 2 corresponds to the square root of a comparison between (1) the square of the variance of the rate of change of movement in a respective direction (e.g., the y direction) and (2) the product of the variance of the amount of movement in the respective direction and the variance of the acceleration in the respective direction (e.g., as shown in Equation 2, below)
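For concreteness, the two definitions above can be written out as the following sketch reconstructions of Equations 1 and 2 (the rendered equations are not reproduced in this text; y denotes the filtered signal in the respective direction, and Var denotes variance over the analysis window):

        \text{Hjorth mobility}(y) = \sqrt{ \operatorname{Var}(\dot{y}) \,/\, \operatorname{Var}(y) }    (Equation 1)

        \text{Hjorth purity}(y) = \sqrt{ \operatorname{Var}(\dot{y})^{2} \,/\, \bigl( \operatorname{Var}(y)\, \operatorname{Var}(\ddot{y}) \bigr) }    (Equation 2)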
  • Max Energy and Time of Max Energy: Peak amplitude and normalized time to peak of the envelope of the high pass signal.
  • Spectral Energy: Dominant modes of the spectrum of the high pass signal.
  • Tilt Variation Extrema and Associated Time: Signed peak amplitude and normalized time to peak of the derivative of the low pass signal.
  • FIG. 4B illustrates an example of a probabilistic model which defines the specific set of states associated with the device.
  • the probabilistic model is Markov Model 410 . While four states (e.g., “On Table,” “In Hand at Side,” “In Hand at Front,” and “In Pocket”) are shown in FIG. 4B , it should be understood that, in principle Markov Model 410 could have any number of states.
  • the probabilistic model imposes logical constraints on the transition between the states, preventing infeasible events such as going from “On Table” to “In Pocket” without first going through “In Hand at Side” (e.g., the transition probability P(T 4 ) from X 1 to X 4 is set to zero).
  • the same set of states and transitions are used when the device is in a stable state and when the device is in a state transition.
  • output from Stable-State Classifiers 408 is used to update the state probabilities of Markov Model 410 , optionally, without updating the model transition probabilities (e.g., as described in greater detail below with reference to FIG. 4C ).
  • Output from the State-Transition Classifiers 414 is used to update the model transition probabilities, which are changed from P to P′ (e.g., as described in greater detail below with reference to FIG. 4D).
  • the use of a probabilistic model for determining device state increases the robustness of the overall classification and allows for improved management of resource utilization.
  • the probabilistic model is, optionally, used to adapt the update rate of the underlying classifiers based on the current confidence level (probability) of one or more of the states (e.g., each state).
  • The update rate of the stable-state measurements (e.g., the frequency of measurement epochs) is, optionally, decreased until a transition measurement occurs, at which point the update rate increases again.
  • Markov Model 410 has two different modes of operation, a stable-state update mode of operation for use when Pre-Classifier 404 does not detect a transition between states and a state-transition update mode of operation for use when Pre-Classifier 404 detects a transition between states.
  • In the stable-state update mode, a Stable-State Markov Model Transition Matrix 420 is used.
  • In the state-transition update mode, a State-Transition Markov Model Transition Matrix 422 is used.
  • a stable-state update of Markov Model 410 is invoked by an updated Stable-State Classifier 408 output.
  • the update consists of two parts, a motion update (e.g., equation 3, below) and a measurement update (e.g., equation 4, below):
  • Equation 3 updates the model states, where \tilde{P}(X_{i,t}) is the model-predicted probability of state X_i at time t, calculated by adding up the probabilities that the state transitioned from other states X_j to state X_i:

        \tilde{P}(X_{i,t}) = \sum_{j} P(X_{i,t} \mid X_{j,t-1}) \, P(X_{j,t-1})    (Equation 3)

  • The probability that state X_j transitioned to state X_i is based on a state-transition matrix P(X_{i,t} \mid X_{j,t-1}) (e.g., Stable-State Markov Model Transition Matrix 420 in FIG. 4B) combined with the prior state probability P(X_{j,t-1}).
  • a combined probability is determined based on the model-predicted probability and a measurement probability based on the Stable-State Classifier 408 outputs (e.g., using equation 4).
  • Equation 4 computes a combined probability of model states, where P(X_{i,t}) is the combined probability of state X_i at time t, calculated by combining the model-predicted probability \tilde{P}(X_{i,t}) with a measurement probability P(X_{i,t} \mid y_t) based on the Stable-State Classifier 408 outputs, and where \alpha is a scaling parameter:

        P(X_{i,t}) = \alpha \, \tilde{P}(X_{i,t}) \, P(X_{i,t} \mid y_t)    (Equation 4)

  • In some embodiments, the state transition probabilities, P(X_{i,t} \mid X_{j,t-1}), are deterministic and defined based on a given model.
  • This component of the model allows for diffusion of the probabilities over time (e.g., over sequential measurement epochs). In other words, in some situations, without any observations (e.g., contributions from the measurement probability P(X_{i,t} \mid y_t)), the model state probabilities gradually diffuse toward a steady-state distribution.
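A minimal NumPy sketch of this two-part stable-state update, assuming a four-state model; the transition matrix values are illustrative, not taken from the patent:

```python
import numpy as np

# Illustrative 4-state transition matrix; column j holds P(X_i,t | X_j,t-1).
T = np.full((4, 4), 0.02) + np.eye(4) * 0.92   # each column sums to 1

def stable_state_update(p_prev: np.ndarray, p_meas: np.ndarray) -> np.ndarray:
    p_pred = T @ p_prev      # Equation 3: motion update (probability diffusion)
    p = p_pred * p_meas      # Equation 4: fold in the classifier measurement
    return p / p.sum()       # the scaling parameter normalizes the result

p = stable_state_update(np.array([0.7, 0.1, 0.1, 0.1]),
                        np.array([0.2, 0.6, 0.1, 0.1]))
```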
  • the state-transition update of Markov Model 410 is invoked by an updated State-Transition Classifier 414 output.
  • the update involves first computing transition probabilities for P′ based on State-Transition Classifier 414 outputs and prior model state probabilities (e.g., as shown in equation 5, below), and then updating the model state probability accordingly (e.g., as shown in equation 6, below). It is effectively a motion update with a modified state transition matrix built from the outputs of the transition classifiers.
  • Equation 5 computes a modified transition matrix, where P′(X_{i,t} \mid X_{j,t-1}) is the updated probability of a transition from state X_j to state X_i at time t, built from the measurement transition probabilities P(T_{k,t} \mid y_t) and the transition definition matrix:

        P′(X_{i,t} \mid X_{j,t-1}) = \sum_{k} P(X_{i,t} \mid T_{k,t}) \, P(T_{k,t} \mid y_t)    (Equation 5)
  • the updated transition probability is the same as the measurement transition probabilities computed by State-Transition Classifiers 414 .
  • In some embodiments, the measurement transition probabilities are modified by a transition definition matrix P(X_{i,t} \mid T_{k,t}), which maps each modeled transition T_k to its destination state.
  • In some embodiments, the elements of the transition definition matrix are 1's and 0's, which encode the arrows shown in Markov Model 410 in FIG. 4B.
  • For example, P(OnTable \mid OnTableFromPocket) = 1 and P(InPocket \mid OnTableFromPocket) = 0 (for a ToTable transition, the probability that the next state is On Table is 100%, whereas the probability that the next state is anything else is 0%).
  • In embodiments where there are more complex dependencies between transitions and states, the transition definition matrix can have elements with values between 0 and 1 that encode these more complex dependencies.
  • probabilities of the states of Markov Model 410 are updated using the modified state transition matrix (e.g., using equation 6) to determine updated probabilities for the model states of Markov Model 410 .
  • Equation 6 updates the model states, where P(X_{i,t}) is the model-predicted probability of state X_i at time t, calculated by adding up the probabilities that the state transitioned from other states X_j to state X_i:

        P(X_{i,t}) = \sum_{j} P′(X_{i,t} \mid X_{j,t-1}) \, P(X_{j,t-1})    (Equation 6)

  • The probability that state X_j transitioned to state X_i is based on a measurement-based state transition matrix P′(X_{i,t} \mid X_{j,t-1}), which is combined with the probabilities P(X_{j,t-1}) of states X_j being a current state associated with the device to generate updated model-predicted probabilities for the various model states.
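A companion sketch of the state-transition update (Equations 5 and 6), assuming a 0/1 transition definition matrix D that maps each modeled transition to its destination state; the matrix contents and shapes are illustrative:

```python
import numpy as np

# D[i, k] = P(X_i,t | T_k,t): 1 when modeled transition T_k ends in state i.
# Columns (illustrative): T1 null (stay On Table), T2 Table->Hand-S, T3 Table->Hand-F.
D = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]], dtype=float)

def transition_update(p_prev: np.ndarray, p_trans_meas: np.ndarray) -> np.ndarray:
    """p_trans_meas[k, j] holds P(T_k,t | y_t) for transitions out of state j."""
    P_mod = D @ p_trans_meas   # Equation 5: modified matrix P'(X_i,t | X_j,t-1)
    p = P_mod @ p_prev         # Equation 6: propagate prior state probabilities
    return p / p.sum()
```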
  • For example, when a transition from On Table to In Hand at Front is detected with high confidence, P′(T_3), also referred to as P′(X_{3,t} \mid X_{1,t-1}), will be increased to approximately 1, and any probability that the device was in the On Table state at the prior time step will flow to the probability that the device is In Hand at Front at the next time step.
  • the error correction benefits of Markov Model 410 are illustrated, as a single erroneously identified transition (e.g., a transition that corresponds to a transition from a state that is not a current state of the device) will have very little impact on the overall model state probabilities, while a correctly identified transition (e.g., a transition that corresponds to a transition from a state that is a current state of the device) will enable the device to quickly switch from a prior state to a next state.
  • FIG. 4C illustrates a set of example Stable-State Classifiers 408 in accordance with some embodiments.
  • Stable-State Classifiers 408 are implemented as a voting machine.
  • an On Body vs Off Body Classifier 430 receives a Stable-State Feature vector and determines a probability that the device is On Body or Off Body.
  • Off Body Classifier(s) 432 determine a probability that, if the device is Off Body, the device is In Trunk or On Table.
  • In Hand vs. In Container Classifier(s) 434 determine a probability that, if the device is On Body, the device is In Hand or In a Container.
  • In some embodiments, corresponding classifiers (e.g., In Hand Classifier(s) 436 and In Container Classifier(s) 438) further divide probabilities so as to produce a set of measurement probabilities, P(X_{i,t} \mid y_t), for the states of the probabilistic model.
  • The probability of a current state of the device is divided between “In Trunk,” “On Table,” “In Hand at Side,” “In Hand at Front,” “In Pocket,” and “In Bag.” While the above six states are provided for ease of illustration, it should be understood that, in principle, Stable-State Classifiers 408 could determine measurement probabilities for any number of states of a probabilistic model. Additionally, while a specific voting machine is illustrated in FIG. 4C, any of a variety of different types of voting machines (e.g., support vector machines, neural networks, decision trees) could be used depending on the situation and implementation. The resulting overall output of the classifier voting machines is interpreted as probabilities, indicating the likelihood of the given state.
  • By using a group of underlying classification algorithms rather than any single one, the output of the voting machine classifier is more robust and can more accurately reflect the confidence of the classification.
  • In some embodiments, a feedback infinite impulse response filter on the voting machine output is also employed to suppress spurious misclassifications.
  • FIG. 4D illustrates a set of example State-Transition Classifiers 414 in accordance with some embodiments.
  • State-Transition Classifiers 414 are implemented in a hierarchy.
  • a REAL vs NULL Classifier 450 receives a State-Transition Feature vector and determines a probability that the device experienced a transition (e.g., determining whether Pre-Classifier 404 accidentally identified a transition or identified a transition that is not modeled in the probabilistic model). The probability that the transition was a null transition is assigned to all of the null transitions in the matrix.
  • State-Transition Classifiers 414 determine conditional probabilities using a plurality of conditional classifiers. From Table Hand-F vs Down Classifier 452 determines a probability, if the state is On Table, that the state transitioned to Hand-F (In Hand at Front) or some other state.
  • From Table Hand-S vs Pocket Classifier 454 determines a probability, if the state is On Table, that the state transitioned to Hand-S (In Hand at Side) or In Pocket.
  • From In Hand at Side Classifier 456 determines a probability, if the state is In Hand at Side, that the state transitioned to On Table, Hand-F (In Hand at Front) or In Pocket.
  • From In Hand at Front Classifier 458 determines a probability, if the state is In Hand at Front, that the state transitioned to On Table, Hand-S (In Hand at Side) or In Pocket.
  • State-Transition Classifiers 414 assign probabilities so as to produce a set of measurement transition probabilities, P(T_{k,t} \mid y_t), for the transitions of the probabilistic model.
  • Measurement transition probabilities are calculated for null transitions for “On Table,” “In Hand at Side,” “In Hand at Front,” and “In Pocket,” and measurement transition probabilities are calculated for transitions away from the “On Table,” “In Hand at Side,” and “In Hand at Front” states. While examples of thirteen transitions are provided above for ease of illustration, it should be understood that, in principle, State-Transition Classifiers 414 could determine measurement transition probabilities for any number of transitions between states of the probabilistic model. Additionally, while a specific voting machine is illustrated in FIG. 4D, any of a variety of different types of voting machines (e.g., support vector machines, neural networks, decision trees) could be used depending on the situation.
  • The resulting overall output of the classifier voting machines is interpreted as transition probabilities, indicating the likelihood of transitions between states. Additionally, in some implementations, the results of the voting machine are normalized, weighted and/or combined to produce an overall transition probability result and confidence metric. By using a group of underlying classification algorithms rather than any single one, the output of the voting machine classifier is more robust and can more accurately reflect the confidence of the transition probability. In some embodiments, a feedback infinite impulse response filter on the voting machine output is also employed to suppress spurious misclassification of transitions.
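A single-pole feedback filter is one minimal reading of the infinite impulse response smoothing described above; the feedback coefficient is an assumed value, not taken from the patent:

```python
def smooth_vote(prev: float, new: float, feedback: float = 0.8) -> float:
    """One-pole IIR: damps single-epoch jumps in the voting-machine output,
    suppressing spurious misclassifications."""
    return feedback * prev + (1.0 - feedback) * new
```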
  • While the examples of the system and method of determining a state associated with a device described above relate primarily to device coupling states, it should be understood that the state associated with the device could be virtually any state of a device, state of an environment of a device, or state of an entity associated with a device.
  • the framework described above is general enough to be applicable to a number of different types of state classification that include measurable stable-states and state-transitions between those stable-states.
  • the framework above could also be used in a situation where the device is associated with an entity (e.g., user) and the states are body posture states of the user.
  • In this example, the steady states include one or more of sitting, standing, walking and running, and the transitions refer to changes between those states (sitDown, standUp, startWalking, stopWalking). While the overall framework is the same as that discussed above, at least some of the specific details would be slightly different, as described below.
  • Pre-Classifier 404 would make determinations as to whether to operate in a stable-state mode or a state-transition mode based on filtered signals corresponding to YZ accelerations detected by sensors of the device. For example, the device low-pass filters the YZ norm of the raw accelerations, and median filters that norm. In some embodiments, the transient is obtained by subtracting the median filtered value from the low-pass filtered one.
  • Pre-Classifier 404 seeks half a second of relative inactivity, followed by a period of elevated transient accelerations, followed by another half second of “silence.” In some embodiments, the length of the period of elevated activity is required to fall within reasonable bounds (e.g., from 1.75 to 3 seconds) for common activities such as sitting/standing transitions.
  • inactivity or silence detection is determined by a threshold on the transient YZ acceleration norm.
  • An example of a stillness threshold for filtered accelerations is approximately 0.5 m/s².
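Putting the preceding paragraphs together, the following is a sketch of the posture pre-classifier's silence/transient/silence test; the sampling rate and the way the elevated period's length is measured are assumptions:

```python
import numpy as np

FS = 50          # sampling rate in Hz (assumed)
STILL = 0.5      # stillness threshold on the transient YZ norm, m/s^2

def is_candidate_posture_transition(transient_yz_norm: np.ndarray) -> bool:
    """Half a second of silence, 1.75-3 s of elevated activity, then silence."""
    half_sec = FS // 2
    active = transient_yz_norm > STILL
    if active[:half_sec].any() or active[-half_sec:].any():
        return False                      # leading/trailing window not quiet
    duration_s = active.sum() / FS        # crude length of the elevated period
    return 1.75 <= duration_s <= 3.0
```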
  • Table 4 includes a list of example user-posture features for use as stable-state and/or state-transition features for the user-posture implementation.
  • Sit/Stand Gravity Work Signature Fit: The “shape” of the coefficients corresponding to the first four gravity work modes. This is computed by dividing the associated four coefficients by the sum of the absolute values of those four coefficients.
  • Sit/Stand Balance Lean Signature Fit: The “shape” of the coefficients corresponding to the first four balance lean modes. This is computed by dividing the associated four coefficients by the sum of the absolute values of those four coefficients.
  • one set of state information (e.g., states of a first Markov Model) is optionally used to provide context information that influences a second set of state information (e.g., states of a second Markov Model).
  • One example is using user-posture state information to condition user-device coupling classifiers.
  • the features described for device-body coupling above with reference to Table 4 are relevant when the user is either sitting or standing.
  • However, when the user is walking, a different set of features can be used that takes advantage of the relationship between the motion of the phone while walking and the phone's location.
  • In some embodiments, the features used to distinguish whether a device is in a user's pocket or a user's hand while the user is walking are different from (e.g., include one or more features not included in) the features used to distinguish whether a device is in a user's pocket or a user's hand while the user is stationary (e.g., features based on detected tremor and/or device orientation).
  • the stable-state classifiers can be modified to include one or more sets of conditional classifiers for one or more respective classifications (e.g., a set including an “In Hand Classifier (v1)” for not-walking and an In Hand Classifier (v2) for walking for the “In Hand at Side” vs “In Hand at Front” classification).
  • In some embodiments, only one of the classifiers in each set is used, depending on which differentiating condition is most likely (e.g., walking or not-walking).
  • In other embodiments, the outputs of the classifiers in a set of two or more classifiers for a respective classification are combined by a weighted sum, where the weight is based on the probability of the differentiating condition (e.g., walking or not-walking) determined by a separate posture context algorithm, as sketched below.
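A sketch of that weighted-sum combination with hypothetical classifier callables; only the blending by the walking probability is taken from the text:

```python
from typing import Callable, Sequence

def in_hand_probability(
    walking_features: Sequence[float],
    stationary_features: Sequence[float],
    clf_walking: Callable[[Sequence[float]], float],     # "In Hand Classifier (v2)"
    clf_stationary: Callable[[Sequence[float]], float],  # "In Hand Classifier (v1)"
    p_walking: float,  # from a separate posture context algorithm
) -> float:
    """Blend two conditional classifiers by the probability of the condition."""
    return (p_walking * clf_walking(walking_features)
            + (1.0 - p_walking) * clf_stationary(stationary_features))
```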
  • Sensor Data Filter(s) 402 receive signals from the acceleration channels and break them into low and high pass bands using 51 tap FIR filters, where the low pass band is 0-3.5 Hz and the high pass band is 4.5-20 Hz. Sensor Data Filter(s) 402 further generate an envelope of the high pass signal and low pass filter the envelope. Sensor Data Filter(s) 402 optionally resample both the low pass and envelope signals to 12.5 Hz. In some embodiments, in addition to pre-filtered low and high pass filtered traces, Sensor Data Filter(s) 402 extract a narrow-band walk signal and provide some of the features for phone location determination.
  • Pre-Classifier 404 acts as a resource manager by managing how frequently the features (e.g., the Stable-State Feature Vector or State-Transition Feature Vector), and thus the classifiers (e.g., Stable-State Classifiers 408 and/or State-Transition Classifiers 414) and Markov Model 410, are updated.
  • A stable-state does not necessarily imply a motionless state; rather, a stable-state is a state in which the current state of the device has been unchanged for a period of time. For example, if the device has been sitting on a car seat during a long drive, the device will see forward motion (or even changes in motion as the device speeds up, slows down and turns). However, Pre-Classifier 404 can still go into a low power state where measurement epochs occur infrequently once it has determined that the device is off Body.
  • Pre-Classifier 404 for the user-device coupling implementation is a simple peak detection algorithm, which can be run with very minimal processing resources. Additional power management can be achieved by turning off some of the classifiers when contextual information indicates that the classifiers are not providing useful information. For example, if the device is off Body, one or more classifiers, including the posture detection classifiers and, optionally, Pre-Classifier 404 , are turned off, as in the sketch below.
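  • A minimal sketch of such a low-cost pre-classifier and the contextual gating described above; the peak test, the thresholds and the return labels are illustrative assumptions rather than values from the specification:

      import numpy as np

      def preclassify(envelope, peak_threshold=0.5):
          # Flag a state transition when the smoothed high-frequency envelope
          # shows a prominent interior peak; otherwise treat the epoch as
          # stable-state. argmax plus one comparison keeps the cost minimal.
          i = int(np.argmax(envelope))
          if 0 < i < len(envelope) - 1 and envelope[i] > peak_threshold:
              return "state-transition"
          return "stable-state"

      def active_classifiers(p_off_body):
          # Contextual power management: a confidently off-Body device yields
          # no useful posture information, so those classifiers are disabled.
          return [] if p_off_body > 0.95 else ["posture", "coupling"]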
  • In some embodiments, multiple feature vectors (e.g., multiple sets of stable-state features or multiple sets of state-transition features) are generated (e.g., if there is some uncertainty as to the current conditions under which the device is operating). For example, in some situations there is a set of features that are used to determine different device coupling states under a set of conditions where a user is walking, and there is a different set of features that are used to determine different device coupling states under a set of conditions where a user is stationary (e.g., standing or sitting).
  • In some embodiments, the classifiers identify multiple probability components that correspond to the probability that the user is in a respective posture for a plurality of different transport states.
  • In some embodiments, the total measurement probability P(X_t | y_t) from Equation 4 is computed as the sum of these probability components (separate device coupling probabilities P(X_t, B_i | y_t), one per transport state B_i), as shown in Equation 7:

      P(X_t | y_t) = Σ_i P(X_t, B_i | y_t)    (Equation 7)

  • In some embodiments, when vehicle context is taken into account, the measurement probability is given by:

      P(X_t | y_t) = P(X_t | InVehicle, y_t) P(InVehicle) + P(X_t | NotInVehicle, y_t) (1 − P(InVehicle))

    where P(NotInVehicle) = 1 − P(InVehicle). The conditional probabilities P(X_t | InVehicle, y_t) and P(X_t | NotInVehicle, y_t) are computed using sets of conditional device coupling classifiers, and P(InVehicle) is determined separately (e.g., by a separate context algorithm).
  • In some embodiments, when a respective set of features is preclassified as having a probability above a predefined threshold (e.g., 0, 5, 10, 15, 30 or 50 percent likely, or some other reasonable threshold), Pre-Classifier 404 triggers a feature extraction and classification path corresponding to the respective set of features.
  • For example, when the pre-classifier triggers generation of both a walking stable-state feature set and a stationary stable-state feature set, the device generates feature sets for use in determining two probabilities, P(X_t, Walking | NotInVehicle, y_t) and P(X_t, Stationary | NotInVehicle, y_t). The stationary component P(X_t, Stationary | NotInVehicle, y_t) is, optionally, computed by summing the total probability that the user is walking and subtracting that probability from 1. For example, as shown in Equations 11-12:

      P(Walking | NotInVehicle, y_t) = Σ_i P(X_i,t, Walking | NotInVehicle, y_t)    (Equation 11)
      P(X_t, Stationary | NotInVehicle, y_t) = P(X_t | Stationary, NotInVehicle, y_t) (1 − P(Walking | NotInVehicle, y_t))    (Equation 12)

  • As another example, when the pre-classifier triggers generation of an in-vehicle stable state, the device generates feature sets for determining three probabilities: the probability that the device is in a particular coupling state while the user is walking in a vehicle, P(X_t, Walking | InVehicle, y_t); the probability that the device is in a particular coupling state while the user is standing or sitting in a vehicle, P(X_t, Standing∨Sitting | InVehicle, y_t); and the probability that the user is in a vehicle, P(InVehicle) = P(B_4 ∨ B_5 ∨ B_6). P(X_t, Standing∨Sitting | InVehicle, y_t) is, optionally, computed by summing the total probability that the user is walking and subtracting that probability from 1, analogously to Equations 11-12, and the components are combined as shown in Equation 13, below:

      P(X_t | y_t) = P(InVehicle) [P(X_t, Walking | InVehicle, y_t) + P(X_t, Standing∨Sitting | InVehicle, y_t)] + (1 − P(InVehicle)) P(X_t | NotInVehicle, y_t)    (Equation 13)
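  • A compact numerical sketch of this total-probability combination follows; the function and argument names are illustrative only, and each conditional probability is assumed to come from the corresponding conditional classifier path triggered by the pre-classifier:

      def measurement_probability(p_in_vehicle,
                                  p_x_walking_in_vehicle,
                                  p_x_standing_sitting_in_vehicle,
                                  p_x_not_in_vehicle):
          # Combine conditional device-coupling probabilities into the total
          # measurement probability P(X_t | y_t), as in Equation 13 above.
          in_vehicle_component = (p_x_walking_in_vehicle
                                  + p_x_standing_sitting_in_vehicle)
          return (p_in_vehicle * in_vehicle_component
                  + (1.0 - p_in_vehicle) * p_x_not_in_vehicle)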
  • generating different sets of features for a plurality of different conditions under which a device may be operating, and combining probability estimations for the different sets of features as described above, provides a more accurate estimation of the state associated with the device (e.g., posture or device coupling state) than attempting to classify the state associated with the device without taking into account the possible different conditions under which the device is operating.
  • the system and method for determining a state associated with a device described herein have a number of advantages over conventional methods; an important advantage is that the method efficiently and effectively recovers from misclassifications (e.g., missed state-transition classifications and/or spurious stable-state misclassifications).
  • if State-Transition Classifiers 414 misclassify a transition with high confidence, or Pre-Classifier 404 completely misses a transition, it is possible that the overall model will end up in the “wrong state” for a period of time.
  • one way the system and method described herein mitigate this effect is through the use of priors from Markov Model 410 . For example, if Markov Model 410 is very confident that the phone is On Table, but the detected transition starts from In Hand (e.g., In Hand at Side to Pocket, or In Hand at Front to Table), that transition will have a low weight in the Markov Model.
  • using both stable-state classifiers and state-transition classifiers in the state determination process accounts for the fact that no classifier is perfect. Even in the case that the device misses a transition or misclassifies a transition, leading to a wrong state in Markov Model 410 , the model can still recover and reach the correct state classification using the stable-state classifiers. In some embodiments, the latency of this recovery period is tied to both the degree of misclassification and the confidence of the stable-state measurements.
  • Markov Model 410 can recover very quickly (within 1 to 2 stable-state measurements); thus, in many situations it is better for the model to have an “unknown” (low confidence) classification than a high confidence misclassification, because the device can recover from an “unknown” (low confidence) classification more quickly.
  • in some embodiments, the outputs of the classifiers are weighted. Rather than simply returning a 1 or 0 from an individual classifier, the classifier outputs are converted to probabilities based on the strength of the classification. For a support vector machine model, the strength is determined based on the distance of the features from the margin. For a multi-layer perceptron model, the strength is determined based on the magnitude of the output neurons as well as the relative strength of the different neurons.
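  • As an illustration of these conversions, one common realization is logistic squashing of the SVM margin distance (as in Platt scaling, with a fitted scale) and a softmax over the MLP output neurons; the specification does not fix these exact formulas:

      import numpy as np

      def svm_margin_to_probability(margin_distance, scale=1.0):
          # Larger signed distance from the margin -> probability closer to 1;
          # the scale would normally be fitted on held-out data (Platt scaling).
          return 1.0 / (1.0 + np.exp(-scale * margin_distance))

      def mlp_outputs_to_probabilities(output_neurons):
          # Softmax: reflects both the magnitude of each output neuron and
          # its strength relative to the other neurons.
          z = np.asarray(output_neurons, dtype=float)
          e = np.exp(z - z.max())  # subtract the max for numerical stability
          return e / e.sum()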
  • in some embodiments, multiple classifier models (e.g., support vector machines, multi-layer perceptrons, decision trees) are used concurrently for the same classification problem, and their results are combined to produce the overall classification.
  • Markov Model 410 incorporates prior state information so that even if the voting machine (e.g., Stable-State Classifiers 408 and/or State-Transition Classifiers 414 ) produces a strong misclassification, this is moderated at Markov Model 410 by the prior state probabilities.
  • For example, if the model is highly confident that the previous state was On Table, it will put less weight on a strong In Pocket classification. However, typically with more than one strong classification, the model will start to produce predicted probabilities of the states associated with the device that are in agreement with the classifiers. As such, the system and method described herein (and further explained below with reference to method 500 ) provide accurate classifications, efficient and quick recovery from misclassifications, and low power usage, and thus are particularly useful in mobile applications where quick recovery from misclassifications and low power usage are particularly important.
  • FIGS. 5A-5I illustrate a method 500 for determining a state associated with a device, in accordance with some embodiments.
  • Method 500 is, optionally, governed by instructions that are stored in a non-transitory computer readable storage medium and that are executed by one or more processors of one or more computer systems (e.g., Device 102 , FIG. 6 or Host 101 , FIG. 7 ).
  • each of the operations shown in FIGS. 5A-5I typically corresponds to instructions stored in a computer memory or non-transitory computer readable storage medium (e.g., Memory 1110 of Device 102 in FIG. 6 or Memory 1210 of Host 101 in FIG. 7 ).
  • the computer readable storage medium optionally (and typically) includes a magnetic or optical disk storage device, solid state storage devices such as Flash memory, or other non-volatile memory device or devices.
  • the computer readable instructions stored on the computer readable storage medium typically include one or more of: source code, assembly language code, object code, or other instruction format that is interpreted or executed by one or more processors. In various embodiments, some operations in method 500 are combined and/or the order of some operations is changed from the order shown in FIGS. 5A-5I .
  • the processing apparatus has one or more processors and memory storing one or more programs that, when executed by the one or more processors, cause the respective processing apparatus to perform the method.
  • the processing apparatus is a component of Device 102 (e.g., the processing apparatus includes the one or more CPU(s) 1102 in FIG. 6 ).
  • the processing apparatus is separate from Device 102 (e.g., the processing apparatus includes the one or more CPU(s) 1202 in FIG. 7 ).
  • the processing apparatus receives ( 502 ) sensor measurements generated by one or more sensors of one or more devices (e.g., one or more sensors 220 of Navigation Sensing Device 102 and/or one or more sensors 230 of Auxiliary Device 106 ).
  • the one or more sensors include ( 504 ) a respective sensor of the respective device.
  • the processing apparatus pre-classifies ( 506 ) the sensor measurements as belonging to one of a plurality of pre-classifications (e.g., coupling pre-classifications such as stable-state or state-transition). For example, as described above with reference to FIGS. 4A-4D , Pre-Classifier 404 determines whether to provide filtered signals to Stable-State Feature Generator 406 or to State-Transition Feature Generator 412 based on whether the pre-classification indicates that the filtered signals correspond to a stable-state or a state-transition.
  • the plurality of pre-classifications include ( 508 ) a first pre-classification corresponding to a plurality of transition feature types (e.g., state-transition features) associated with identifying a transition between two different states of the respective device; a second pre-classification corresponding to a first subset of a plurality of device-state feature types (e.g., stable-state features) associated with identifying a state of the respective device; and/or a third pre-classification corresponding to a second subset of the plurality of device-state feature types (e.g., stable-state features), where the second subset of device-state feature types is different from the first subset of device-state feature types.
  • the first subset of device-state feature types enables identification of a device state of the respective device from a first subset of states of the respective device (e.g., device states corresponding to motion of the respective device) and the second subset of device-state feature types enables identification of a device state from a second subset of states of the respective device (e.g., stationary device states of the respective device).
  • In some embodiments, the processing apparatus has a set of state-transition classifiers (e.g., 414 in FIG. 4A ) and two subsets of stable-state classifiers (e.g., 408 in FIG. 4A ), and Pre-Classifier 404 selects one of three different pathways (e.g., state-transition classifiers, device-in-motion stable-state classifiers, and device-stationary stable-state classifiers), as in the sketch below.
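  • A sketch of this three-pathway selection; the pathway callables stand in for the feature generators and classifier subsets and are not named this way in the specification:

      def route_filtered_signals(pre_classification, filtered_signals,
                                 transition_path, motion_stable_path,
                                 stationary_stable_path):
          # Dispatch the filtered signals to exactly one feature-extraction
          # and classification pathway based on the pre-classification.
          if pre_classification == "state-transition":
              return transition_path(filtered_signals)
          if pre_classification == "stable-in-motion":
              return motion_stable_path(filtered_signals)
          return stationary_stable_path(filtered_signals)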
  • the processing apparatus obtains ( 510 ) context information indicative of a current context of the respective device (e.g., the processing apparatus retrieves or determines/generates context information), and pre-classifies ( 512 ) the sensor measurements based at least in part on the current context of the respective device. For example, if there is a high probability that the respective device is Off Body, then the processing apparatus would turn off all subsequent posture-type feature generation, as the processing apparatus will not typically be able to detect the posture of the user if the device is not physically associated with the user.
  • the processing apparatus can conserve energy by ceasing, at least temporarily, to extract features other than those that are helpful in determining whether or not the respective device has been removed from the trunk of the car.
  • the current context of the respective device is determined based at least in part on system signals. For example, a system signal indicating that the respective device is connected to a charger optionally prompts the processing apparatus to forgo generating features related to user body posture.
  • the current context of the respective device is determined based on stored information about a current user of the respective device (e.g., some classifiers are turned on, turned off, or modified if a user is male or is over 200 lbs).
  • pre-classifying the sensor measurements includes ( 514 ) extracting features of one or more pre-classification feature types from the sensor measurements, and extracting features of the pre-classification feature types from the sensor measurements is more resource efficient than extracting features of the one or more selected feature types from the sensor measurements (e.g., the pre-classification features are generated using low-power algorithms to provide a rough estimate of the general type of behavior currently exhibited by the respective device). For example, extracting the same quantity of features of the pre-classification feature types would take a smaller amount of CPU resources than extracting a similar quantity of features of the selected feature types.
  • extracting features of the pre-classification feature types enables the processing apparatus to forgo extracting features of one or more unselected feature types, thereby reducing the CPU usage and power usage due to extracting features from the sensor measurements and conserving battery life if the respective device is battery operated.
  • the pre-classification feature types include peak detection on frequency filtered sensor measurements.
  • the pre-classification feature types include threshold detection on frequency filtered sensor measurements (e.g., the “Derivative Low Pass” signal in Table 1).
  • the pre-classification feature types include peak detection on envelope filtered sensor measurements (e.g., the “Envelope Low Pass” signal and the “Envelope High Pass” signal in Table 1).
  • the pre-classification feature types include threshold detection on envelope filtered sensor measurements (e.g., the “Envelope Low Pass” signal and the “Envelope High Pass” signal in Table 1).
  • the processing apparatus selects ( 516 ) one or more feature types to extract from the sensor measurements based at least in part on the pre-classification of the sensor measurements (e.g., the processing apparatus determines whether to extract stable-state features or transition features). For example, as described above with respect to FIGS. 4A-4D , Pre-Classifier 404 determines whether to transmit the filtered signals to Stable-State Feature Generator 406 or State-Transition Feature Generator 412 . In some embodiments, the processing apparatus selects ( 518 ) between a first set of feature types and a second set of feature types. In some implementations, the first set of feature types is ( 520 ) different from the second set of feature types.
  • the first set of feature types includes ( 522 ) a respective feature type from the second set of feature types. In some implementations, the first set of feature types includes ( 524 ) a respective feature type not included in the second set of feature types. In some implementations, the first set of feature types and the second set of feature types are ( 526 ) mutually exclusive.
  • the first set of feature types are feature types that enable ( 528 ) classification of a coupling state of the respective device (e.g., a Stable-State Feature Vector for processing by Stable-State Classifiers 408 as described in greater detail above with reference to FIGS. 4A-4D ).
  • the first set of feature types include device orientation angle of the respective device (e.g., the “Tilt Angle” feature in Table 2c).
  • the first set of feature types include device orientation variation of the respective device (e.g., the “Tilt Variation and High Frequency Variation” feature in Table 2b).
  • the first set of feature types include correlation of movement of the respective device along two or more axes (e.g., the “Coordinated Movement” features in Table 2b).
  • the first set of feature types include bandwidth of a sensor measurement signal (e.g., the “Signal Bandwidth” feature in Table 2b).
  • the first set of feature types include variability in bandwidth of a sensor measurement signal (e.g., the “Variability in Signal Bandwidth” feature in Table 2b).
  • the first set of feature types include spectral energy of a sensor measurement signal (e.g., the “Spectral Energy” feature in Table 2a).
  • the second set of feature types are feature types that enable ( 530 ) classification of a transition between two different coupling states of the respective device (e.g., a State-Transition Feature Vector for processing by State-Transition Classifiers 412 as described in greater detail above with reference to FIGS. 4A-4D ).
  • the second set of feature types include device orientation angle of the respective device (e.g., the “Tilt Angle and Tilt Variation” feature in Table 3).
  • the second set of feature types include device orientation variation of the respective device (e.g., the “Tilt Angle and Tilt Variation” feature in Table 3).
  • the second set of feature types include mutual information between movement of the respective device along two or more axes (e.g., the “Coordinated Motion (high pass, mutual information) and Coordinated Motion (low pass, mutual information)” features in Table 3).
  • the second set of feature types include correlation of movement of the respective device along two or more axes (e.g., the “Coordinated Motion (high pass, correlation) and Coordinated Motion (low pass, correlation)” features in Table 3).
  • the second set of feature types include maximum energy of a sensor measurement signal (e.g., the “Max Energy and Time of Max Energy” feature in Table 3).
  • the second set of feature types include spectral energy of a sensor measurement signal (e.g., the “Spectral Energy” feature in Table 3). In some embodiments, the second set of feature types include device orientation variation extrema of the respective device (e.g., the “Tilt Variation Extrema and Associated Time” feature in Table 3).
  • the first set of feature types includes ( 532 ) features adapted for determining a state of the respective device while the respective device is stationary and the second set of feature types includes features adapted for determining a state of the respective device while the respective device is in motion.
  • the feature selection process performed by the pre-classifier includes not just selecting between stable or transition features, but also selecting between different types of stable features or transition features. For example, if the user is walking, the processing apparatus generates a different set of “stable” features for device coupling than if the user is standing still. In some implementations, determining which set of features to use is based in part on other context information.
  • the processing apparatus performs different operations based on whether the first sensor measurements correspond to a first pre-classification or a second pre-classification (e.g., whether the Filtered Signals correspond to a Stable-State or a State-Transition, as described above with reference to FIGS. 4A-4D ).
  • in accordance with a determination that the sensor measurements correspond to the first pre-classification, the processing apparatus selects ( 536 ) a first set of feature types as the one or more selected feature types and, optionally, forgoes ( 538 ) extracting features of one or more of the second set of feature types while extracting features of the first set of feature types.
  • in accordance with a determination that the sensor measurements correspond to the second pre-classification, the processing apparatus selects ( 542 ) a second set of feature types as the one or more selected feature types (different from the first set of feature types) and, optionally, forgoes ( 544 ) extracting features of one or more of the first set of feature types while extracting features of the second set of feature types.
  • the processing apparatus does not extract both stable-state and transition features at the same time, which reduces the processing and energy requirements of the processing apparatus, thereby conserving energy.
  • the processing apparatus schedules ( 546 ) extraction of one or more features.
  • the processing apparatus schedules ( 548 ) extraction of features of a first subset of a plurality of feature types, where the first subset of feature types includes ( 550 ) the one or more feature types selected based on the pre-classification.
  • the processing apparatus also schedules ( 552 ) extraction of features of a second subset of the plurality of feature types, where the second subset of feature types includes ( 554 ) feature types other than the one or more feature types selected based on the pre-classification.
  • the extraction of features of the second subset of feature types is scheduled to occur after ( 556 ) the extraction of features of the first subset of feature types. In some embodiments, the extraction of features of the second subset of feature types is ( 558 ) subject to cancellation.
  • the processing apparatus extracts ( 560 ) features of the one or more selected feature types from the sensor measurements.
  • the one or more selected feature types includes ( 562 ) a respective feature type that provides an approximation of frequency bandwidth information for a respective sensor-measurement signal that corresponds to sensor measurements collected over time, and extracting features of the one or more selected feature types from the sensor measurements includes extracting features of the respective feature type from the respective sensor-measurement signal in the time domain without converting the respective sensor-measurement signal into the frequency-domain.
  • the “Variability in Signal Bandwidth” and “Signal Bandwidth” features described with reference to Table 2b provide information about the sensor-measurement signal in the time domain without converting the respective sensor-measurement signal into the frequency-domain.
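  • One standard time-domain stand-in for bandwidth is the mean-crossing rate, which for a zero-mean signal tracks its spectral content (Rice's formula) without any FFT. The estimators below are illustrative assumptions; the specification does not state that these exact formulas are used:

      import numpy as np

      def signal_bandwidth_estimate(x, fs):
          # Crossings of the mean per second, halved, approximate the
          # dominant frequency of the signal in Hz.
          x = np.asarray(x, dtype=float) - np.mean(x)
          crossings = np.count_nonzero(np.diff(np.signbit(x)))
          return crossings * fs / (2.0 * len(x))

      def bandwidth_variability(x, fs, window=32):
          # Spread of the bandwidth estimate across fixed sub-windows.
          estimates = [signal_bandwidth_estimate(x[i:i + window], fs)
                       for i in range(0, len(x) - window + 1, window)]
          return float(np.std(estimates))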
  • the processing apparatus extracts the features of the one or more selected feature types by extracting ( 564 ) features of a first subset of a plurality of feature types, where the first subset includes the one or more feature types selected based on the pre-classification. After extracting the features of the first subset, the processing apparatus determines ( 566 ) whether the features extracted from the first subset of feature types are consistent with the pre-classification. In accordance with a determination that the features extracted from the first subset of feature types are not ( 568 ) consistent with the pre-classification, the processing apparatus extracts ( 569 ) features of a second subset of feature types that includes feature types other than the one or more feature types selected based on the pre-classification.
  • in accordance with a determination that the features extracted from the first subset of feature types are consistent with the pre-classification, the processing apparatus forgoes ( 571 ) extracting features of the second subset of feature types.
  • in some circumstances, features of the second subset of feature types start to be extracted before extraction of the second subset of feature types is cancelled.
  • the features of the first subset of feature types and the features of the second subset of feature types are features to be extracted from sensor measurements from a same respective measurement epoch.
  • in some embodiments, instead of Pre-Classifier 404 making a final determination as to whether to extract either stable-state features or state-transition features, Pre-Classifier 404 schedules the extraction of both stable-state features and state-transition features, scheduling first the extraction of the type of features that are more likely to be useful based on the pre-classification of the filtered signals (e.g., by placing instructions to extract the features in a work queue of tasks assigned to one or more processors for execution).
  • when the pre-classification is determined to be correct, extraction of features of the second subset of feature types is, optionally, cancelled (e.g., removed from the work queue) by the processing apparatus.
  • the pre-classification is determined to be correct when the features of the first subset of feature types can be successfully used (e.g., by Stable-State Classifiers 408 or State-Transition Classifiers 414 ) to generate information (e.g., transition probabilities or state probabilities for updating Markov Model 410 ) for determining a state of the device.
  • for example, when the sensor measurements are pre-classified as corresponding to a state-transition, the processing apparatus optionally schedules extraction of state-transition features followed by extraction of stable-state features.
  • if the state-transition features indicate that the device is undergoing a transition, the processing apparatus optionally cancels extraction of the stable-state features; if the state-transition features indicate that the device is not undergoing a transition, the processing apparatus optionally continues with extraction of the stable-state features and, optionally, uses the stable-state features to attempt to identify a recognizable state of the device.
  • similarly, when the sensor measurements are pre-classified as corresponding to a stable-state, the processing apparatus optionally schedules extraction of stable-state features followed by extraction of state-transition features.
  • if the stable-state features indicate that the device is in a recognizable state, the processing apparatus optionally cancels extraction of the state-transition features; if the stable-state features indicate that the device is not in a recognizable state, the processing apparatus optionally continues with extraction of the state-transition features and, optionally, uses the state-transition features to attempt to identify a recognizable transition.
  • An advantage of scheduling extraction of both state-transition features and stable-state features is that, when the processing apparatus mistakenly pre-classifies a state of the device and thus requests extraction of an incorrect type of features, there is a shorter delay to get the correct type of features than if the processing apparatus were to schedule extraction of the correct type of features after the error was identified.
  • the computational efficiency (and energy efficiency) of the system can be preserved by cancelling extraction of features that are not needed when the pre-classification is determined to be correct based on previously extracted features (e.g., the features of the first subset of feature types), as in the sketch below.
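  • The speculative scheduling scheme above can be sketched as follows; the work queue, the extractor callables and the consistency check are illustrative placeholders rather than elements of the specification:

      from collections import deque

      def run_epoch(pre_classification, extract_likely, extract_other,
                    consistent_with):
          # Queue both extraction tasks, the pre-classified type first.
          work_queue = deque([extract_likely, extract_other])
          features = work_queue.popleft()()
          if consistent_with(pre_classification, features):
              work_queue.clear()        # cancel the speculative second task
              return features
          # Pre-classification was wrong: the other task is already queued,
          # so the correct feature type arrives with a shorter delay.
          return work_queue.popleft()()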
  • the processing apparatus determines ( 572 ) a state of a respective device of the one or more devices in accordance with a classification of the sensor measurements determined based on the one or more features extracted from the sensor measurements (e.g., using Markov Model 410 described in greater detail above with reference to FIGS. 4A-4D ).
  • the state of the respective device is a sensor-interpretation state that is used to interpret sensor measurements from sensors of the respective device.
  • states for a plurality of the one or more devices are determined by the processing apparatus.
  • the state of the respective device is ( 573 ) a physical property of the respective device (e.g., a coupling state of the respective device or a navigational state of the respective device).
  • the state of the respective device is ( 574 ) a state of an environment of the respective device (e.g., a state of an entity associated with the respective device or a state of an environment surrounding the respective device, such as whether the respective device is in a car or an elevator).
  • the state of the respective device is ( 575 ) a state of an entity associated with the respective device (e.g., a posture of a user of the device).
  • the one or more devices include ( 576 ) a first device (e.g., Navigation Sensing Device 102 ) and a second device (e.g., Auxiliary Device 106 ).
  • the two devices are, optionally: a smartphone with a plurality of sensors and a Bluetooth headset with one or more sensors; two smartphones that are in communication with each other; or a smartphone in communication with a remote computer such as a set top box or home entertainment system.
  • the processing apparatus determines ( 578 ) a state of the first device in accordance with a classification of the sensor measurements determined based on the one or more features extracted from the sensor measurements and determines ( 579 ) a state of the second device in accordance with the classification of the sensor measurements determined based on the one or more features extracted from the sensor measurements.
  • a state of the first device is determined based at least in part on sensor measurements from the second device (e.g., in addition to one or more sensor measurements from the first device).
  • the second device provides the processing apparatus with processed signal features.
  • the second device provides the processing apparatus with raw sensor data.
  • the state of the second device is determined ( 580 ) based at least in part on the state of the first device and/or based on sensor measurements from one or more sensors of the first device.
  • pre-classifying the sensor measurements includes ( 581 ) identifying the sensor measurements as belonging to a respective pre-classification that is associated with multiple feature sets, including a first set of features for use in identifying states of the respective device under a first set of conditions and a second set of features for use in identifying states of the respective device under a second set of conditions different from the first set of conditions; extracting the features includes extracting the first set of features and extracting the second set of features; and determining the state of the respective device includes determining the state of the respective device in accordance with the first set of features and the second set of features.
  • the first set of conditions is “InVehicle” and the second set of conditions is “NotInVehicle.” In some embodiments, the first set of conditions is “Walking” and the second set of conditions is “Stationary.” While the examples described herein specifically identify two feature sets associated with a respective pre-classification, in some situations a larger number of feature sets (e.g., 3, 4, 5 or more feature sets) are associated with a respective pre-classification (e.g., where the pre-classification corresponds to a larger number of possible conditions under which the device is operating).
  • a pre-classification indicates that a device is in a stable state but indicates that it is uncertain whether the device is associated with a walking user or a stationary user and thus the pre-classification will trigger the generation of a set of “Walking” stable-state features and a set of “Stationary” stable-state features (e.g., as described in greater detail above with reference to Equations 7-13).
  • a pre-classification indicates that a device is in a state transition but indicates that it is uncertain whether the device is associated with a walking user or a stationary user and thus the pre-classification will trigger the generation of a set of “Walking” state-transition features and a set of “Stationary” state-transition features.
  • pre-classifying the sensor measurements includes identifying the sensor measurements as belonging to one of a plurality of coupling pre-classifications and determining the state of the respective device includes determining ( 582 ) a coupling state of the respective device in accordance with a coupling classification of the sensor measurements determined based on the one or more features extracted from the sensor measurements.
  • at least some of the features that are used to pre-classify the sensor measurements are extracted from the same sensor measurements from which the features that are used to classify the state of the device are extracted. For example, as described in greater detail above with reference to FIGS. 4A-4D , the filtered signals are used both by Pre-Classifier 404 and by the feature generators that feed the classifiers (e.g., Stable-State Feature Generator 406 or State-Transition Feature Generator 412 , depending on the pre-classification of the filtered signals).
  • the coupling state of the respective device includes ( 583 ) a plurality of coupling-stable states corresponding to a coupling between the respective device and an entity (e.g., an owner/user of the respective device) associated with the respective device.
  • Markov Model 410 includes at least four coupling states, “On Table,” “In Hand at Side,” “In Hand at Front,” and “In Pocket.”
  • the coupling state of the respective device includes ( 584 ) a plurality of coupling-transition states corresponding to a transition between two of the coupling-stable states.
  • Markov Model 410 includes at least 12 transitions between different coupling states.
  • determining the state of the respective device includes updating ( 586 ) a Markov Model, where the Markov Model includes: a plurality of model states corresponding to states of the respective device (e.g., coupling-stable states of the respective device); and a plurality of sets of transition probabilities between the plurality of model states.
  • for example, the Markov Model shown in FIG. 4B includes at least four model states (X 1 , X 2 , X 3 and X 4 ) and at least sixteen transitions (e.g., T 1 , T 2 , T 3 , T 4 , T 5 , T 6 , T 7 , T 8 , T 9 , T 10 , T 11 , T 12 , T 13 , T 14 , T 15 , and T 16 ).
  • the processing apparatus updates ( 587 ) one or more model states in the Markov Model in accordance with a first set of transition probabilities (e.g., a first state transition matrix for the Markov Model corresponding to the first pre-classification).
  • for example, when Pre-Classifier 404 determines that the state associated with the device is a stable-state, stable-state features are produced and used to generate a measurement probability P(X_i,t | y_t) for updating one or more model states in the Markov Model.
  • the processing apparatus updates ( 588 ) one or more model states in the Markov Model in accordance with a second set of transition probabilities different from the first set of transition probabilities (e.g., a second state transition matrix for the Markov Model corresponding to the second pre-classification).
  • the second set of transition probabilities is derived, at least in part, from the features selected by the pre-classifier.
  • conversely, when Pre-Classifier 404 determines that the state associated with the device is in transition, state-transition features are produced and used to generate a measurement-based state transition matrix (e.g., State-Transition Markov Model Transition Matrix 422 in FIG. 4B ) for use in place of a stable-state transition matrix (e.g., Stable-State Markov Model Transition Matrix 420 in FIG. 4B ).
  • a model update is performed using the measurement-based state transition matrix (e.g., in accordance with equation 6).
  • the same model states are modeled in the Markov Model for both a stable-state mode and a state-transition mode, but the transition probabilities of the Markov Model are changed in accordance with the pre-classification of the respective device, as in the sketch below.
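  • A minimal sketch of this mode-dependent update; the row-stochastic matrix orientation and the normalization step are assumptions, and the specification points to Equation 6 for the actual update:

      import numpy as np

      def update_markov_model(prior, measurement_likelihood,
                              stable_matrix, transition_matrix,
                              pre_classification):
          # Same model states in both modes; only the transition matrix is
          # swapped based on the pre-classification (cf. matrices 420/422).
          T = (transition_matrix if pre_classification == "state-transition"
               else stable_matrix)
          predicted = prior @ T                  # propagate the prior
          posterior = predicted * measurement_likelihood
          return posterior / posterior.sum()     # renormalize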
  • the processing apparatus adjusts ( 590 ) operation of the respective device in accordance with the determined state of the respective device. For example, the processing apparatus turns a display on when a device is removed from a pocket or bag or turns a display off when the respective device is placed in a pocket or bag. As another example, the processing apparatus recalibrates sensors of the respective device when the respective is picked up from a table; locks the respective device when the respective device is placed on a table, in a pocket, or in a bag; and/or unlocks the respective device when the respective device is picked up from a table, removed from a pocket, or removed from a bag. As another example, the processing apparatus enables motion tracking features when the respective device is physically associated with the user and/or disables motion tracking features when the respective device is placed on a stationary object such as a table.
  • the processing apparatus repeats ( 592 ) the receiving sensor measurements, pre-classifying sensor measurements, selecting one or more feature types, extracting features and determining the state of the respective device for a plurality of measurement epochs.
  • a state of the respective device determined in a prior measurement epoch is used ( 593 ) as a factor in determining a state in a current measurement epoch.
  • a frequency of the measurement epochs is variable and is determined ( 594 ) at least in part based on a coupling pre-classification of the sensor measurements (e.g., stable-state or transition), a probability of a coupling state of the respective device determined in a prior measurement epoch (e.g., a degree of certainty of a coupling state), and/or a stability of a current coupling state of the respective device (e.g., length of time that a coupling state has remained at high probability).
  • the processing apparatus increases an amount of time between measurement epochs when there is greater certainty and/or stability in the state associated with the device.
  • the processing apparatus decreases an amount of time between measurement epochs when there is less certainty and/or stability in the state associated with the device.
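  • As a simple illustration of such an adaptive schedule (the thresholds, growth factor and cap below are assumptions, not values from the specification):

      def next_epoch_interval(state_probability, stable_epochs,
                              base_interval=1.0, max_interval=30.0):
          # Uncertain state: sample frequently.
          if state_probability < 0.7:
              return base_interval
          # Confident and stable: back off exponentially with the number of
          # consecutive high-probability epochs, up to a cap (seconds).
          return min(max_interval, base_interval * 2 ** min(stable_epochs, 5))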
  • the plurality of epochs include ( 596 ) a first epoch and a second epoch, and during the first epoch: the sensor measurements are determined to correspond to a coupling-stable pre-classification and a first set of feature types corresponding to the coupling-stable pre-classification are selected as the one or more selected feature types.
  • during the second epoch: the sensor measurements are determined to correspond to a coupling-transition pre-classification and a second set of feature types corresponding to the coupling-transition pre-classification are selected as the one or more selected feature types, where the first set of feature types is different from the second set of feature types.
  • the processing apparatus selects between different features to generate based upon conditions at the device during successive measurement epochs.
  • for example, when the state of the device is stable, stable-state features are generated, and when the state of the device is in transition, state-transition features are generated, thereby conserving energy and processing power.
  • it should be understood that the particular order in which the operations in FIGS. 5A-5I have been described is merely exemplary and is not intended to indicate that the described order is the only order in which the operations could be performed.
  • One of ordinary skill in the art would recognize various ways to reorder the operations described herein.
  • FIG. 6 is a block diagram of Navigation Sensing Device 102 (herein “Device 102 ”).
  • Device 102 typically includes one or more processing units (CPUs) 1102 , one or more network or other Communications Interfaces 1104 (e.g., a wireless communication interface, as described above with reference to FIG. 1 ), Memory 1110 , and one or more Communication Buses 1109 for interconnecting these components.
  • Communications Interfaces 1104 include a transmitter for transmitting information, such as accelerometer and magnetometer measurements, and/or the computed navigational state of Device 102 , and/or other information to Host 101 .
  • Communication buses 1109 typically include circuitry (sometimes called a chipset) that interconnects and controls communications between system components.
  • Device 102 optionally includes user interface 1105 comprising Display 1106 (e.g., Display 104 in FIG. 1 ) and Input Devices 1107 (e.g., keypads, buttons, etc.).
  • Memory 1110 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices; and may include non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices.
  • Memory 1110 optionally includes one or more storage devices remotely located from the CPU(s) 1102 .
  • Memory 1110 or alternately the non-volatile memory device(s) within Memory 1110 , comprises a non-transitory computer readable storage medium.
  • Memory 1110 stores the following programs, modules and data structures, or a subset thereof:
  • Device 102 does not include a Gesture Determination Module 1154 , because gesture determination is performed by Host 101 .
  • Device 102 also does not include State Determination Module 1120 , Navigational State Estimator 1140 and User Interface Module because Device 102 transmits Sensor Measurements 1114 and, optionally, data representing Button Presses 1116 to a Host 101 at which a navigational state of Device 102 is determined.
  • Each of the above identified elements may be stored in one or more of the previously mentioned memory devices, and each of the above identified programs or modules corresponds to a set of instructions for performing a function described above.
  • the set of instructions can be executed by one or more processors (e.g., CPUs 1102 ).
  • the above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules may be combined or otherwise rearranged in various embodiments.
  • Memory 1110 may store a subset of the modules and data structures identified above.
  • Memory 1110 may store additional modules and data structures not described above.
  • although FIG. 6 shows a “Navigation Sensing Device 102 ,” FIG. 6 is intended more as a functional description of the various features which may be present in a navigation sensing device. In practice, and as recognized by those of ordinary skill in the art, items shown separately could be combined and some items could be separated.
  • FIG. 7 is a block diagram of Host Computer System 101 (herein “Host 101 ”).
  • Host 101 typically includes one or more processing units (CPUs) 1202 , one or more network or other Communications Interfaces 1204 (e.g., any of the wireless interfaces described above with reference to FIG. 1 ), Memory 1210 , and one or more Communication Buses 1209 for interconnecting these components.
  • Communication Interfaces 1204 include a receiver for receiving information, such as accelerometer and magnetometer measurements, and/or the computed attitude of a navigation sensing device (e.g., Device 102 ), and/or other information from Device 102 .
  • Communication Buses 1209 optionally include circuitry (sometimes called a chipset) that interconnects and controls communications between system components.
  • Host 101 optionally includes a User Interface 1205 comprising a Display 1206 (e.g., Display 104 in FIG. 1 ) and Input Devices 1207 (e.g., a navigation sensing device such as a multi-dimensional pointer, a mouse, a keyboard, a trackpad, a trackball, a keypad, buttons, etc.).
  • a User Interface 1205 comprising a Display 1206 (e.g., Display 104 in FIG. 1 ) and Input Devices 1207 (e.g., a navigation sensing device such as a multi-dimensional pointer, a mouse, a keyboard, a trackpad, a trackball, a keypad, buttons, etc.).
  • a navigation sensing device such as a multi-dimensional pointer, a mouse, a keyboard, a trackpad, a trackball, a keypad, buttons
  • Memory 1210 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices; and may include non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 1210 optionally includes one or more storage devices remotely located from the CPU(s) 1202 . Memory 1210 , or alternately the non-volatile memory device(s) within Memory 1210 , comprises a non-transitory computer readable storage medium. In some embodiments, Memory 1210 stores the following programs, modules and data structures, or a subset thereof:
  • Host 101 does not store data representing Sensor Measurements 1214 , because sensor measurements of Device 102 are processed at Device 102 , which sends data representing Navigational State Estimate 1250 to Host 101 .
  • Device 102 sends data representing Sensor Measurements 1214 to Host 101 , in which case the modules for processing that data are present in Host 101 .
  • Each of the above identified elements may be stored in one or more of the previously mentioned memory devices, and each of the above identified programs or modules corresponds to a set of instructions for performing a function described above.
  • the set of instructions can be executed by one or more processors (e.g., CPUs 1202 ).
  • the above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules may be combined or otherwise rearranged in various embodiments.
  • the actual number of processors and software modules used to implement Host 101 and how features are allocated among them will vary from one implementation to another.
  • Memory 1210 may store a subset of the modules and data structures identified above. Furthermore, Memory 1210 may store additional modules and data structures not described above.
  • method 500 described above is optionally governed by instructions that are stored in a non-transitory computer readable storage medium and that are executed by one or more processors of Device 102 or Host 101 . As noted above, in some embodiments these methods may be performed in part on Device 102 and in part on Host 101 , or on a single integrated system which performs all the necessary operations. Each of the operations shown in FIGS. 5A-5I optionally corresponds to instructions stored in a computer memory or computer readable storage medium of Device 102 or Host 101 .
  • the computer readable storage medium optionally includes a magnetic or optical disk storage device, solid state storage devices such as Flash memory, or other non-volatile memory device or devices.
  • the computer readable instructions stored on the computer readable storage medium are in source code, assembly language code, object code, or other instruction format that is interpreted or executed by one or more processors.

Abstract

A processing apparatus including one or more processors and memory receives sensor measurements generated by one or more sensors of one or more devices, pre-classifies the sensor measurements as belonging to one of a plurality of pre-classifications, and selects one or more feature types to extract from the sensor measurements based at least in part on the pre-classification of the sensor measurements. The processing apparatus also extracts features of the one or more selected feature types from the sensor measurements and determines a state of a respective device of the one or more devices in accordance with a classification of the sensor measurements determined based on the one or more features extracted from the sensor measurements.

Description

    RELATED APPLICATIONS
  • This application is a continuation of U.S. patent application Ser. No. 13/939,126, filed Jul. 10, 2013, which claims priority to U.S. Provisional Patent Application Ser. No. 61/794,032, filed Mar. 15, 2013, entitled “Selecting Feature Types to Extract Based on Pre-Classification of Sensor Measurements” and U.S. Provisional Patent Application Ser. No. 61/723,744, filed Nov. 7, 2012, entitled “Selecting Feature Types to Extract Based on Pre-Classification of Sensor Measurements,” which applications are all incorporated by reference herein in their entirety.
  • TECHNICAL FIELD
  • The disclosed embodiments relate generally to determining a state associated with a device in accordance with a classification of sensor measurements associated with the device.
  • BACKGROUND
  • Devices have access to sensor measurements from one or more sensors. These sensor measurements can be used to determine information about states associated with the device such as a coupling state of the device to one or more entities, a state of one or more entities physically associated with the device and/or a state of an environment in which the device is located.
  • SUMMARY
  • As the number of sensor measurements available to the device increases, the task of selecting appropriate sensor measurements and analyzing the sensor measurements to determine information about states associated with the device becomes increasingly complicated. The increasing complexity of this task can negatively impact both device performance and energy efficiency of the device. As such, it would be advantageous to find ways to reduce the processing cost and/or increase the energy efficiency of determining a state associated with a device while maintaining the accuracy of the determination. One approach to improving the efficiency of determining a state associated with a device is to use one or more pre-classifications of sensor measurements to determine which features to extract from sensor measurements available to the device. Thus, the device can forgo extracting features that are not likely to be useful for determining a current state of the device while extracting features that are likely to be useful for determining the current state of the device.
  • Some embodiments provide a method for determining a state associated with a device at a processing apparatus having one or more processors and memory storing one or more programs that, when executed by the one or more processors, cause the respective processing apparatus to perform the method. The method includes receiving sensor measurements generated by one or more sensors of one or more devices, pre-classifying the sensor measurements as belonging to one of a plurality of pre-classifications, and selecting one or more feature types to extract from the sensor measurements based at least in part on the pre-classification of the sensor measurements. The method further includes extracting features of the one or more selected feature types from the sensor measurements and determining a state of a respective device of the one or more devices in accordance with a classification of the sensor measurements determined based on the one or more features extracted from the sensor measurements.
  • In accordance with some embodiments, a computer system (e.g., a navigation sensing device or a host computer system) includes one or more processors, memory, and one or more programs; the one or more programs are stored in the memory and configured to be executed by the one or more processors and the one or more programs include instructions for performing the operations of any of the methods described above. In accordance with some embodiments, a non-transitory computer readable storage medium (e.g., for use by a navigation sensing device or a host computer system) has stored therein instructions which when executed by one or more processors, cause a computer system (e.g., a navigation sensing device or a host computer system) to perform the operations of any of the methods described above.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates a system for using a navigation sensing device, according to some embodiments.
  • FIG. 2 is a block diagram illustrating an example navigation sensing device and auxiliary device, according to some embodiments.
  • FIGS. 3A-3E are block diagrams illustrating configurations of various components of the system including a navigation sensing device, according to some embodiments.
  • FIGS. 4A-4D are diagrams illustrating an example of determining a state associated with a device, according to some embodiments.
  • FIGS. 5A-5I are flow diagrams of a method for determining a state of a device, according to some embodiments.
  • FIG. 6 presents a block diagram of an example navigation sensing device, according to some embodiments.
  • FIG. 7 presents a block diagram of an example host computer system, according to some embodiments.
  • Like reference numerals refer to corresponding parts throughout the drawings.
  • DESCRIPTION OF EMBODIMENTS
  • Exemplary Use Cases
  • Navigation sensing devices (e.g., human interface devices or motion tracking devices) that have a determinable multi-dimensional navigational state (e.g., one or more dimensions of displacement and/or one or more dimensions of rotation or attitude) are becoming increasingly common for providing input for many different applications. For example, such a navigation sensing device may be used as a motion tracking device to track changes in position and/or orientation of the device over time. These tracked changes can be used to map movements and/or provide other navigational state dependent services (e.g., location or orientation based alerts, etc.). In some situations, pedestrian dead reckoning (PDR) is used to determine changes in position of an entity that is physically associated with a device (e.g., by combining direction of motion information for the entity with stride count and stride length information). However, in circumstances where the physical coupling between the navigation sensing device and the entity is variable, the navigation sensing device uses sensor measurements to determine both changes in the physical coupling between the navigation sensing device and the entity (e.g., a “device-to-entity orientation”) and changes in direction of motion of the entity.
  • As another example, such a navigation sensing device may be used as a multi-dimensional pointer to control a pointer (e.g., a cursor) on a display of a personal computer, television, gaming system, etc. As yet another example, such a navigation sensing device may be used to provide augmented reality views (e.g., by overlaying computer generated elements over a display of a view of the real world) that change in accordance with the navigational state of the navigation sensing device so as to match up with a view of the real world that is detected on a camera attached to the navigation sensing device. In other situations, such a navigation sensing device may be used to provide views of a virtual world (e.g., views of portions of a video game, computer generated simulation, etc.) that change in accordance with the navigational state of the navigation sensing device so as to match up with a virtual viewpoint of the user based on the orientation of the device. In this document, the terms orientation, attitude and rotation are used interchangeably to refer to the orientation of a device or object with respect to a frame of reference. Additionally, a single navigation sensing device is optionally capable of performing multiple different navigation sensing tasks described above either simultaneously or in sequence (e.g., switching between a multi-dimensional pointer mode and a pedestrian dead reckoning mode based on user input).
  • In order to function properly (e.g., return results to the user that correspond to movements of the navigation sensing device in predictable ways), these applications rely on sensors that determine accurate estimates of the current state(s) associated with the device (e.g., a navigational state of the device, a user-device coupling state, a state of a user physically associated with the device and/or a state of an environment of the device). While specific use cases are described above and will be used to illustrate the general concepts described herein, it should be understood that these examples are non-limiting examples and that the embodiments described herein would apply in an analogous manner to any device that would benefit from an accurate estimate of the current state(s) associated with the device (e.g., a navigational state of the device, a user-device coupling state, a state of a user who is physically associated with the device and/or a state of an environment of the device).
  • System Overview
  • Attention is now directed to FIG. 1, which illustrates an example system 100 for using a navigation sensing device (e.g., a human interface device such as a multi-dimensional pointer) to manipulate a user interface. As shown in FIG. 1, an example Navigation Sensing Device 102 (hereinafter “Device 102”) is coupled to a Host Computer System 101 (hereinafter “Host 101”) through a wireless interface, according to some embodiments. In these embodiments, a User 103 moves Device 102. These movements are detected by sensors in Device 102, as described in greater detail below with reference to FIG. 2. Device 102, or Host 101, generates a navigational state of Device 102 based on sensor measurements from the sensors and transmits the navigational state to Host 101. Alternatively, Device 102 generates sensor measurements and transmits the sensor measurements to Host 101, for use in estimating a navigational state of Device 102. Host 101 generates current user interface data based on the navigational state of Device 102 and transmits the current user interface data to Display 104 (e.g., a display or a projector), which generates display data that is displayed to the user as the currently displayed User Interface 105. While Device 102, Host 101 and Display 104 are shown in FIG. 1 as being separate, in some embodiments the functions of one or more of these elements are combined or rearranged, as described in greater detail below with reference to FIGS. 3A-3E.
• In some embodiments, an Auxiliary Device 106 also generates sensor measurements from one or more sensors and transmits information based on the sensor measurements (e.g., raw sensor measurements, filtered signals generated based on the sensor measurements or other device state information such as a coupling state of Auxiliary Device 106 or a navigational state of Auxiliary Device 106) to Device 102 and/or Host 101 via a wired or wireless interface, for use in determining a state of Device 102. It should be understood that Auxiliary Device 106 optionally has one or more of the features, components, or functions of Navigation Sensing Device 102, but those details are not repeated here for brevity.
• In some implementations, the user can use Device 102 to issue commands for modifying the user interface, control objects in the user interface, and/or position objects in the user interface by moving Device 102 so as to change its navigational state. In some embodiments, Device 102 is sensitive to six degrees of freedom: displacement along the x-axis, displacement along the y-axis, displacement along the z-axis, yaw, pitch, and roll. In some other situations, Device 102 is a navigational state tracking device (e.g., a motion tracking device) that tracks changes in the navigational state of Device 102 over time but does not use these changes to directly update a user interface that is displayed to the user. For example, the updates in the navigational state can be recorded for later use, transmitted to another user, or used to track movement of the device and provide feedback to the user concerning their movement (e.g., directions to a particular location near the user based on an estimated location of the user). When used to track movements of a user without relying on external location information (e.g., Global Positioning System signals), such motion tracking devices are also sometimes referred to as pedestrian dead reckoning devices.
  • In some embodiments, the wireless interface is selected from the group consisting of: a Wi-Fi interface, a Bluetooth interface, an infrared interface, an audio interface, a visible light interface, a radio frequency (RF) interface, and any combination of the aforementioned wireless interfaces. In some embodiments, the wireless interface is a unidirectional wireless interface from Device 102 to Host 101. In some embodiments, the wireless interface is a bidirectional wireless interface. In some embodiments, bidirectional communication is used to perform handshaking and pairing operations. In some embodiments, a wired interface is used instead of or in addition to a wireless interface. As with the wireless interface, the wired interface is, optionally, a unidirectional or bidirectional wired interface.
  • In some embodiments, data corresponding to a navigational state of Device 102 (e.g., raw measurements, calculated attitude, correction factors, position information, etc.) is transmitted from Device 102 and received and processed on Host 101 (e.g., by a host side device driver). Host 101 uses this data to generate current user interface data (e.g., specifying a position of a cursor and/or other objects in a user interface) or tracking information.
• Attention is now directed to FIG. 2, which illustrates an example of Device 102 and Auxiliary Device 106, according to some embodiments. In accordance with some embodiments, Device 102 includes one or more Sensors 220 which produce corresponding sensor outputs, which can be used to determine a state associated with Device 102 (e.g., a navigational state of the device, a user-device coupling state, a state of a user physically associated with the device and/or a state of an environment of the device). For example, in one implementation, Sensor 220-1 is a multi-dimensional magnetometer generating multi-dimensional magnetometer measurements (e.g., a rotation measurement), Sensor 220-2 is a multi-dimensional accelerometer generating multi-dimensional accelerometer measurements (e.g., a rotation and translation measurement), and Sensor 220-3 is a gyroscope generating measurements (e.g., either a rotational vector measurement or rotational rate vector measurement) corresponding to changes in orientation of the device. In some implementations Sensors 220 include one or more of gyroscopes, beacon sensors, inertial measurement units, temperature sensors, barometers, proximity sensors, single-dimensional accelerometers and multi-dimensional accelerometers instead of or in addition to the multi-dimensional magnetometer and multi-dimensional accelerometer and gyroscope described above. In accordance with some embodiments, Auxiliary Device 106 includes one or more Sensors 230 which produce corresponding sensor outputs, which can be used to determine a state associated with Auxiliary Device 106 (e.g., a navigational state of the device, a user-device coupling state, a state of a user physically associated with the device and/or a state of an environment of the device). In some implementations, information corresponding to the sensor outputs of Sensors 230 of Auxiliary Device 106 is transmitted to Device 102 for use in determining a state of Device 102. Similarly, in some implementations, information corresponding to the sensor outputs of Sensors 220 of Device 102 is transmitted to Auxiliary Device 106 for use in determining a state of Auxiliary Device 106. For example, Device 102 is a phone and Auxiliary Device 106 is a Bluetooth headset that is paired with the phone, and the phone and the Bluetooth headset share information based on sensor measurements to more accurately determine a state of Device 102 and/or Auxiliary Device 106. As another example, two mobile phones near each other can be configured to share information about their environmental context and/or their position. Additionally, a wrist watch with an accelerometer can be configured to share accelerometer measurements and/or derived posture information with a mobile phone held by the user to improve posture estimates for the user.
• In some embodiments, Device 102 also includes one or more of: Buttons 207, Power Supply/Battery 208, Camera 214 and/or Display 216 (e.g., a display or projector). In some embodiments, Device 102 also includes one or more of the following additional user interface components: one or more processors, memory, a keypad, one or more thumb wheels, one or more light-emitting diodes (LEDs), an audio speaker, an audio microphone, a liquid crystal display (LCD), etc. In some embodiments, the various components of Device 102 (e.g., Sensors 220, Buttons 207, Power Supply 208, Camera 214 and Display 216) are all enclosed in Housing 209 of Device 102. However, in implementations where Device 102 is a pedestrian dead reckoning device, many of these features are not necessary, and Device 102 can use Sensors 220 to generate tracking information corresponding to changes in the navigational state of Device 102 and transmit the tracking information to Host 101 wirelessly or store the tracking information for later transmission (e.g., via a wired or wireless data connection) to Host 101.
  • In some embodiments, one or more processors (e.g., 1102, FIG. 6) of Device 102 perform one or more of the following operations: sampling Sensor Measurements 222, at a respective sampling rate, produced by Sensors 220; processing sampled data to determine displacement; transmitting displacement information to Host 101; monitoring the battery voltage and alerting Host 101 when the charge of Battery 208 is low; monitoring other user input devices (e.g., keypads, buttons, etc.), if any, on Device 102 and, as appropriate, transmitting information identifying user input device events (e.g., button presses) to Host 101; continuously or periodically running background processes to maintain or update calibration of Sensors 220; providing feedback to the user as needed on the remote (e.g., via LEDs, etc.); and recognizing gestures performed by user movement of Device 102.
• Attention is now directed to FIGS. 3A-3E, which illustrate configurations of various components of the system for generating navigational state estimates for a navigation sensing device. In some embodiments, there are three fundamental components to the system for determining a navigational state of a navigation sensing device described herein: Sensors 220, which provide sensor measurements that are used to determine a navigational state of Device 102; Measurement Processing Module 322 (e.g., a processing apparatus including one or more processors and memory), which uses the sensor measurements generated by one or more of Sensors 220 to generate estimates of the navigational state of Device 102 that can be used to determine current user interface data and/or track movement of Device 102 over time (e.g., using pedestrian dead reckoning); and, optionally, Display 104, which displays the currently displayed user interface to the user of Device 102 and/or information corresponding to movement of Device 102 over time. It should be understood that these components can be distributed among any number of different devices.
  • In some embodiments, Measurement Processing Module 322 (e.g., a processing apparatus including one or more processors and memory) is a component of the device including Sensors 220. In some embodiments, Measurement Processing Module 322 (e.g., a processing apparatus including one or more processors and memory) is a component of a computer system that is distinct from the device including Sensors 220. In some embodiments a first portion of the functions of Measurement Processing Module 322 are performed by a first device (e.g., raw sensor data is converted into processed sensor data at Device 102) and a second portion of the functions of Measurement Processing Module 322 are performed by a second device (e.g., processed sensor data is used to generate a navigational state estimate for Device 102 at Host 101).
  • As one example, in FIG. 3A, Sensors 220, Measurement Processing Module 322 and Display 104 are distributed between three different devices (e.g., a navigation sensing device such as a multi-dimensional pointer, a set top box, and a television, respectively; or a motion tracking device, a backend motion processing server and a motion tracking client). As another example, in FIG. 3B, Sensors 220 are included in a first device (e.g., a multi-dimensional pointer or a pedestrian dead reckoning device), while the Measurement Processing Module 322 and Display 104 are included in a second device (e.g., a host with an integrated display). As another example, in FIG. 3C, Sensors 220 and Measurement Processing Module 322 are included in a first device, while Display 104 is included in a second device (e.g., a “smart” multi-dimensional pointer and a television respectively; or a motion tracking device such as a pedestrian dead reckoning device and a display for displaying information corresponding to changes in the movement of the motion tracking device over time, respectively).
  • As yet another example, in FIG. 3D, Sensors 220, Measurement Processing Module 322 and Display 104 are included in a single device (e.g., a mobile computing device, such as a smart phone, personal digital assistant, tablet computer, pedestrian dead reckoning device etc.). As a final example, in FIG. 3E, Sensors 220 and Display 104 are included in a first device (e.g., a game controller with a display/projector), while Measurement Processing Module 322 is included in a second device (e.g., a game console/server). It should be understood that in the example shown in FIG. 3E, the first device will typically be a portable device (e.g., a smart phone or a pointing device) with limited processing power, while the second device is a device (e.g., a host computer system) with the capability to perform more complex processing operations, or to perform processing operations at greater speed, and thus the computationally intensive calculations are offloaded from the portable device to a host device with greater processing power. While a plurality of common examples have been described above, it should be understood that the embodiments described herein are not limited to the examples described above, and other distributions of the various components could be made without departing from the scope of the described embodiments.
  • Determining a State Associated with a Device
  • Attention is now directed to FIGS. 4A-4D, which include block diagrams illustrating an example of determining a state associated with a device, in accordance with some embodiments.
  • The implementation of determining a state associated with a device described below with reference to FIGS. 4A-4D is explained with reference to a particular example of determining a coupling state of a device (e.g., determining if and how a device is associated with an entity). However, it should be understood that the general principles described below are applicable to a variety of different states associated with a device (e.g., a navigational state of the device, a state of a user physically associated with the device and/or a state of an environment of the device). FIG. 4A illustrates an overview of a method of determining probabilities of a state associated with the device based on raw sensor data. During a sensor data filtering stage, raw sensor data is converted into filtered signals by one or more Sensor Data Filters 402. During a pre-classification stage, a pre-classification of the state is determined by Pre-Classifier 404 based on the filtered signals, so as to determine whether to pass the filtered signals to a set of stable-state modules or to pass the filtered signals to a set of state-transition modules. After the state has been pre-classified, if the pre-classification indicates that the device is likely in a stable-state, Stable-State Feature Generator 406 generates a stable-state feature vector from the filtered signals and passes the stable-state feature vector to one or more Stable-State Classifiers 408 (described in greater detail below with reference to FIG. 4C) which provide estimations of a probability that the device is associated with different states in Markov Model 410 (described in greater detail below with reference to FIG. 4B).
  • Markov Model 410 combines the estimations from Stable-State Classifiers 408 with historical probabilities based on prior state estimates and probabilities of transitions between the states of Markov Model 410. Markov Model 410 is subsequently used to produce state probabilities corresponding to a probability for a state associated with the device (e.g., whether the device is in a user's pocket or on a table).
  • After the state has been pre-classified, if the pre-classification indicates that the device is likely in a state-transition, State-Transition Feature Generator 412 generates a state-transition feature vector from the filtered signals and passes the state-transition feature vector to one or more State-Transition Classifiers 414 (described in greater detail below with reference to FIG. 4D) which provide estimations of a probability of transitions between various states in Markov Model 410 (described in greater detail below with reference to FIG. 4B). Markov Model 410 uses the estimations from State-Transition Classifiers 414 to determine/adjust model transition probabilities for Markov Model 410. Markov Model 410 is subsequently used to produce state probabilities corresponding to a probability for a state associated with the device (e.g., assigning a probability that the device is in a user's pocket and a probability that the device is on a table).
• In some embodiments, only one feature vector (e.g., stable-state feature vector or state-transition feature vector) is generated. In some embodiments, multiple feature vectors (e.g., multiple sets of stable-state features or multiple sets of state-transition features) are generated. For example, if there is some uncertainty as to a current condition under which the device is operating (e.g., the filtered signals indicated that the device is operating under either "Walking" or "Stationary" conditions), Pre-Classifier 404 selects multiple different stable-state feature vectors to be generated by Stable-State Feature Generator 406 (e.g., Pre-Classifier 404 selects generation of a first set of stable state features to produce a first stable-state feature vector for use in identifying stable states of the respective device under "Walking" conditions and a second set of stable state features to produce a second stable-state feature vector for use in identifying stable states of the respective device under "Stationary" conditions). In some situations, there is some overlap between the multiple feature vectors, and a feature is generated once and used in two or more of the feature vectors. When there are multiple feature vectors, the feature vectors are used by corresponding classifiers (e.g., stable-state classifiers or state-transition classifiers) to generate stable-state measurements or state-transition measurements that are provided to Markov Model 410, which is used to produce state probabilities corresponding to a probability for a state associated with the device (e.g., whether the device is in a user's pocket, in the user's hand or on a table).
• In some embodiments, there is resource utilization feedback from Markov Model 410 to Pre-Classifier 404 and information from Markov Model 410 is used to control Pre-Classifier 404. For example, if there is a high degree of certainty that the device is associated with a particular state and has been associated with that state for a long time (e.g., a device has been sitting on a table for the last 15 minutes), then Markov Model 410 optionally provides this information to Pre-Classifier 404 and Pre-Classifier 404 uses this information to reduce the frequency with which measurement epochs (e.g., cycles of Pre-Classification, Feature Extraction and Classification) are performed.
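• By way of illustration only, the following Python sketch shows one way such resource utilization feedback could be realized; the function name, confidence threshold and back-off schedule are hypothetical and are not taken from the specification.

```python
def next_epoch_interval_s(top_state_prob: float,
                          seconds_in_state: float,
                          base_interval_s: float = 1.0,
                          max_interval_s: float = 60.0) -> float:
    """Delay until the next measurement epoch (pre-classification,
    feature extraction and classification cycle).

    The longer the model has been confident in a single state, the less
    frequently measurement epochs are scheduled.
    """
    if top_state_prob < 0.9:
        # Uncertain, or a transition occurred recently: use the base rate.
        return base_interval_s
    # Confident and stable: back off gradually, capped at max_interval_s.
    backoff = 1.0 + seconds_in_state / 60.0
    return min(base_interval_s * backoff, max_interval_s)
```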
• Information about a device coupling state can be used for a variety of purposes at the device. For example, an estimate of a device coupling state can improve power management (e.g., by enabling the device to enter a lower-power state when the user is not interacting with the device). As another example, an estimate of a device coupling state can enable the device to turn on/off other algorithms (e.g., if the device is off Body, and thus is not physically associated with the user, it would be a waste of energy for the device to perform step counting for the user). In some embodiments, the classification of device coupling includes whether the device is on Body or off Body, as well as the specific location of the device in the case that it is physically associated with the user (e.g., in a pocket, in a bag, or in the user's hand). Determinations about device coupling can be made by the device based on signatures present in small amplitude body motion as well as complex muscle tremor features that are distributed across X, Y and Z acceleration signals measured by the device. In some implementations, these signals are acquired at sampling rates of 40 Hz or greater.
• In some embodiments, Sensor Data Filters 402 take in three axes of raw acceleration data and generate filtered versions of the acceleration data to be used in both Pre-Classifier 404 and either Stable-State Feature Generator 406 or State-Transition Feature Generator 412. Examples of filtered signals used for user-device coupling are described in Table 1 below.
• TABLE 1
  Filtered Signals

  Low Pass: 0-2.5 Hz band. Uses a 51-tap FIR (finite impulse response) filter.
  High Pass: 1.5-20 Hz band. Uses a 51-tap FIR filter.
  Derivative Low Pass: Central difference derivative of the low pass signal.
  Envelope Derivative Low Pass: Uses a 31-tap Hilbert transform. The Hilbert transform produces a complex analytic signal, and taking the magnitude of the analytic signal produces the envelope.
  Envelope High Pass: Low pass filters the high pass signal using an 11-tap tent FIR filter.
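• By way of illustration only, the following Python sketch implements the Table 1 filter bank for a single acceleration axis using SciPy. The filter parameters follow the table; the sampling rate, the use of scipy.signal.hilbert in place of an explicit 31-tap Hilbert FIR, and the rectify-then-smooth reading of the "Envelope High Pass" row are assumptions.

```python
import numpy as np
from scipy.signal import firwin, lfilter, hilbert, windows

FS = 50.0  # Hz; assumed rate >= 40 Hz so the 20 Hz band edge sits below Nyquist

def filter_bank(accel: np.ndarray) -> dict:
    """Compute the Table 1 filtered signals for one axis of raw acceleration."""
    low = lfilter(firwin(51, 2.5, fs=FS), 1.0, accel)          # 0-2.5 Hz low pass
    high = lfilter(firwin(51, [1.5, 20.0], pass_zero=False, fs=FS),
                   1.0, accel)                                  # 1.5-20 Hz band
    d_low = np.gradient(low)            # central-difference derivative
    env_d_low = np.abs(hilbert(d_low))  # envelope via the analytic signal
    tent = windows.triang(11)           # 11-tap tent (triangular) FIR
    env_high = np.convolve(np.abs(high), tent / tent.sum(), mode="same")
    return {"low": low, "high": high, "derivative_low": d_low,
            "envelope_derivative_low": env_d_low, "envelope_high": env_high}
```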
  • Pre-Classifier 404 is responsible for determining which types of features to generate (e.g., stable-state features or state-transition features), and passing an appropriate segment of sensor data (e.g., at least a subset of the filtered signals) to these feature generators (e.g., Stable-State Feature Generator 406 or State-Transition Feature Generator 412). In some embodiments, the determination of segment type is performed based on a combination of device motion context as well as based on features of the filtered signals generated by Sensor Data Filters 402.
  • In some embodiments, Pre-Classifier 404 serves as a resource allocation manager. For example, Pre-Classifier 404 allocates resources by specifying that one type of feature set is produced at a time (e.g., either producing stable-state features or state-transition features but not both). Additionally, in a situation where Pre-Classifier 404 determines that the device is in a stable-state (e.g., based on information from Markov Model 410), Pre-Classifier 404 manages the rate at which the device iterates through measurement epochs (e.g., a rate at which sets of filtered signals are sent to Stable-State Feature Generator 406). For example, if the model state has remained constant with high confidence for a predetermined amount of time (e.g., 1, 5, 10, 15 minutes, or a reasonable amount of time), the rate of the measurement epochs is decreased. Conversely, if a transition just occurred or if the model state is uncertain (e.g., the most likely model state has less than a predefined amount of certainty or the difference between the probability of the two most likely model states is below a predefined threshold), the rate of the measurement epochs is increased. In some embodiments, the provision of filtered signals to one of the feature generators (e.g., Stable-State Feature Generator 406 or State-Transition Feature Generator 412) determines whether or not the device is working to generate features from the filtered signals. As such, reducing or increasing the measurement epoch rate will have a corresponding effect on the overall processor utilization of the device, reducing the processor utilization when the device has been in the same state for a long time and increasing the processor utilization when the device has recently transitioned between states, which increases the overall energy efficiency of the device.
• As one example (e.g., when a coupling state of the device is being determined), Pre-Classifier 404 determines whether to provide the filtered signals to Stable-State Feature Generator 406 or State-Transition Feature Generator 412 based on finding corresponding peaks in the low and high pass envelope signals indicative of sudden and/or sustained changes in motion of the device. The classifiers (e.g., Stable-State Classifiers 408 and/or State-Transition Classifiers 414) receive signal features. These features are extracted from either a state-transition or stable-state segment of low and high pass filtered signals (e.g., the filtered signals generated by Sensor Data Filters 402) provided by the Pre-Classifier 404. In some embodiments, the features used by Stable-State Classifiers 408 for stable-state classification differ from the features used by State-Transition Classifiers 414 for state-transition classification; however, both use the same underlying filtered signals produced by Sensor Data Filter(s) 402. For example, Stable-State Classifiers 408 use one or more of the Stable-State Features described in Tables 2a-2c, below, while State-Transition Classifiers 414 use one or more of the State-Transition Features described in Table 3, below. It should be understood that the features described in Tables 2a-2c and 3 are not an exhaustive list but are merely examples of features that are used in some embodiments.
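• By way of illustration only, the following Python sketch shows one plausible form of this routing decision; the peak height and peak-alignment thresholds are hypothetical.

```python
import numpy as np
from scipy.signal import find_peaks

def route_segment(env_low: np.ndarray, env_high: np.ndarray,
                  peak_height: float = 1.0, max_lag: int = 10) -> str:
    """Return 'state-transition' or 'stable-state' for a signal segment.

    A transition is suspected when a peak in the low pass envelope has a
    corresponding peak in the high pass envelope within max_lag samples.
    """
    lo_peaks, _ = find_peaks(env_low, height=peak_height)
    hi_peaks, _ = find_peaks(env_high, height=peak_height)
    for p in lo_peaks:
        if hi_peaks.size and np.min(np.abs(hi_peaks - p)) <= max_lag:
            return "state-transition"
    return "stable-state"
```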
• TABLE 2a
  Stable-State Features (Walking or Standing)

  Coordinated Movement (low pass): Correlation of the XY, XZ and YZ zero-mean low pass filtered signals.
  Spectral Energy: Normalized power of the spectrum of the high pass signal and normalized spectral bins, 4 Hz in width, between 0 and 20 Hz. Normalization of the power is based on the training subject distribution, and bin normalization is based on the power of the subject's given time segment.
• TABLE 2b
  Stable-State Features (Standing)

  Tilt Variation and High Frequency Variation: Variance of the low and high pass signals.
  Coordinated Movement (high pass): Correlation of the XY, XZ and YZ envelope of the high pass filtered signals.
  Variability in Signal Bandwidth: Hjorth mobility for X, Y and Z, where Hjorth mobility is calculated from the power of the first derivative of the high pass signal scaled by the variance of the high pass signal.
  Signal Bandwidth: Hjorth purity for X, Y and Z, where Hjorth purity is calculated from the square of the power of the first derivative of the high pass signal scaled by the product of the variance of the high pass signal and the power of the second derivative of the high pass signal.
  Spectral Energy (high pass): Normalized power of the spectrum of the high pass signal and normalized spectral bins, 4 Hz in width, between 0 and 20 Hz. Normalization of the power is based on the training subject distribution, and bin normalization is based on the power of the subject's given time segment.
• TABLE 2c
  Stable-State Features (Walking)

  Tilt Angle at the start of a step: Median + low-pass filtered accelerometer signals.
  Tilt Variation over a stride (two steps) and a single step: Difference in tilt angle between start and end of a stride, as well as between start and end of individual steps.
  Step Symmetry: Ratio of the difference in inertial Z peaks at start and end of a step for two successive steps (a single stride).
  Relative power and range of signals in the User's Vertical vs Horizontal directions: Signals are transformed to a pseudo-user frame using the median + low-pass based tilt angle estimates. Ratio of the range in the horizontal versus vertical low-pass filtered signals over a stride. Ratio of the signal energy in the horizontal versus vertical low-pass filtered signal.
  Amplitude and Phase of Step Impact Response: Peak of the high-pass filtered signal in the pseudo-user frame vertical direction, as well as phase of the peak relative to the start of a step.
  Spectral power of walk and stride signals: Normalized power of the spectrum of the zero-mean signal in the pseudo-user frame in spectral bins, 0.5 Hz in width, at the estimated walk (step) and stride frequencies. Walk and stride frequencies are estimated by finding two harmonic peaks in the spectrum of the vertical signal within a predefined frequency range (below 2.25 Hz).
  Range and variance of signals in the User's Vertical and Horizontal directions: Range and variance of signals transformed into a pseudo-user frame over a single stride.
  Spectral power of signals in the User's Vertical and Horizontal directions: Normalized and unnormalized power of the spectrum of the signal in the pseudo-user frame in spectral bins, 2.5 Hz and 6 Hz in width, between 0 and 17.5 Hz.
• In some embodiments, the term “Hjorth mobility” used in Table 2b corresponds to the square root of a comparison between (1) the variance of the rate of change of movement in a respective direction (e.g., the y direction) and (2) the variance of the amount of movement in the respective direction (e.g., using Equation 1, below):
• $$\text{Hjorth Mobility} = \sqrt{\frac{\operatorname{Var}\left(\frac{\partial y}{\partial t}\right)}{\operatorname{Var}(y)}} \qquad (1)$$
• In some embodiments, the term “Hjorth purity” used in Table 2b corresponds to the square root of a comparison between (1) the square of the variance of the rate of change of movement in a respective direction (e.g., the y direction) and (2) the product of the variance of the amount of movement in the respective direction and the variance of the acceleration in the respective direction (e.g., as shown in Equation 2, below):
• $$\text{Hjorth Purity} = \sqrt{\frac{\left(\operatorname{Var}\left(\frac{\partial y}{\partial t}\right)\right)^{2}}{\operatorname{Var}(y)\,\operatorname{Var}\left(\frac{\partial^{2} y}{\partial t^{2}}\right)}} \qquad (2)$$
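• For illustration, Equations 1 and 2 can be transcribed almost directly into Python (a sketch assuming unit sample spacing, with np.gradient as the central-difference derivative):

```python
import numpy as np

def hjorth_mobility(y: np.ndarray) -> float:
    """Equation 1: sqrt(Var(dy/dt) / Var(y)) for one signal axis."""
    return np.sqrt(np.var(np.gradient(y)) / np.var(y))

def hjorth_purity(y: np.ndarray) -> float:
    """Equation 2: sqrt(Var(dy/dt)^2 / (Var(y) * Var(d2y/dt2)))."""
    dy = np.gradient(y)    # first derivative of the signal
    d2y = np.gradient(dy)  # second derivative of the signal
    return np.sqrt(np.var(dy) ** 2 / (np.var(y) * np.var(d2y)))
```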
• TABLE 3
  State-Transition Features

  Tilt Angle and Tilt Variation: Mean and variance of the low pass filtered signals.
  Coordinated motion (low pass, mutual information): Mutual information between XY, XZ and YZ pairs of low pass filtered signals.
  Coordinated motion (high pass, mutual information): Mutual information between XY, XZ and YZ pairs of the envelope of high pass filtered signals.
  Coordinated motion (low pass, correlation): Correlation of the XY, XZ and YZ zero-mean low pass filtered signals.
  Coordinated motion (high pass, correlation): Correlation of the XY, XZ and YZ envelope of the high pass filtered signals.
  Max Energy and Time of Max Energy: Peak amplitude and normalized time to peak of the envelope of the high pass signal.
  Spectral Energy: Dominant modes of the spectrum of the high pass signal.
  Tilt Variation Extrema and Associated Time: Signed peak amplitude and normalized time to peak of the derivative of the low pass signal.
• FIG. 4B illustrates an example of a probabilistic model which defines the specific set of states associated with the device. In FIG. 4B, the probabilistic model is Markov Model 410. While four states (e.g., “On Table,” “In Hand at Side,” “In Hand at Front,” and “In Pocket”) are shown in FIG. 4B, it should be understood that, in principle, Markov Model 410 could have any number of states. In some embodiments, the probabilistic model imposes logical constraints on the transition between the states, preventing infeasible events such as going from “On Table” to “In Pocket” without first going through “In Hand at Side” (e.g., the transition probability P(T4) from X1 to X4 is set to zero). The same set of states and transitions are used when the device is in a stable state and when the device is in a state transition. When the device is in a stable state, output from Stable-State Classifiers 408 is used to update the state probabilities of Markov Model 410, optionally, without updating the model transition probabilities (e.g., as described in greater detail below with reference to FIG. 4C). In contrast, when the device is in a state transition, output from State-Transition Classifiers 414 is used to update the model transition probabilities, which are changed from P to P′ (e.g., as described in greater detail below with reference to FIG. 4D).
• The use of a probabilistic model for determining device state increases the robustness of the overall classification and allows for improved management of resource utilization. In terms of robustness, the probabilistic model (e.g., Markov Model 410) incorporates the idea that the past provides information about the future. For example, the longer the device goes without observing a transition between states, the more confident the device is that a current state associated with the device is constant (unchanging with time). In addition, if recent observations have all indicated the same respective state associated with the device, the probabilistic model (e.g., Markov Model 410) will have a high probability of the respective state being the current state and thus will assign lower probabilities to other states. This assignment of probabilities effectively places a lower weight on new measurements that indicate a different state from the respective state, which reduces the likelihood that outlier sensor measurements will result in state misclassifications. In terms of resource utilization, the probabilistic model is, optionally, used to adapt the update rate of the underlying classifiers based on the current confidence level (probability) of one or more of the states (e.g., each state). In particular, as a confidence level in a current state increases, the update rate of the stable state measurements (e.g., the frequency of measurement epochs) is, optionally, decreased until a transition measurement occurs, at which point the update rate increases again.
• Markov Model 410 has two different modes of operation, a stable-state update mode of operation for use when Pre-Classifier 404 does not detect a transition between states and a state-transition update mode of operation for use when Pre-Classifier 404 detects a transition between states. In the stable-state update mode, a Stable-State Markov Model Transition Matrix 420 is used. In the state-transition update mode, a State-Transition Markov Model Transition Matrix 422 is used.
  • A stable-state update of Markov Model 410 is invoked by an updated Stable-State Classifier 408 output. The update consists of two parts, a motion update (e.g., equation 3, below) and a measurement update (e.g., equation 4, below):
• $$\tilde{P}(X_{i,t}) = \sum_{j=1}^{n} P(X_{i,t} \mid X_{j,t-1})\, P(X_{j,t-1}) \qquad (3)$$
  • Equation 3 updates the model states, where {tilde over (P)}(Xi,t) is the model-predicted probability of state Xi at time t, which is calculated by adding up the probabilities that the state transitioned from other states Xj to state Xi. In equation 3, the probability that state Xj transitioned to state Xi is based on a state-transition matrix P(Xi,t|Xj,t-1) (e.g., Stable-State Markov Model Transition Matrix 420 in FIG. 4B) that specifies a probability of transition between state Xj and state Xi and a probability P(Xj,t-1) of state Xj being a current state associated with the device at a prior time step.
  • After determining the model-predicted probability, a combined probability is determined based on the model-predicted probability and a measurement probability based on the Stable-State Classifier 408 outputs (e.g., using equation 4).

• $$P(X_{i,t}) = \alpha\, P(X_{i,t} \mid y_{t})\, \tilde{P}(X_{i,t}) \qquad (4)$$
  • Equation 4 computes a combined probability of model states, where P(Xi,t) is the combined probability of state Xi at time t, which is calculated by combining the model-predicted probability of state Xi at time t, {tilde over (P)}(Xi,t), with a measurement probability, P(Xi,t|yt), that is computed directly by Stable-State Classifiers 408, as described in greater detail below with reference to FIG. 4C. In Equation 4, above, α is a scaling parameter. The elements in the state transition matrix, P(Xi,t|Xj,t−1), are deterministic and defined based on a given model. When the elements of the state transition matrix are other than 1's and 0's, this component of the model allows for diffusion of the probabilities over time (e.g., over sequential measurement epochs). In other words, in some situations, without any observations (e.g., contributions from measurement probability P(Xi,t|yt)), this component will eventually lead to lower certainty in Markov Model 410 states over time.
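• By way of illustration only, the following Python sketch implements the stable-state update of Equations 3-4 in matrix form; treating α as a normalization constant that makes the combined probabilities sum to 1 is an assumption, since the specification only calls α a scaling parameter.

```python
import numpy as np

def stable_state_update(p_prev: np.ndarray,   # P(X_{j,t-1}), shape (n,)
                        T: np.ndarray,        # T[i, j] = P(X_{i,t} | X_{j,t-1})
                        p_meas: np.ndarray) -> np.ndarray:  # P(X_{i,t} | y_t)
    p_pred = T @ p_prev           # Equation 3: model-predicted probabilities
    p_comb = p_meas * p_pred      # Equation 4, before scaling by alpha
    return p_comb / p_comb.sum()  # alpha chosen here to renormalize
```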
  • In contrast, the state-transition update of Markov Model 410 is invoked by an updated State-Transition Classifier 414 output. The update involves first computing transition probabilities for P′ based on State-Transition Classifier 414 outputs and prior model state probabilities (e.g., as shown in equation 5, below), and then updating the model state probability accordingly (e.g., as shown in equation 6, below). It is effectively a motion update with a modified state transition matrix built from the outputs of the transition classifiers.
• $$P'(X_{i,t} \mid X_{j,t-1}) = \sum_{k=1}^{m} P(X_{i,t} \mid T_{k,t})\, P(T_{k,t} \mid X_{j,t-1}) \qquad (5)$$
• Equation 5 computes a modified transition matrix, where P′(Xi,t|Xj,t-1) (e.g., State-Transition Markov Model Transition Matrix 422 in FIG. 4B) is the measurement-based state transition matrix which includes elements corresponding to updated transition probabilities for a transition from state Xj to state Xi, which are calculated based on a measurement transition probability P(Tk,t|Xj,t-1) that is computed directly by State-Transition Classifiers 414, as described in greater detail below with reference to FIG. 4D. In some embodiments, the updated transition probability is the same as the measurement transition probabilities computed by State-Transition Classifiers 414. In some embodiments, the measurement transition probabilities are modified by a transition definition matrix P(Xi,t|Tk,t) that defines how transitions relate to each other and the model states. In a simple model, the elements of the transition definition matrix are 1's and 0's, which encode the arrows shown in Markov Model 410 in FIG. 4B. For example, P(Table|OnTableFromPocket)=1, while P(Pocket|OnTableFromPocket)=0 (for a ToTable transition, the probability that the next state is On Table is 100%, whereas the probability that the next state is anything else is 0%). In still more complex models (e.g., where there are dependencies between the probabilities of transitioning between different states of the probabilistic model), the transition definition matrix can have elements with values between 1 and 0 that encode these more complex dependencies.
  • After determining the modified state transition matrix, probabilities of the states of Markov Model 410 are updated using the modified state transition matrix (e.g., using equation 6) to determine updated probabilities for the model states of Markov Model 410.
• $$P(X_{i,t}) = \sum_{j=1}^{n} P'(X_{i,t} \mid X_{j,t-1})\, P(X_{j,t-1}) \qquad (6)$$
  • Equation 6 updates the model states, where P(Xi,t) is the model-predicted probability of state Xi at time t, which is calculated by adding up the probabilities that the state transitioned from other states Xj to state Xi. In contrast to equation 3, in equation 6, the probability that state Xj transitioned to state Xi is based on a measurement-based state transition matrix P′(Xi,t|Xj,t-1) that specifies a probability of transitioning between state Xj and state Xi in accordance with the output of State-Transition Classifiers 414. The measurement-based state transition matrix is combined with the probabilities P(Xj,t-1) of states Xj being a current state associated with the device to generate updated model-predicted probabilities for the various model states.
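• By way of illustration only, the state-transition update of Equations 5-6 has a similarly compact matrix form (a sketch; D and M stand in for the transition definition matrix and the classifiers' measurement transition probabilities, respectively):

```python
import numpy as np

def state_transition_update(p_prev: np.ndarray,  # P(X_{j,t-1}), shape (n,)
                            D: np.ndarray,       # D[i, k] = P(X_{i,t} | T_{k,t})
                            M: np.ndarray        # M[k, j] = P(T_{k,t} | X_{j,t-1})
                            ) -> np.ndarray:
    T_meas = D @ M          # Equation 5: modified transition matrix P'
    return T_meas @ p_prev  # Equation 6: updated model state probabilities
```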
  • For example, if State-Transition Classifiers 414 indicate that it was almost certain that the device transitioned from On Table to In Hand at Front, then P′(T3) (also referred to as P′(X3,t|X1,t-1)) will be increased to approximately 1 and any probability that the device was in the On Table state at the prior time step will flow to a probability the device is In Hand at Front at the next time step. Thus, if in the prior time step there was a high probability (e.g., approximately 1) that the device was On Table, then there will be a substantially increased probability that the device is in the In Hand at Front state at the next time step. In contrast, if there was a relatively low probability (e.g., approximately 0) that the device was in the On Table state at the prior time step, then there will be relatively little contribution to a change in the probability that the device is in the In Hand at Front state at the next time step due to a flow of probability from the On Table state. In this example, the error correction benefits of Markov Model 410 are illustrated, as a single erroneously identified transition (e.g., a transition that corresponds to a transition from a state that is not a current state of the device) will have very little impact on the overall model state probabilities, while a correctly identified transition (e.g., a transition that corresponds to a transition from a state that is a current state of the device) will enable the device to quickly switch from a prior state to a next state.
  • FIG. 4C illustrates a set of example Stable-State Classifiers 408 in accordance with some embodiments. In this example, Stable-State Classifiers 408 are implemented as a voting machine. As shown in FIG. 4C, an On Body vs Off Body Classifier 430 (or set of classifiers) receives a Stable-State Feature vector and determines a probability that the device is On Body or Off Body. Off Body Classifier(s) 432 determine a probability that, if the device is Off Body, the device is In Trunk or On Table. Similarly, In Hand vs. In Container Classifier(s) 434 determine a probability that, if the device is On Body, the device is In Hand or In a Container. Subsequently, corresponding classifiers (e.g., In Hand Classifier(s) 436 and In Container Classifier(s) 438) further divide probabilities so as to produce a set of measurement probabilities, P(Xi,t|yt) which are combined with model-predicted probabilities to ascertain a combined probability of the states associated with the device for the current measurement epoch, as described above with reference to equations 3-4.
• For example, in FIG. 4C, the probability of a current state of the device is divided among “In Trunk,” “On Table,” “In Hand at Side,” “In Hand at Front,” “In Pocket,” and “In Bag.” While the above six states are provided for ease of illustration, it should be understood that, in principle, Stable-State Classifiers 408 could determine measurement probabilities for any number of states of a probabilistic model. Additionally, while a specific voting machine is illustrated in FIG. 4C, any of a variety of different types of voting machines (e.g., support vector machine, neural network, decision tree) could be used depending on the situation and implementation. The resulting overall outputs of the classifier voting machines are interpreted as probabilities, indicating the likelihood of the given state. By using a group of underlying classification algorithms rather than any single one, in this embodiment, the output of the voting machine classifier is more robust and can more accurately reflect the confidence of the classification. In some embodiments, a feedback infinite impulse response (IIR) filter on the voting machine output is also employed to suppress spurious misclassifications.
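• By way of illustration only, the following Python sketch shows how the FIG. 4C hierarchy could turn pairwise classifier outputs into leaf-state measurement probabilities by chaining conditional probabilities down the tree; the argument names are hypothetical, and the inputs would come from the individual classifiers.

```python
def coupling_measurement_probs(p_on_body: float,
                               p_trunk_given_off: float,
                               p_hand_given_on: float,
                               p_side_given_hand: float,
                               p_pocket_given_container: float) -> dict:
    """Compose P(X_i | y) for the six coupling states shown in FIG. 4C."""
    p_off = 1.0 - p_on_body
    p_hand = p_on_body * p_hand_given_on
    p_container = p_on_body * (1.0 - p_hand_given_on)
    return {
        "In Trunk":         p_off * p_trunk_given_off,
        "On Table":         p_off * (1.0 - p_trunk_given_off),
        "In Hand at Side":  p_hand * p_side_given_hand,
        "In Hand at Front": p_hand * (1.0 - p_side_given_hand),
        "In Pocket":        p_container * p_pocket_given_container,
        "In Bag":           p_container * (1.0 - p_pocket_given_container),
    }
```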
• FIG. 4D illustrates a set of example State-Transition Classifiers 414 in accordance with some embodiments. In this example, State-Transition Classifiers 414 are implemented in a hierarchy. As shown in FIG. 4D, a REAL vs NULL Classifier 450 (or set of classifiers) receives a State-Transition Feature vector and determines a probability that the device experienced a transition (e.g., determining whether Pre-Classifier 404 accidentally identified a transition or identified a transition that is not modeled in the probabilistic model). The probability that the transition was a null transition is assigned to all of the null transitions in the matrix. For example, if it is almost certain that there was no transition, null transitions for all of Markov Model 410 states are set to approximately 1 and the estimated state of the device will remain substantially constant for the current measurement epoch. In contrast, if the transition is determined to have some probability of being a real transition, a non-zero probability is assigned to the real (e.g., non-null) transitions. In particular, State-Transition Classifiers 414 determine conditional probabilities using a plurality of conditional classifiers. From Table Hand-F vs Down Classifier 452 determines a probability, if the state is On Table, that the state transitioned to Hand-F (In Hand at Front) or some other state. The probability that the state transitioned from On Table to some other state is handled by From Table Hand-S vs Pocket Classifier 454, which determines a probability, if the state is On Table, that the state transitioned to Hand-S (In Hand at Side) or In Pocket. Similarly, From In Hand at Side Classifier 456 determines a probability, if the state is In Hand at Side, that the state transitioned to On Table, Hand-F (In Hand at Front) or In Pocket. Likewise, From In Hand at Front Classifier 458 determines a probability, if the state is In Hand at Front, that the state transitioned to On Table, Hand-S (In Hand at Side) or In Pocket. State-Transition Classifiers 414 assign probabilities so as to produce a set of measurement transition probabilities, P(Tk,t|Xj,t-1), which are, optionally, combined with a transition definition matrix and are used to produce a modified state transition matrix for use in updating Markov Model 410 for the current measurement epoch, as described in greater detail above with reference to equations 5-6.
• For example, in FIG. 4D, measurement transition probabilities are calculated for null transitions for “On Table,” “In Hand at Side,” “In Hand at Front,” and “In Pocket,” and measurement transition probabilities are calculated for transitions away from the “On Table,” “In Hand at Side,” and “In Hand at Front” states. While examples of thirteen transition states are provided above for ease of illustration, it should be understood that, in principle, State-Transition Classifiers 414 could determine measurement transition probabilities for any number of transitions between states of the probabilistic model. Additionally, while a specific voting machine is illustrated in FIG. 4D, any of a variety of different types of voting machines (e.g., support vector machine, neural network, decision tree) could be used depending on the situation. The resulting overall outputs of the classifier voting machines are interpreted as transition probabilities, indicating the likelihood of transition between states. Additionally, in some implementations, the results of the voting machine are normalized, weighted and/or combined to produce an overall transition probability result and confidence metric. By using a group of underlying classification algorithms rather than any single one, the output of the voting machine classifier is more robust and can more accurately reflect the confidence of the transition probability. In some embodiments, a feedback infinite impulse response (IIR) filter on the voting machine output is also employed to suppress spurious misclassification of transitions.
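• For illustration, the feedback IIR filter mentioned above could be as simple as one-pole exponential smoothing of successive voting-machine outputs (a sketch; the smoothing factor is hypothetical):

```python
import numpy as np

def smooth_probs(prev_smoothed: np.ndarray, new_probs: np.ndarray,
                 alpha: float = 0.3) -> np.ndarray:
    """Blend the new voting-machine output with the running estimate, then
    renormalize so the result remains a valid probability distribution."""
    smoothed = (1.0 - alpha) * prev_smoothed + alpha * new_probs
    return smoothed / smoothed.sum()
```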
  • Posture States
  • While the examples of the system and method of determining a state associated with a device described above relate primarily to device coupling states, it should be understood that the state associated with the device could be virtually any state of a device, state of an environment of a device or state of an entity associated with a device. In other words, the framework described above is general enough to be applicable to a number of different types of state classification that include measurable stable-states and state-transitions between those stable-states.
• For example, the framework above could also be used in a situation where the device is associated with an entity (e.g., user) and the states are body posture states of the user. For this user-posture implementation, the steady states include one or more of sitting, standing, walking, and running, and the transitions refer to changes between those states (sitDown, standUp, startWalking, stopWalking). While the overall framework is the same as that discussed above, at least some of the specific details would be slightly different, as described below.
  • In an example of a user-posture implementation, Pre-Classifier 404 would make determinations as to whether to operate in a stable-state mode or a state-transition mode based on filtered signals corresponding to YZ accelerations detected by sensors of the device. For example, the device low-pass filters the YZ norm of the raw accelerations, and median filters that norm. In some embodiments, the transient is obtained by subtracting the median filtered value from the low-pass filtered one.
• In the user-posture implementation example, Pre-Classifier 404 seeks half a second of relative inactivity, followed by a period of elevated transient accelerations, followed by another half second of “silence.” In some embodiments, the length of the period of elevated activity is required to fall within reasonable bounds (e.g., from 1.75 to 3 seconds) for common activities such as sitting/standing transitions. The rationale for the upper bound on the period of elevated activity is that if the device is experiencing a relatively high level of continuous activity, either there is no transition and the device is simply being subjected to continuous transient accelerations or the device is being transitioned between a large number of states quickly (e.g., the user is repeatedly standing up and sitting down) and it would be futile and/or computationally or energy inefficient to attempt to keep track of all of the transitions. In some embodiments, inactivity or silence detection is determined by a threshold on the transient YZ acceleration norm. An example of a stillness threshold for filtered accelerations is approximately 0.5 m/s². Table 4 includes a list of example user-posture features for use as stable-state and/or state-transition features for the user-posture implementation. (An illustrative sketch of this segment scan appears below, after Table 4.)
• TABLE 4
  Posture Transition Features

  Sit vs Stand Gravity Signature: Ratio of the time of occurrence of the negative peak to the time of occurrence of the positive peak in the transient acceleration norm signal.
  Tilt: The “direction cosines” of Y and Z tilt. Using the median-filtered accelerations, compute the ratio of Y and Z accelerations to the total median acceleration at the start and end of the transition segment.
  Sit/Stand Gravity Work Signature Fit Error: The fraction of the energy in the transient YZ acceleration norm that is not captured by the first four gravity work modes.
  Sit/Stand Balance Lean Signature Fit Error: The fraction of the energy in the transient Y acceleration that is not captured by the first four balance lean modes.
  Sit/Stand Gravity Work Signature Fit: The “shape” of the coefficients corresponding to the first four gravity work modes. This is computed by dividing the associated four coefficients by the sum of the absolute values of those four coefficients.
  Sit/Stand Balance Lean Signature Fit: The “shape” of the coefficients corresponding to the first four balance lean modes. This is computed by dividing the associated four coefficients by the sum of the absolute values of those four coefficients.
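• By way of illustration only, the following Python sketch scans for the silence/activity/silence pattern described above (before Table 4). The thresholds follow the text, while the scanning logic itself is an illustrative assumption.

```python
import numpy as np

FS = 40.0                 # Hz (the document notes rates of 40 Hz or greater)
STILL_THRESH = 0.5        # m/s^2, stillness threshold on the transient YZ norm
MIN_BURST_S, MAX_BURST_S = 1.75, 3.0   # bounds on the elevated-activity period

def find_transition_segment(transient_norm: np.ndarray):
    """Return (start, end) indices of a candidate transition burst, or None."""
    active = transient_norm > STILL_THRESH
    half_sec = int(0.5 * FS)
    i = half_sec
    while i < len(active):
        if not active[i] or active[i - half_sec:i].any():
            i += 1            # need a half second of silence before the burst
            continue
        j = i
        while j < len(active) and active[j]:
            j += 1            # walk to the end of the elevated-activity burst
        burst_s = (j - i) / FS
        still_after = (j + half_sec <= len(active)
                       and not active[j:j + half_sec].any())
        if MIN_BURST_S <= burst_s <= MAX_BURST_S and still_after:
            return i, j
        i = j + 1
    return None
```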
• While the device-coupling implementation and the user-posture implementation have been described separately and could be implemented independently, in some embodiments multiple different implementations are combined. In other words, one set of state information (e.g., states of a first Markov Model) is optionally used to provide context information that influences a second set of state information (e.g., states of a second Markov Model). One example is using user-posture state information to condition user-device coupling classifiers. In some embodiments, the features described for device-body coupling above with reference to Table 4 are relevant when the user is either sitting or standing. When the user is engaged in more active motion (e.g., walking or running) a different set of features can be used which take advantage of the relation between motion of the phone while walking based on phone location. For example, the features used to distinguish whether a device is in a user's pocket or a user's hand while a user is walking (e.g., period of movement, damping and/or device swinging in hand) are different from (e.g., include one or more features not included in) the features used to distinguish whether a device is in a user's pocket or a user's hand while the user is stationary (e.g., detected tremor and/or device orientation). Leveraging this information, the stable-state classifiers can be modified to include one or more sets of conditional classifiers for one or more respective classifications (e.g., a set including an “In Hand Classifier (v1)” for not-walking and an “In Hand Classifier (v2)” for walking for the “In Hand at Side” vs “In Hand at Front” classification). In some embodiments, only one of the classifiers in each set is used depending on which differentiating condition is most likely (e.g., walking or not-walking). In some embodiments, the outputs of the classifiers in a set of two or more classifiers for a respective classification are combined by a weighted sum, where the weight is based on the probability of the differentiating condition (e.g., walking or not-walking) determined by a separate posture context algorithm.
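• For illustration, the weighted-sum combination of a set of two conditional classifiers might look like the following sketch, where the classifier callables and the posture-context probability are placeholders:

```python
def conditional_in_hand_prob(features, clf_walking, clf_not_walking,
                             p_walking: float) -> float:
    """Combine two conditional 'In Hand' classifiers, weighting each by
    the probability of its differentiating condition (walking or not)."""
    return (p_walking * clf_walking(features)
            + (1.0 - p_walking) * clf_not_walking(features))
```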
• A specific example of user-device coupling that is conditioned on posture is described below; however, it should be understood that this example is merely illustrative and does not limit the more general concepts described above. In the posture conditioned user-device coupling example, Sensor Data Filter(s) 402 receive signals from the acceleration channels and break them into low and high pass bands using 51 tap FIR filters, where the low pass band is 0-3.5 Hz and the high pass is 4.5-20 Hz. Sensor Data Filter(s) 402 further generate an envelope of the high pass signal and low pass filter the envelope. Sensor Data Filter(s) 402 optionally resample both the low pass and envelope signal to 12.5 Hz. In some embodiments, in addition to pre-filtered low and high pass filtered traces, Sensor Data Filter(s) 402 extract a narrow-band walk signal and provide some of the features for phone location determination.
• As discussed above, Pre-Classifier 404 acts as a resource manager by managing how frequently the features (e.g., Stable-State Feature Vector or State-Transition Feature Vector), and thus classifiers (e.g., Stable-State Classifiers 408 and/or State-Transition Classifiers 414) and Markov Model 410, are updated. There are two primary ways that Pre-Classifier 404 acts as a resource manager. First, state-transition features are only generated when a transition is determined to be likely by Pre-Classifier 404. Second, stable-state features are generated at a rate corresponding to the current confidence in the state estimate. For example, if Markov Model 410 is very confident in its state estimate and that state has not changed for some time, the update rate will decrease accordingly. It should be noted that a stable-state does not necessarily imply a motionless state; rather, a stable-state is a state in which the current state of the device has been unchanged for a period of time. For example, if the device has been sitting on a car seat during a long drive, the device will see forward motion (or even changes in motion as the device speeds up, slows down and turns). However, Pre-Classifier 404 can still go into a low power state where measurement epochs occur infrequently once it has determined that the device is off Body.
• Similarly, resource utilization is minimized by reducing the processing requirements of Pre-Classifier 404, so that the features that are examined by Pre-Classifier 404 do not cause a large amount of power usage. In some embodiments, Pre-Classifier 404 for the user-device coupling implementation is a simple peak detection algorithm which can be run with very minimal processing resources. Additional power management can be achieved by turning off some of the classifiers when contextual information indicates that the classifiers are not providing useful information. For example, if the device is off Body, one or more classifiers, including the posture detection classifiers and, optionally, Pre-Classifier 404, will be turned off.
  • Multiple State Estimation
• As discussed above, in some situations multiple feature vectors (e.g., multiple sets of stable-state features or multiple sets of state-transition features) are generated (e.g., if there is some uncertainty as to the current conditions under which the device is operating). For example, in some situations there is a set of features that are used to determine different device coupling states under a set of conditions where a user is walking and there is a different set of features that are used to determine different device coupling states under a set of conditions where a user is stationary (e.g., standing or sitting). As another example, in some situations there is a set of features that are used to determine different user posture states under a set of conditions where a user is in a first transport state (e.g., in a vehicle) and there is a different set of features that are used to determine different user posture states under a set of conditions where a user is in a second transport state (e.g., not in a vehicle). In such situations, for each of a plurality of user posture states, classifiers will identify multiple probability components that correspond to the probability that the user is in a respective posture for a plurality of different transport states. For example, when there are multiple probability components that correspond to a probability that the device is in a respective coupling state, P(Xi,t|yt) from equation 4 is computed as the sum of these probability components as shown in equation 7 (separate device coupling probabilities P(Xt|Bj,t,yt) and background condition (transport and posture) probabilities P(Bj,t|yt)) or equation 8 (joint device coupling, posture and transport probabilities P(Xt,Bj,t|yt) for different combinations of device coupling, posture and transport), below.
• P(X_t|y_t) = \sum_j P(X_t|B_{j,t}, y_t)\, P(B_{j,t}|y_t) \qquad (7)
• P(X_t|y_t) = \sum_j P(X_t, B_{j,t}|y_t) \qquad (8)
  • In some embodiments, the background condition states are grouped into two sets S1=(P, NotInVehicle) and S2=(P,InVehicle), where P represents different possible posture states. In this example, the measurement probability is given by:
• P(X_t|y_t) = P(X_t|InVehicle, y_t)\, P(InVehicle|y_t) + P(X_t|NotInVehicle, y_t)\, P(NotInVehicle|y_t) \qquad (9)
• where, optionally, P(NotInVehicle|yt)=1-P(InVehicle|yt). In this example, P(Xt|InVehicle,yt) and P(Xt|NotInVehicle,yt) are computed using sets of conditional device coupling classifiers, and P(InVehicle|yt) and P(NotInVehicle|yt) are computed by a separate probability estimator for the transport state, such as a separate Transport Markov Model.
• In some embodiments, Equation 9 is implemented under the assumption that P(InVehicle)=0, and, for a stable state, Stable-State Classifiers 408 produce P(Xt|NotInVehicle,yt) by combining conditional classifiers for three types of background stable states corresponding to different posture states: B1=(Walking,NotInVehicle), B2=(Standing,NotInVehicle), B3=(Sitting,NotInVehicle). Pre-Classifier 404 is configured to preclassify two sets of these stable states, S1=B1 (e.g., a "walking stable-state") and S2={B2, B3} (e.g., a "stationary stable-state"). In some situations, stable states are grouped together into sets based on whether they can be evaluated using the same set of features (e.g., states that can be evaluated using the same set of features can be grouped together because there will be minimal additional work to generate features). In some embodiments, when a respective set is preclassified as having a probability above a predefined threshold (e.g., 0, 5, 10, 15, 30 or 50 percent likely or some other reasonable threshold), Pre-Classifier 404 triggers a feature extraction and classification path corresponding to the respective set. For example, when Pre-Classifier 404 triggers generation of a walking stable-state feature set and a stationary stable-state feature set, the device generates feature sets for use in determining two probabilities, P(Xt,Walking|NotInVehicle,yt) and P(Xt|B2,t ∨ B3,t), which are combined as shown in Equation 10.
• P(X_t|NotInVehicle, y_t) = P(X_t, Walking|NotInVehicle, y_t) + P(X_t|B_{2,t} \vee B_{3,t})\, P(Standing \vee Sitting|NotInVehicle, y_t) \qquad (10)
• In Equation 10, above, the background probability for standing or sitting P(Standing ∨ Sitting|NotInVehicle,yt) is, optionally, computed by summing the total probability that the user is walking and subtracting that probability from 1. For example, as shown in Equations 11-12:
• P(Walking|NotInVehicle, y_t) = \sum_i P(X_{i,t}, Walking|NotInVehicle, y_t) \qquad (11)
• P(Standing \vee Sitting|NotInVehicle, y_t) = 1 - P(Walking|NotInVehicle, y_t) \qquad (12)
• In some embodiments, Equation 9 is implemented without making the assumption that P(InVehicle)=0 by adding a classifier for InVehicle (e.g., by adding classifiers to produce P(Xt|InVehicle,yt) and P(InVehicle|yt)). Thus, in some embodiments, Pre-Classifier 404 is configured to preclassify a third set of stable states S3={B4, B5, B6} (e.g., an "in vehicle stable-state"), where B4=(Walking,InVehicle), B5=(Standing,InVehicle), B6=(Sitting,InVehicle). In some embodiments, when a respective set is preclassified as having a probability above a predefined threshold (e.g., 0, 5, 10, 15, 30 or 50 percent likely or some other reasonable threshold), Pre-Classifier 404 triggers a feature extraction and classification path corresponding to the respective set of features. For example, when Pre-Classifier 404 triggers generation of an in-vehicle stable-state feature set, the device generates feature sets for determining three probabilities: the probability that the device is in a particular coupling state and a user is walking in a vehicle P(Xt,Walking|InVehicle,yt); the conditional probability that the device is in a particular coupling state given that a user is standing or sitting in a vehicle P(Xt|Standing ∨ Sitting,InVehicle,yt)=P(Xt|B5 ∨ B6); and the probability that a user is in a vehicle P(InVehicle)=P(B4 ∨ B5 ∨ B6), which are combined as shown in Equation 13, below.
• P(X_t|InVehicle, y_t) = P(X_t, Walking|InVehicle, y_t) + P(X_t|B_{5,t} \vee B_{6,t})\, P(Standing \vee Sitting|InVehicle, y_t) \qquad (13)
• In Equation 13, above, the background probability for standing or sitting P(Standing ∨ Sitting|InVehicle,yt) is, optionally, computed by summing the total probability that the user is walking and subtracting that probability from 1. For example:
• P(Walking|InVehicle, y_t) = \sum_i P(X_{i,t}, Walking|InVehicle, y_t) \qquad (14)
• P(Standing \vee Sitting|InVehicle, y_t) = 1 - P(Walking|InVehicle, y_t) \qquad (15)
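• Taken together, Equations 9-15 amount to forming a coupling-state probability per background condition and mixing the results by the background probabilities. The following Python sketch is illustrative only; the dictionary inputs and function name are assumptions standing in for the conditional classifier outputs and the separate transport estimator output.

    # Illustrative combination of Equations 9-15. Each input dictionary maps
    # a coupling state X to a probability produced by a conditional classifier
    # (all dictionaries are assumed to share the same coupling-state keys).
    def coupling_probability(p_x_walk_niv, p_x_ss_niv,
                             p_x_walk_iv, p_x_ss_iv, p_in_vehicle):
        # Equations 11 and 14: total walking probability is the sum over
        # coupling states of the joint walking probabilities.
        p_walk_niv = sum(p_x_walk_niv.values())
        p_walk_iv = sum(p_x_walk_iv.values())
        # Equations 12 and 15: standing-or-sitting is the complement.
        p_ss_niv = 1.0 - p_walk_niv
        p_ss_iv = 1.0 - p_walk_iv
        p_x = {}
        for x in p_x_walk_niv:
            # Equations 10 and 13: joint walking term plus weighted
            # standing-or-sitting term, per transport condition.
            p_niv = p_x_walk_niv[x] + p_x_ss_niv[x] * p_ss_niv
            p_iv = p_x_walk_iv[x] + p_x_ss_iv[x] * p_ss_iv
            # Equation 9: mix over the transport state.
            p_x[x] = p_iv * p_in_vehicle + p_niv * (1.0 - p_in_vehicle)
        return p_x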
  • In many situations, generating different sets of features for a plurality of different conditions under which a device may be operating and combining probability estimations for the different sets of features, as described above, provides a more accurate estimation of the state associated with the device (e.g., posture or device coupling state) than attempting to classify the state associated with the device without taking into account the possible different conditions under which the device is operating.
  • Recovering from Misclassification
• The system and method for determining a state associated with a device have a number of advantages over conventional methods for determining a state associated with a device; an important advantage is that the method is able to efficiently and effectively recover from misclassifications (e.g., missed state-transition classifications and/or spurious stable-state misclassifications).
• If State-Transition Classifiers 414 misclassify a transition with high confidence or Pre-Classifier 404 completely misses a transition, it is possible that the overall model will end up in the "wrong state" for a period of time. There are two ways in which the system and method described herein mitigate this effect. The first is through the use of priors from Markov Model 410. For example, if Markov Model 410 is very confident that the phone is On Table, but the detected transition starts from In Hand (e.g., In Hand at Side to In Pocket, or In Hand at Front to On Table), that transition will have a low weight in the Markov Model. While this does not ensure that the new state will be correctly classified, it does prevent a confident misclassification and provides an indication of the uncertainty in the classification, which is a more accurate representation of the state of the device than a confident misclassification. In particular, a misclassified transition originating from a low-probability state will typically lead to low confidence in all states (an "unknown" classification), which is more readily recoverable by the stable-state classifiers.
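• As a numeric illustration, with made-up values rather than values from the patent: if the model assigns prior probability 0.9 to On Table and only 0.05 to In Hand at Front, a transition classification of strength 0.8 that starts from In Hand at Front contributes a weight on the order of 0.8 × 0.05 = 0.04. The update therefore yields low confidence across all states rather than a confidently wrong state, which is the recoverable "unknown" outcome described above.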
• The use of both stable-state classifiers and state-transition classifiers in the state determination process accounts for the fact that no classifier is perfect. Even in the case that the device misses a transition or misclassifies a transition, leading to a wrong state in Markov Model 410, the model can still recover and reach the correct state classification using the stable-state classifiers. In some embodiments, the latency of this recovery period is tied to both the degree of misclassification and the confidence of the stable-state measurements. If the misclassification leads to a situation where Markov Model 410 has low confidence in all states and the stable-state classifications have high confidence, Markov Model 410 can recover very quickly (within 1 to 2 stable-state measurements); thus, in many situations it is better for the model to have an "unknown" (low confidence) classification than a high confidence misclassification, because the device can recover from an "unknown" (low confidence) classification more quickly.
• Even with well-trained classifiers, periodic misclassifications can occur (especially in the stable-state classifiers) due to factors such as poor data quality or data uncharacteristic of that contained in the training set. For example, if the device is a phone or media player in an arm band, a configuration not included in the current data collected for user-device coupling, this state may be misclassified as a modeled device coupling state. There are several components of the described system and method that handle these types of spurious/transient misclassifications. First, an infinite impulse response filter that is optionally applied to classifier outputs helps to suppress spurious misclassifications. Second, in some embodiments, the output of classifiers (e.g., Stable-State Classifiers 408 and/or State-Transition Classifiers 414) is weighted. Rather than simply returning a 1 or 0 from an individual classifier, the classifier outputs are converted to probabilities based on the strength of the classification. For a support vector machine model, the strength is determined based on the distance of the features from the margin. For a multi-layer perceptron model, the strength is determined based on the magnitude of the output neurons as well as the relative strength of the different neurons. Third, in some embodiments, multiple classifier models (e.g., support vector machines, multi-layer perceptrons, decision trees) are used concurrently for the same classification problem, and their results are combined to produce the overall classification. By using multiple classifier models concurrently, if one of the classifiers classifies incorrectly but the others do not, this single misclassification is significantly weakened. An advantage of this approach is that the overall system will only produce a strong misclassification if all classifiers agree on the misclassification, which is less likely than the occurrence of a misclassification in any single classifier. Finally, Markov Model 410 incorporates prior state information so that even if the voting machine (e.g., Stable-State Classifiers 408 and/or State-Transition Classifiers 414) produces a strong misclassification, this is moderated at Markov Model 410 by the prior state probabilities. For example, if the model is highly confident that the previous state was On Table, it will put less weight on a strong In Pocket classification. However, typically after more than one strong classification, the model will start to produce predicted probabilities of the states associated with the device that are in agreement with the classifiers. As such, the system and method described herein (and further explained below with reference to method 500) provide accurate classifications, efficient and quick recovery from misclassifications, and low power usage, and are thus particularly useful in mobile applications where quick recovery from misclassifications and low power usage are particularly important.
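• A minimal sketch of the weighted-voting combination described above follows; the logistic mapping from SVM margin distance to probability and the equal weighting of models are illustrative assumptions, not the patent's specific formulas.

    import math

    def svm_strength(margin_distance):
        # Map signed distance from the SVM decision boundary to (0, 1); a
        # sample far on the positive side yields a probability near 1.
        return 1.0 / (1.0 + math.exp(-margin_distance))

    def combine(classifier_probs):
        # Average per-state probabilities across several classifier models
        # (e.g., SVM, multi-layer perceptron, decision tree). A single
        # outlier misclassification is diluted unless all models agree.
        states = classifier_probs[0].keys()
        n = len(classifier_probs)
        return {s: sum(p[s] for p in classifier_probs) / n for s in states}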
  • Attention is now directed to FIGS. 5A-5I, which illustrate a method 500 for determining a state associated with a device, in accordance with some embodiments. Method 500 is, optionally, governed by instructions that are stored in a non-transitory computer readable storage medium and that are executed by one or more processors of one or more computer systems (e.g., Device 102, FIG. 6 or Host 101, FIG. 7). Each of the operations shown in FIGS. 5A-5I typically corresponds to instructions stored in a computer memory or non-transitory computer readable storage medium (e.g., Memory 1110 of Device 102 in FIG. 6 or Memory 1210 of Host 101 in FIG. 7). The computer readable storage medium optionally (and typically) includes a magnetic or optical disk storage device, solid state storage devices such as Flash memory, or other non-volatile memory device or devices. The computer readable instructions stored on the computer readable storage medium typically include one or more of: source code, assembly language code, object code, or other instruction format that is interpreted or executed by one or more processors. In various embodiments, some operations in method 500 are combined and/or the order of some operations is changed from the order shown in FIGS. 5A-5I.
  • The following operations are performed at a processing apparatus having one or more processors and memory storing one or more programs that, when executed by the one or more processors, cause the respective processing apparatus to perform the method. In some embodiments, the processing apparatus is a component of Device 102 (e.g., the processing apparatus includes the one or more CPU(s) 1102 in FIG. 6). In some embodiments, the processing apparatus is separate from Device 102 (e.g., the processing apparatus includes the one or more CPU(s) 1202 in FIG. 7).
  • The processing apparatus receives (502) sensor measurements generated by one or more sensors of one or more devices (e.g., one or more sensors 220 of Navigation Sensing Device 102 and/or one or more sensors 230 of Auxiliary Device 106). In some embodiments, the one or more sensors include (504) a respective sensor of the respective device. After receiving the sensor measurements, the processing apparatus pre-classifies (506) the sensor measurements as belonging to one of a plurality of pre-classifications (e.g., coupling pre-classifications such as stable-state or state-transition). For example, as described above with reference to FIGS. 4A-4D, Pre-Classifier 404 determines whether to provide filtered signals to Stable-State Feature Generator 406 or to State-Transition Feature Generator 412 based on whether the pre-classification indicates that the filtered signals correspond to a stable-state or a state-transition.
• In some embodiments, the plurality of pre-classifications include (508) a first pre-classification corresponding to a plurality of transition feature types (e.g., state-transition features) associated with identifying a transition between two different states of the respective device; a second pre-classification corresponding to a first subset of a plurality of device-state feature types (e.g., stable-state features) associated with identifying a state of the respective device; and/or a third pre-classification corresponding to a second subset of the plurality of device-state feature types (e.g., stable-state features), where the second subset of device-state feature types is different from the first subset of device-state feature types. In some embodiments, the first subset of device-state feature types enables identification of a device state of the respective device from a first subset of states of the respective device (e.g., device states corresponding to motion of the respective device) and the second subset of device-state feature types enables identification of a device state from a second subset of states of the respective device (e.g., stationary device states of the respective device). For example, the processing apparatus has a set of state-transition classifiers (e.g., 414 in FIG. 4A) and two subsets of stable-state classifiers (e.g., 408 in FIG. 4A), where the two subsets of stable-state classifiers include a subset of stable-state classifiers for use while the device is in motion (e.g., walking, running, driving) and a subset of stable-state classifiers for use while the device is stationary (e.g., On Table, In Hand, In Pocket). In this example, Pre-Classifier 404 selects one of three different pathways (e.g., state-transition classifiers, device-in-motion stable-state classifiers, and device-stationary stable-state classifiers).
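• The three-pathway selection in this example can be sketched as a simple dispatch; the pathway labels and stub feature generators below are hypothetical stand-ins for Stable-State Feature Generator 406 and State-Transition Feature Generator 412.

    # Illustrative dispatch among the three pathways named above.
    def extract_transition_features(signals):
        return {"type": "state-transition"}       # stub for illustration

    def extract_motion_stable_features(signals):
        return {"type": "stable, in motion"}      # stub for illustration

    def extract_stationary_stable_features(signals):
        return {"type": "stable, stationary"}     # stub for illustration

    def select_pathway(pre_classification):
        if pre_classification == "state-transition":
            return extract_transition_features
        if pre_classification == "stable-in-motion":
            return extract_motion_stable_features
        return extract_stationary_stable_features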
  • In some embodiments, the processing apparatus obtains (510) context information indicative of a current context of the respective device (e.g., the processing apparatus retrieves or determines/generates context information), and pre-classifies (512) the sensor measurements based at least in part on the current context of the respective device. For example, if there is a high probability that the respective device is Off Body, then the processing apparatus would turn off all subsequent posture-type feature generation, as the processing apparatus will not typically be able to detect the posture of the user if the device is not physically associated with the user. More generally, if the respective device is in the trunk of a car, the processing apparatus can conserve energy by ceasing, at least temporarily, to extract features other than those that are helpful in determining whether or not the respective device has been removed from the trunk of the car. In some embodiments, the current context of the respective device is determined based at least in part on system signals. For example, a system signal indicating that the respective device is connected to a charger optionally prompts the processing apparatus to forgo generating features related to user body posture. In some embodiments, the current context of the respective device is determined based on stored information about a current user of the respective device (e.g., some classifiers are turned on, turned off, or modified if a user is male or is over 200 lbs).
  • In some embodiments, pre-classifying the sensor measurements includes (514) extracting features of one or more pre-classification feature types from the sensor measurements, and extracting features of the pre-classification feature types from the sensor measurements is more resource efficient than extracting features of the one or more selected feature types from the sensor measurements (e.g., the pre-classification features are generated using low-power algorithms to provide a rough estimate of the general type of behavior currently exhibited by the respective device). For example, extracting the same quantity of features of the pre-classification feature types would take a smaller amount of CPU resources than extracting a similar quantity of features of the selected feature types. In some embodiments, extracting features of the pre-classification feature types enables the processing apparatus to forgo extracting features of one or more unselected feature types, thereby reducing the CPU usage and power usage due to extracting features from the sensor measurements and conserving battery life if the respective device is battery operated. In some embodiments, the pre-classification feature types include peak detection on frequency filtered sensor measurements. In some embodiments, the pre-classification feature types include threshold detection on frequency filtered sensor measurements (e.g., the “Derivative Low Pass” signal in Table 1). In some embodiments, the pre-classification feature types include peak detection on envelope filtered sensor measurements (e.g., the “Envelope Low Pass” signal and the “Envelope High Pass” signal in Table 1). In some embodiments, the pre-classification feature types include threshold detection on envelope filtered sensor measurements (e.g., the “Envelope Low Pass” signal and the “Envelope High Pass” signal in Table 1).
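• For instance, a threshold detector over an envelope-filtered signal costs one comparison per sample and no frequency-domain transform. The following is a minimal sketch under an assumed threshold constant, not the patent's tuned detector.

    TRANSITION_THRESHOLD = 0.35  # illustrative level for the filtered signal

    def preclassify(envelope_samples):
        """Return 'state-transition' if the envelope crosses the threshold
        during the epoch, else 'stable-state'."""
        if any(sample > TRANSITION_THRESHOLD for sample in envelope_samples):
            return "state-transition"
        return "stable-state"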
  • After pre-classifying the sensor measurements, the processing apparatus selects (516) one or more feature types to extract from the sensor measurements based at least in part on the pre-classification of the sensor measurements (e.g., the processing apparatus determines whether to extract stable-state features or transition features). For example, as described above with respect to FIGS. 4A-4D, Pre-Classifier 404 determines whether to transmit the filtered signals to Stable-State Feature Generator 406 or State-Transition Feature Generator 412. In some embodiments, the processing apparatus selects (518) between a first set of feature types and a second set of feature types. In some implementations, the first set of feature types is (520) different from the second set of feature types. In some implementations, the first set of feature types includes (522) a respective feature type from the second set of feature types. In some implementations, the first set of feature types includes (524) a respective feature type not included in the second set of feature types. In some implementations, the first set of feature types and the second set of feature types are (526) mutually exclusive.
• In some embodiments, the first set of feature types are feature types that enable (528) classification of a coupling state of the respective device (e.g., a Stable-State Feature Vector for processing by Stable-State Classifiers 408 as described in greater detail above with reference to FIGS. 4A-4D). In some embodiments, the first set of feature types includes device orientation angle of the respective device (e.g., the "Tilt Angle" feature in Table 2c). In some embodiments, the first set of feature types includes device orientation variation of the respective device (e.g., the "Tilt Variation and High Frequency Variation" feature in Table 2b). In some embodiments, the first set of feature types includes correlation of movement of the respective device along two or more axes (e.g., the "Coordinated Movement" features in Table 2b). In some embodiments, the first set of feature types includes bandwidth of a sensor measurement signal (e.g., the "Signal Bandwidth" feature in Table 2b). In some embodiments, the first set of feature types includes variability in bandwidth of a sensor measurement signal (e.g., the "Variability in Signal Bandwidth" feature in Table 2b). In some embodiments, the first set of feature types includes spectral energy of a sensor measurement signal (e.g., the "Spectral Energy" feature in Table 2a).
• In some embodiments, the second set of feature types are feature types that enable (530) classification of a transition between two different coupling states of the respective device (e.g., a State-Transition Feature Vector for processing by State-Transition Classifiers 414 as described in greater detail above with reference to FIGS. 4A-4D). In some embodiments, the second set of feature types includes device orientation angle of the respective device (e.g., the "Tilt Angle and Tilt Variation" feature in Table 3). In some embodiments, the second set of feature types includes device orientation variation of the respective device (e.g., the "Tilt Angle and Tilt Variation" feature in Table 3). In some embodiments, the second set of feature types includes mutual information between movement of the respective device along two or more axes (e.g., the "Coordinated Motion (high pass, mutual information) and Coordinated Motion (low pass, mutual information)" features in Table 3). In some embodiments, the second set of feature types includes correlation of movement of the respective device along two or more axes (e.g., the "Coordinated Motion (high pass, correlation) and Coordinated Motion (low pass, correlation)" features in Table 3). In some embodiments, the second set of feature types includes maximum energy of a sensor measurement signal (e.g., the "Max Energy and Time of Max Energy" feature in Table 3). In some embodiments, the second set of feature types includes spectral energy of a sensor measurement signal (e.g., the "Spectral Energy" feature in Table 3). In some embodiments, the second set of feature types includes device orientation variation extrema of the respective device (e.g., the "Tilt Variation Extrema and Associated Time" feature in Table 3).
  • In some embodiments, the first set of feature types includes (532) features adapted for determining a state of the respective device while the respective device is stationary and the second set of feature types includes features adapted for determining a state of the respective device while the respective device is in motion. In some embodiments, the feature selection process performed by the pre-classifier includes not just selecting between stable or transition features, but also selecting between different types of stable features or transition features. For example, if the user is walking, the processing apparatus generates a different set of “stable” features for device coupling than if the user is standing still. In some implementations, determining which set of features to use is based in part on other context information.
  • In some embodiments, the processing apparatus performs different operations based on whether the first sensor measurements correspond to a first pre-classification or a second pre-classification (e.g., whether the Filtered Signals correspond to a Stable-State or a State-Transition, as described above with reference to FIGS. 4A-4D). In accordance with a determination (e.g., during the pre-classification) that the sensor measurements correspond to a first (e.g., coupling-stable) pre-classification (534), the processing apparatus selects (536) a first set of feature types as the one or more selected feature types and, optionally, forgoes (538) extracting features of one or more of the second set of feature types while extracting features of the first set of feature types. In contrast, in accordance with a determination (e.g., during the pre-classification) that the set of sensor measurements correspond to a second (e.g., coupling-transition) pre-classification (540), the processing apparatus selects (542) a second set of feature types as the one or more selected feature types (different from the first set of feature types) and, optionally, forgoes (544) extracting features of one or more of the first set of feature types while extracting features of the second set of feature types. In other words, in some embodiments, the processing apparatus does not extract both stable-state and transition features at the same time, which reduces the processing and energy requirements of the processing apparatus, thereby conserving energy.
• In some embodiments, prior to extracting features of the one or more feature types selected based on the pre-classification, the processing apparatus schedules (546) extraction of one or more features. In some embodiments, the processing apparatus schedules (548) extraction of features of a first subset of a plurality of feature types, where the first subset of feature types includes (550) the one or more feature types selected based on the pre-classification. In some embodiments, the processing apparatus also schedules (552) extraction of features of a second subset of the plurality of feature types, where the second subset of feature types includes (554) feature types other than the one or more feature types selected based on the pre-classification. In some embodiments, the extraction of features of the second subset of feature types is scheduled to occur after (556) the extraction of features of the first subset of feature types. In some embodiments, the extraction of features of the second subset of feature types is (558) subject to cancellation.
  • After selecting the one or more feature types to extract, the processing apparatus extracts (560) features of the one or more selected feature types from the sensor measurements. In some embodiments, the one or more selected feature types includes (562) a respective feature type that provides an approximation of frequency bandwidth information for a respective sensor-measurement signal that corresponds to sensor measurements collected over time, and extracting features of the one or more selected feature types from the sensor measurements includes extracting features of the respective feature type from the respective sensor-measurement signal in the time domain without converting the respective sensor-measurement signal into the frequency-domain. For example, the “Variability in Signal Bandwidth” and “Signal Bandwidth” features described with reference to Table 2b provide information about the sensor-measurement signal in the time domain without converting the respective sensor-measurement signal into the frequency-domain.
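• One way to approximate bandwidth in the time domain (an assumption for illustration here, not necessarily the patent's exact formula) is to compare the variability of a signal's first difference to the variability of the signal itself, since faster-varying signals produce proportionally larger differences:

    import statistics

    def bandwidth_proxy(samples, dt):
        # Ratio of the spread of the first difference to the spread of the
        # signal; it grows with frequency content and requires no FFT.
        diffs = [(b - a) / dt for a, b in zip(samples, samples[1:])]
        sd_signal = statistics.pstdev(samples)
        if sd_signal == 0.0:
            return 0.0
        return statistics.pstdev(diffs) / sd_signal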
• In some embodiments, the processing apparatus extracts the features of the one or more selected feature types by extracting (564) features of a first subset of a plurality of feature types, where the first subset includes the one or more feature types selected based on the pre-classification. After extracting the features of the first subset, the processing apparatus determines (566) whether the features extracted from the first subset of feature types are consistent with the pre-classification. In accordance with a determination that the features extracted from the first subset of feature types are not (568) consistent with the pre-classification, the processing apparatus extracts (569) features of a second subset of feature types that includes feature types other than the one or more feature types selected based on the pre-classification. In contrast, in accordance with a determination that the features extracted from the first subset of feature types are (570) consistent with the pre-classification, the processing apparatus forgoes (571) extracting features of the second subset of feature types. In some embodiments, extraction of features of the second subset of feature types starts before that extraction is cancelled. In some embodiments, the features of the first subset of feature types and the features of the second subset of feature types are features to be extracted from sensor measurements from a same respective measurement epoch.
• Thus, in some implementations, instead of Pre-Classifier 404 making a final determination as to whether to extract either stable-state features or state-transition features, Pre-Classifier 404 instead schedules the extraction of both stable-state features and state-transition features, scheduling first the extraction of the type of features that is more likely to be useful based on the pre-classification of the filtered signals (e.g., by placing instructions to extract the features in a work queue of tasks assigned to one or more processors for execution). In some implementations, once the features of the first subset of feature types (e.g., stable-state features or state-transition features) have been extracted, if it is determined that the pre-classification was correct, then features of the second subset of feature types (e.g., state-transition features or stable-state features, respectively) need not be extracted, and extraction of features of the second subset of feature types is, optionally, cancelled (e.g., removed from the work queue) by the processing apparatus. In contrast, once the features of the first subset of feature types have been extracted, if it is determined that the pre-classification was incorrect, then extraction of features of the second subset of feature types proceeds without having to schedule extraction of features of the second subset of feature types (e.g., because the extraction of features of the second subset of feature types was previously scheduled). In some embodiments, the pre-classification is determined to be correct when the features of the first subset of feature types can be successfully used (e.g., by Stable-State Classifiers 408 or State-Transition Classifiers 414) to generate information (e.g., transition probabilities or state probabilities for updating Markov Model 410) for determining a state of the device.
• For example, if the pre-classification indicates that the device is likely undergoing a state-transition, the processing apparatus optionally schedules extraction of state-transition features followed by extraction of stable-state features. In this example, if the state-transition features indicate that the device is undergoing a recognizable transition, then the processing apparatus optionally cancels extraction of the stable-state features, while if the state-transition features indicate that the device is not undergoing a transition, then the processing apparatus optionally continues with extraction of the stable-state features and, optionally, uses the stable-state features to attempt to identify a recognizable state of the device. As another example, if the pre-classification indicates that the device is likely in a stable-state, the processing apparatus optionally schedules extraction of stable-state features followed by extraction of state-transition features. In this example, if the stable-state features indicate that the device is in a recognizable state, then the processing apparatus optionally cancels extraction of the state-transition features, while if the stable-state features indicate that the device is not in a recognizable state, then the processing apparatus optionally continues with extraction of the state-transition features and, optionally, uses the state-transition features to attempt to identify a recognizable transition. An advantage of scheduling extraction of both state-transition features and stable-state features is that, when the processing apparatus mistakenly pre-classifies a state of the device and thus requests extraction of an incorrect type of features, there is a shorter delay to get the correct type of features than if the processing apparatus were to schedule extraction of the correct type of features only after the error was identified. However, by scheduling extraction of the features in sequence, the computational efficiency (and energy efficiency) of the system can be preserved by cancelling extraction of features that are not needed when the pre-classification is determined to be correct based on previously extracted features (e.g., the features of the first type).
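• The speculative scheduling described above can be summarized in a few lines; the queue representation and task names below are illustrative assumptions, not the patent's implementation.

    # Both extraction tasks are enqueued with the pre-classified type first;
    # the second, speculative task is cancelled if the first task confirms
    # the pre-classification.
    work_queue = []

    def schedule_extraction(pre_classification):
        first = ("transition" if pre_classification == "state-transition"
                 else "stable")
        second = "stable" if first == "transition" else "transition"
        work_queue.append(first + "-feature-extraction")
        work_queue.append(second + "-feature-extraction")

    def on_first_extraction_done(consistent_with_preclassification):
        if consistent_with_preclassification and work_queue:
            # Pre-classification confirmed: drop the speculative task.
            work_queue.pop()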
• After extracting the features of the one or more selected feature types, the processing apparatus determines (572) a state of a respective device of the one or more devices in accordance with a classification of the sensor measurements determined based on the one or more features extracted from the sensor measurements (e.g., using Markov Model 410 described in greater detail above with reference to FIGS. 4A-4D). In some embodiments, the state of the respective device is a sensor-interpretation state that is used to interpret sensor measurements from sensors of the respective device. In some embodiments, states for a plurality of the one or more devices are determined by the processing apparatus. In some embodiments, the state of the respective device is (573) a physical property of the respective device (e.g., a coupling state of the respective device or a navigational state of the respective device). In some embodiments, the state of the respective device is (574) a state of an environment of the respective device (e.g., a state of an entity associated with the respective device or a state of an environment surrounding the respective device, such as whether the respective device is in a car or an elevator). In some embodiments, the state of the respective device is (575) a state of an entity associated with the respective device (e.g., a posture of a user of the device).
• In some embodiments, the one or more devices include (576) a first device (e.g., Navigation Sensing Device 102) and a second device (e.g., Auxiliary Device 106). For example, the two devices are, optionally, a smartphone with a plurality of sensors and a Bluetooth headset with one or more sensors; two smartphones that are in communication with each other; or a smartphone in communication with a remote computer such as a set top box or home entertainment system. In some of these embodiments, the processing apparatus determines (578) a state of the first device in accordance with a classification of the sensor measurements determined based on the one or more features extracted from the sensor measurements and determines (579) a state of the second device in accordance with the classification of the sensor measurements determined based on the one or more features extracted from the sensor measurements. In other words, a state of the first device is determined based at least in part on sensor measurements from the second device (e.g., in addition to one or more sensor measurements from the first device). In some embodiments, the second device provides the processing apparatus with processed signal features. In some embodiments, the second device provides the processing apparatus with raw sensor data. In some embodiments, the state of the second device is determined (580) based at least in part on the state of the first device and/or based on sensor measurements from one or more sensors of the first device.
  • In some embodiments, pre-classifying the sensor measurements includes (581) identifying the sensor measurements as belonging to a respective pre-classification that is associated with multiple feature sets, including a first set of features for use in identifying states of the respective device under a first set of conditions and a second set of features for use in identifying states of the respective device under a second set of conditions different from the first set of conditions; extracting the features includes extracting the first set of features and extracting the second set of features; and determining the state of the respective device includes determining the state of the respective device in accordance with the first set of features and the second set of features. In some embodiments, the first set of conditions is “InVehicle” and the second set of conditions is “NotInVehicle.” In some embodiments, the first set of conditions is “Walking” and the second set of conditions is “Stationary.” While the examples described herein specifically identify two feature sets associated with a respective pre-classification, in some situations a larger number of feature sets (e.g., 3, 4, 5 or more feature sets) are associated with a respective pre-classification (e.g., where the pre-classification corresponds to a larger number of possible conditions under which the device is operating). For example, in some situations, a pre-classification indicates that a device is in a stable state but indicates that it is uncertain whether the device is associated with a walking user or a stationary user and thus the pre-classification will trigger the generation of a set of “Walking” stable-state features and a set of “Stationary” stable-state features (e.g., as described in greater detail above with reference to Equations 7-13). As another example, in some situations, a pre-classification indicates that a device is in a state transition but indicates that it is uncertain whether the device is associated with a walking user or a stationary user and thus the pre-classification will trigger the generation of a set of “Walking” state-transition features and a set of “Stationary” state-transition features.
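• A sketch of this multi-path triggering logic follows; the 15 percent threshold is one of the example values given above, and the condition labels and function name are illustrative assumptions.

    PRECLASSIFICATION_THRESHOLD = 0.15

    def feature_sets_to_extract(condition_probabilities):
        # condition_probabilities, e.g. {"Walking": 0.4, "Stationary": 0.6};
        # every condition whose probability clears the threshold gets its
        # own feature-extraction and classification path.
        return [condition for condition, p in condition_probabilities.items()
                if p >= PRECLASSIFICATION_THRESHOLD]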
• In some embodiments, pre-classifying the sensor measurements includes identifying the sensor measurements as belonging to one of a plurality of coupling pre-classifications and determining the state of the respective device includes determining (582) a coupling state of the respective device in accordance with a coupling classification of the sensor measurements determined based on the one or more features extracted from the sensor measurements. In other words, at least some of the features that are used to pre-classify the sensor measurements are extracted from the same sensor measurements from which the features that are used to classify the state of the device are extracted. For example, as described in greater detail above with reference to FIGS. 4A-4D, the filtered signals are used by both Pre-Classifier 404 and the feature generators (e.g., Stable-State Feature Generator 406 or State-Transition Feature Generator 412, depending on the pre-classification of the filtered signals).
  • In some embodiments, the coupling state of the respective device includes (583) a plurality of coupling-stable states corresponding to a coupling between the respective device and an entity (e.g., an owner/user of the respective device) associated with the respective device. For example, in FIG. 4B, Markov Model 410 includes at least four coupling states, “On Table,” “In Hand at Side,” “In Hand at Front,” and “In Pocket.” In some embodiments, the coupling state of the respective device includes (584) a plurality of coupling-transition states corresponding to a transition between two of the coupling-stable states. For example, in FIG. 4B, Markov Model 410 includes at least 12 transitions between different coupling states.
• In some embodiments, determining the state of the respective device includes updating (586) a Markov Model, where the Markov Model includes: a plurality of model states corresponding to states of the respective device (e.g., coupling-stable states of the respective device); and a plurality of sets of transition probabilities between the plurality of model states. For example, Markov Model 410 in FIG. 4B includes at least four model states (X1, X2, X3 and X4) and at least sixteen transitions (e.g., T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, and T16). In some embodiments, in accordance with a determination (e.g., during the pre-classification) that the sensor measurements correspond to a first (e.g., coupling-stable) pre-classification, the processing apparatus updates (587) one or more model states in the Markov Model in accordance with a first set of transition probabilities (e.g., a first state transition matrix for the Markov Model corresponding to the first pre-classification). For example, as described above with reference to Equations 3-4, when Pre-Classifier 404 determines that the state associated with the device is in a stable-state, stable-state features are produced and used to generate a measurement probability P(Xi,t|yt) that is combined with a model-predicted probability {tilde over (P)}(Xi,t) (generated based on the first set of transition probabilities) to generate a combined probability of the various model states.
• In some embodiments, in accordance with a determination (e.g., during the pre-classification) that the set of sensor measurements corresponds to a second (e.g., coupling-transition) pre-classification, the processing apparatus updates (588) one or more model states in the Markov Model in accordance with a second set of transition probabilities different from the first set of transition probabilities (e.g., a second state transition matrix for the Markov Model corresponding to the second pre-classification). In some embodiments, the second set of transition probabilities is derived, at least in part, from the features selected by the pre-classifier. For example, as described above with reference to Equation 5, when Pre-Classifier 404 determines that the state associated with the device is in transition, state-transition features are produced and used to generate a measurement-based state transition matrix (e.g., State-Transition Markov Model Transition Matrix 422 in FIG. 4B) for use in place of a stable-state transition matrix (e.g., Stable-State Transition Markov Model Transition Matrix 420 in FIG. 4B). After the measurement-based state transition matrix is generated, a model update is performed using the measurement-based state transition matrix (e.g., in accordance with Equation 6). In other words, in some embodiments, the same model states are modeled in the Markov Model for both a stable-state mode and a state-transition mode, but the transition probabilities of the Markov Model are changed depending on the pre-classification of the sensor measurements.
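• The mode-dependent update can be sketched as a standard hidden-Markov-style predict/update step in which only the transition matrix differs between modes. The two-state matrix below is an illustrative stand-in, not values from the patent, and the function names are assumptions.

    # Predict with the mode's transition matrix, then fuse with the
    # measurement probabilities from the classifiers and renormalize
    # (cf. Equations 3-6). transition_matrix[i][j] is P(state j | state i).
    def markov_update(prior, measurement_probs, transition_matrix):
        n = len(prior)
        predicted = [sum(prior[i] * transition_matrix[i][j] for i in range(n))
                     for j in range(n)]
        unnormalized = [p * m for p, m in zip(predicted, measurement_probs)]
        total = sum(unnormalized)
        return [u / total for u in unnormalized]

    STABLE_TRANSITIONS = [[0.95, 0.05],
                          [0.05, 0.95]]  # stable mode: states tend to persist
    # In transition mode, a measurement-based matrix generated from the
    # state-transition features is used in place of STABLE_TRANSITIONS.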
• In some embodiments, after determining the state of the respective device, the processing apparatus adjusts (590) operation of the respective device in accordance with the determined state of the respective device. For example, the processing apparatus turns a display on when the respective device is removed from a pocket or bag, or turns a display off when the respective device is placed in a pocket or bag. As another example, the processing apparatus recalibrates sensors of the respective device when the respective device is picked up from a table; locks the respective device when the respective device is placed on a table, in a pocket, or in a bag; and/or unlocks the respective device when the respective device is picked up from a table, removed from a pocket, or removed from a bag. As another example, the processing apparatus enables motion tracking features when the respective device is physically associated with the user and/or disables motion tracking features when the respective device is placed on a stationary object such as a table.
  • In some embodiments, the processing apparatus repeats (592) the receiving sensor measurements, pre-classifying sensor measurements, selecting one or more feature types, extracting features and determining the state of the respective device for a plurality of measurement epochs. In some embodiments, a state of the respective device determined in a prior measurement epoch is used (593) as a factor in determining a state in a current measurement epoch. In some embodiments, a frequency of the measurement epochs is variable and is determined (594) at least in part based on a coupling pre-classification of the sensor measurements (e.g., stable-state or transition), a probability of a coupling state of the respective device determined in a prior measurement epoch (e.g., a degree of certainty of a coupling state), and/or a stability of a current coupling state of the respective device (e.g., length of time that a coupling state has remained at high probability). In some embodiments, the processing apparatus increases an amount of time between measurement epochs when there is greater certainty and/or stability in the state associated with the device. In some embodiments, the processing apparatus decreases an amount of time between measurement epochs when there is less certainty and/or stability in the state associated with the device.
• In some embodiments, the plurality of epochs include (596) a first epoch and a second epoch, and during the first epoch: the sensor measurements are determined to correspond to a coupling-stable pre-classification and a first set of feature types corresponding to the coupling-stable pre-classification are selected as the one or more selected feature types. In contrast, during the second epoch the sensor measurements are determined to correspond to a coupling-transition pre-classification and a second set of feature types corresponding to the coupling-transition pre-classification are selected as the one or more selected feature types, where the first set of feature types is different from the second set of feature types. In other words, the processing apparatus selects between different features to generate based upon conditions at the device during successive measurement epochs. Thus, when the device is in a stable-state, stable-state features are generated, and when the state of the device is in transition, state-transition features are generated, thereby conserving energy and processing power.
• It should be understood that the particular order in which the operations in FIGS. 5A-5I have been described is merely exemplary and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein.
  • System Structure
• FIG. 6 is a block diagram of Navigation Sensing Device 102 (herein "Device 102"). Device 102 typically includes one or more processing units (CPUs) 1102, one or more network or other Communications Interfaces 1104 (e.g., a wireless communication interface, as described above with reference to FIG. 1), Memory 1110, Sensors 1168 (e.g., Sensors 220 such as one or more Accelerometers 1170, Magnetometers 1172, Gyroscopes 1174, Beacon Sensors 1176, Inertial Measurement Units 1178, Thermometers, Barometers, and/or Proximity Sensors, etc.), one or more Cameras 1180, and one or more Communication Buses 1109 for interconnecting these components. In some embodiments, Communications Interfaces 1104 include a transmitter for transmitting information, such as accelerometer and magnetometer measurements, and/or the computed navigational state of Device 102, and/or other information to Host 101. Communication Buses 1109 typically include circuitry (sometimes called a chipset) that interconnects and controls communications between system components. Device 102 optionally includes User Interface 1105 comprising Display 1106 (e.g., Display 104 in FIG. 1) and Input Devices 1107 (e.g., keypads, buttons, etc.). Memory 1110 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices; and may include non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 1110 optionally includes one or more storage devices remotely located from the CPU(s) 1102. Memory 1110, or alternately the non-volatile memory device(s) within Memory 1110, comprises a non-transitory computer readable storage medium. In some embodiments, Memory 1110 stores the following programs, modules and data structures, or a subset thereof:
      • Operating System 1112 that includes procedures for handling various basic system services and for performing hardware dependent tasks;
      • Communication Module 1113 that is used for connecting Device 102 to Host 101 and/or Device 106 via Communication Network Interface(s) 1104 (wired or wireless); Communication Module 1113 is optionally adapted for connecting Device 102 to one or more communication networks, such as the Internet, other wide area networks, local area networks, metropolitan area networks, and so on;
      • Sensor Measurements 1114 (e.g., data representing accelerometer measurements, magnetometer measurements, gyroscope measurements, global positioning system measurements, beacon sensor measurements, inertial measurement unit measurements, thermometer measurements, atmospheric pressure measurements, proximity measurements, etc.);
      • data representing Button Presses 1116;
      • State Determination Module 1120 for determining a state associated with Device 102 (e.g., a state of Device 102 such as a navigational state and/or a state of an environment in which Device 102 is currently located), optionally including:
        • Sensor Data Filters 1122 for filtering raw sensor data generated by the sensors (e.g., Sensor Data Filter(s) 402 in FIGS. 4A-4D);
        • Pre-Classifier 1124 for determining, based on the sensor data (e.g., as filtered by Sensor Data Filters 1122), which types of features to generate based on the sensor data (e.g., Pre-Classifier 404 in FIGS. 4A-4D) and, optionally, scheduling features to be extracted;
        • State Transition Module(s) 1126 for generating (e.g., via State-Transition Feature Generator(s) 412 in FIGS. 4A-4D) state-transition features associated with state transitions between different states associated with Device 102 based on the sensor data, and applying (e.g., via State-Transition Classifier(s) 414 in FIGS. 4A-4D) state-transition classifiers to the state-transition features (e.g., for use in updating model transition probabilities of a probabilistic model such as Markov Model 1130);
        • Stable State Module(s) 1128 for generating (e.g., via Stable-State Feature Generator(s) 406 in FIGS. 4A-4D) stable-state features associated with a stable state associated with Device 102 based on the sensor data, and applying (e.g., via Stable-State Classifier(s) 408 in FIGS. 4A-4D) stable-state classifiers to the features (e.g., for use in updating model state probabilities of a probabilistic model such as Markov Model 1130); and
        • Markov Model 1130 for updating probabilities of states associated with Device 102 in accordance with model state probabilities and model transition probabilities (e.g., Markov Model 410 in FIGS. 4A-4D);
      • Navigational State Compensator 1138 for determining a fixed compensation (e.g., a rotational offset) for compensating for drift in the navigational state estimate;
      • Navigation State Estimator 1140 for estimating navigational states of Device 102, optionally including:
        • Kalman Filter Module 1142 that determines the attitude of Device 102, as described in U.S. Pat. Pub. No. 2010/0174506 Equations 8-29, wherein the Kalman filter module includes: a sensor model (e.g., the sensor model described in Equations 28-29 of U.S. Pat. Pub. No. 2010/0174506), a dynamics model (e.g., the dynamics model described in Equations 15-21 of U.S. Pat. Pub. No. 2010/0174506), a predict module that performs the predict phase operations of the Kalman filter, an update module that performs the update operations of the Kalman filter, a state vector of the Kalman filter (e.g., the state vector 2 in Equation 10 of U.S. Pat. Pub. No. 2010/0174506), a mapping, Kalman filter matrices, and attitude estimates (e.g., the attitude estimates as obtained from the quaternion in the state vector 2 in Equation 10 of U.S. Pat. Pub. No. 2010/0174506);
        • Magnetic Field Residual 1144 that is indicative of a difference between a magnetic field detected based on measurements from Magnetometer(s) 1172 and a magnetic field estimated based on Kalman Filter Module 1142;
        • Pedestrian Dead Reckoning Module 1146, for determining a direction of motion of the entity and updating a position of the device in accordance with the direction of motion of the entity, stride length, and stride count (additional details regarding pedestrian dead reckoning can be found in A. Jimenez, F. Seco, C. Prieto, and J. Guevara, “A comparison of Pedestrian Dead-Reckoning algorithms using a low-cost MEMS IMU,” IEEE International Symposium on Intelligent Signal Processing 26-28 Aug. 2009, p. 37-42, which is incorporated herein by reference in its entirety); and
        • data representing Navigational State Estimate 1150 (e.g., an estimate of the position and/or attitude of Device 102).
• optionally, User Interface Module 1152 that receives commands from the user via Input Device(s) 1107 and generates user interface objects in Display(s) 1106 in accordance with the commands and the navigational state of Device 102; User Interface Module 1152 optionally includes one or more of: a cursor position module for determining a cursor position for a cursor to be displayed in a user interface in accordance with changes in a navigational state of the navigation sensing device, an augmented reality module for determining positions of one or more user interface objects to be displayed overlaying a dynamic background such as a camera output in accordance with changes in a navigational state of the navigation sensing device, a virtual world module for determining a portion of a larger user interface (a portion of a virtual world) to be displayed in accordance with changes in a navigational state of the navigation sensing device, a pedestrian dead reckoning module for tracking movement of Device 102 over time, and other application specific user interface modules; and
      • optionally, Gesture Determination Module 1154 for determining gestures in accordance with detected changes in the navigational state of Device 102.
• It is noted that in some of the embodiments described above, Device 102 does not include a Gesture Determination Module 1154, because gesture determination is performed by Host 101. In some embodiments described above, Device 102 also does not include State Determination Module 1120, Navigation State Estimator 1140, and User Interface Module 1152, because Device 102 transmits Sensor Measurements 1114 and, optionally, data representing Button Presses 1116 to a Host 101 at which a navigational state of Device 102 is determined.
  • Each of the above identified elements may be stored in one or more of the previously mentioned memory devices, and each of the above identified programs or modules corresponds to a set of instructions for performing a function described above. The set of instructions can be executed by one or more processors (e.g., CPUs 1102). The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules may be combined or otherwise re-arranged in various embodiments. In some embodiments, Memory 1110 may store a subset of the modules and data structures identified above. Furthermore, Memory 1110 may store additional modules and data structures not described above.
  • Although FIG. 6 shows a “Navigation sensing Device 102,” FIG. 6 is intended more as a functional description of the various features that may be present in a navigation sensing device. In practice, and as recognized by those of ordinary skill in the art, items shown separately could be combined and some items could be separated.
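As a rough illustration of the step performed by Pedestrian Dead Reckoning Module 1146, the sketch below advances a 2-D position estimate by stride length times stride count along the estimated direction of motion. The Jimenez et al. paper cited above compares considerably more elaborate algorithms; every name and parameter here is an illustrative assumption.

```python
import numpy as np

def pdr_update(position_xy, heading_rad, stride_length_m, stride_count):
    """Advance the 2-D position estimate by dead reckoning: the distance
    walked (stride length x number of strides) is projected along the
    estimated direction of motion."""
    distance = stride_length_m * stride_count
    step = distance * np.array([np.cos(heading_rad), np.sin(heading_rad)])
    return position_xy + step

# Example: three 0.7 m strides heading along the +y axis.
new_position = pdr_update(np.zeros(2), np.pi / 2, 0.7, 3)
```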
  • FIG. 7 is a block diagram of Host Computer System 101 (herein “Host 101”). Host 101 typically includes one or more processing units (CPUs) 1202, one or more network or other Communications Interfaces 1204 (e.g., any of the wireless interfaces described above with reference to FIG. 1), Memory 1210, and one or more Communication Buses 1209 for interconnecting these components. In some embodiments, Communication Interfaces 1204 include a receiver for receiving information, such as accelerometer and magnetometer measurements, and/or the computed attitude of a navigation sensing device (e.g., Device 102), and/or other information from Device 102. Communication Buses 1209 optionally include circuitry (sometimes called a chipset) that interconnects and controls communications between system components. Host 101 optionally includes a User Interface 1205 comprising a Display 1206 (e.g., Display 104 in FIG. 1) and Input Devices 1207 (e.g., a navigation sensing device such as a multi-dimensional pointer, a mouse, a keyboard, a trackpad, a trackball, a keypad, buttons, etc.). Memory 1210 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices; and may include non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 1210 optionally includes one or more storage devices remotely located from the CPU(s) 1202. Memory 1210, or alternately the non-volatile memory device(s) within Memory 1210, comprises a non-transitory computer readable storage medium. In some embodiments, Memory 1210 stores the following programs, modules and data structures, or a subset thereof:
      • Operating System 1212 that includes procedures for handling various basic system services and for performing hardware dependent tasks;
      • Communication Module 1213 that is used for connecting Host 101 to Device 102 and/or Device 106, and/or other devices or systems via Communication Network Interface(s) 1204 (wired or wireless), and for connecting Host 101 to one or more communication networks, such as the Internet, other wide area networks, local area networks, metropolitan area networks, and so on;
      • Sensor Measurements 1214 (e.g., data representing accelerometer measurements, magnetometer measurements, gyroscope measurements, global positioning system measurements, beacon sensor measurements, inertial measurement unit measurements, thermometer measurements, atmospheric pressure measurements, proximity measurements, etc.);
      • data representing Button Presses 1216;
      • State Determination Module 1220 for determining a state associated with Device 102 (e.g., a state of Device 102 such as a navigational state and/or a state of an environment in which Device 102 is currently located), optionally including:
        • Sensor Data Filters 1222 for filtering raw sensor data generated by the sensors of Device 102 (e.g., Sensor Data Filter(s) 402 in FIGS. 4A-4D);
        • Pre-Classifier 1224 for determining, based on the sensor data (e.g., as filtered by Sensor Data Filters 1222), which types of features to generate based on the sensor data (e.g., Pre-Classifier 404 in FIGS. 4A-4D) and, optionally, scheduling features to be extracted;
        • State Transition Module(s) 1226 for generating (e.g., via State-Transition Feature Generator(s) 412 in FIGS. 4A-4D) state-transition features associated with state transitions between different states associated with Device 102 based on the sensor data, and applying (e.g., via State-Transition Classifier(s) 414 in FIGS. 4A-4D) state-transition classifiers to the state-transition features (e.g., for use in updating model transition probabilities of a probabilistic model such as Markov Model 1230);
        • Stable State Module(s) 1228 for generating (e.g., via Stable-State Feature Generator(s) 406 in FIGS. 4A-4D) stable-state features associated with a stable state associated with Device 102 based on the sensor data, and applying (e.g., via Stable-State Classifier(s) 408 in FIGS. 4A-4D) stable-state classifiers to the features (e.g., for use in updating model state probabilities of a probabilistic model such as Markov Model 1230); and
        • Markov Model 1230 for updating probabilities of states associated with Device 102 in accordance with model state probabilities and model transition probabilities (e.g., Markov Model 410 in FIGS. 4A-4D; a minimal sketch of this probabilistic update follows this module list);
      • Navigational State Compensator 1238 for determining a fixed compensation (e.g., a rotational offset) for compensating for drift in the navigational state estimate of Device 102;
      • Navigational State Estimator 1240 for estimating navigational states of Device 102, optionally including:
        • Kalman Filter Module 1242 that determines the attitude of Device 102, as described in Equations 8-29 of U.S. Pat. Pub. No. 2010/0174506 (see the predict/update sketch above), wherein the Kalman filter module includes: a sensor model (e.g., the sensor model described in Equations 28-29 of U.S. Pat. Pub. No. 2010/0174506), a dynamics model (e.g., the dynamics model described in Equations 15-21 of U.S. Pat. Pub. No. 2010/0174506), a predict module that performs the predict-phase operations of the Kalman filter, an update module that performs the update-phase operations of the Kalman filter, a state vector of the Kalman filter (e.g., the state vector in Equation 10 of U.S. Pat. Pub. No. 2010/0174506), a mapping, Kalman filter matrices, and attitude estimates (e.g., the attitude estimates obtained from the quaternion in the state vector in Equation 10 of U.S. Pat. Pub. No. 2010/0174506);
        • Magnetic Field Residual 1244 that is indicative of a difference between a magnetic field detected based on measurements from Magnetometer(s) 1272 and a magnetic field estimated based on Kalman Filter Module 1242;
        • Pedestrian Dead Reckoning Module 1246, for determining a direction of motion of the entity and updating a position of the device in accordance with the direction of motion of the entity, stride length, and stride count; and
        • data representing Navigational State Estimate 1250 (e.g., an estimate of the position and/or attitude of Device 102);
      • optionally, User Interface Module 1252 that receives commands from the user via Input Device(s) 1207 and generates user interface objects on Display(s) 1206 in accordance with the commands and the navigational state of Device 102. User Interface Module 1252 optionally includes one or more of: a cursor position module for determining a cursor position for a cursor to be displayed in a user interface in accordance with changes in a navigational state of the navigation sensing device; an augmented reality module for determining positions of one or more user interface objects to be displayed overlaying a dynamic background, such as a camera output, in accordance with changes in a navigational state of the navigation sensing device; a virtual world module for determining a portion of a larger user interface (a portion of a virtual world) to be displayed in accordance with changes in a navigational state of the navigation sensing device; a pedestrian dead reckoning module for tracking movement of Device 102 over time; and other application-specific user interface modules; and
      • optionally, Gesture Determination Module 1254 for determining gestures in accordance with detected changes in the navigational state of Device 102.
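The interplay of State Transition Module(s) 1226, Stable State Module(s) 1228, and Markov Model 1230 can be illustrated with one step of a discrete probabilistic state update: transition probabilities (informed by the state-transition classifiers) propagate the prior state distribution, and the stable-state classifier outputs reweight it. This is only a generic sketch under those assumptions, not the patent's specific model; the state labels, matrix values, and function name are hypothetical.

```python
import numpy as np

STATES = ["in_hand", "in_pocket", "on_table"]  # hypothetical device states

def markov_state_update(prior, transition_matrix, state_likelihoods):
    """One step of the state-probability update: propagate the prior
    through the transition matrix (rows: from-state, columns: to-state),
    reweight by the stable-state classifier likelihoods, and renormalize."""
    predicted = transition_matrix.T @ prior   # model transition probabilities
    posterior = predicted * state_likelihoods # stable-state evidence
    return posterior / posterior.sum()

# Example: a mostly "on_table" prior, with stable-state classifier
# evidence now favoring "in_hand".
prior = np.array([0.1, 0.1, 0.8])
T = np.array([[0.90, 0.05, 0.05],
              [0.05, 0.90, 0.05],
              [0.30, 0.05, 0.65]])
likelihoods = np.array([0.7, 0.1, 0.2])
posterior = markov_state_update(prior, T, likelihoods)
```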
  • It is noted that in some of the embodiments described above, Host 101 does not store data representing Sensor Measurements 1214, because sensor measurements of Device 102 are processed at Device 102, which sends data representing Navigational State Estimate 1250 to Host 101. In other embodiments, Device 102 sends data representing Sensor Measurements 1214 to Host 101, in which case the modules for processing that data are present in Host 101.
  • Each of the above identified elements may be stored in one or more of the previously mentioned memory devices, and each of the above identified programs or modules corresponds to a set of instructions for performing a function described above. The set of instructions can be executed by one or more processors (e.g., CPUs 1202). The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules may be combined or otherwise re-arranged in various embodiments. The actual number of processors and software modules used to implement Host 101 and how features are allocated among them will vary from one implementation to another. In some embodiments, Memory 1210 may store a subset of the modules and data structures identified above. Furthermore, Memory 1210 may store additional modules and data structures not described above.
  • Note that method 500 described above is optionally governed by instructions that are stored in a non-transitory computer readable storage medium and that are executed by one or more processors of Device 102 or Host 101. As noted above, in some embodiments these methods may be performed in part on Device 102 and in part on Host 101, or on a single integrated system that performs all the necessary operations. Each of the operations shown in FIGS. 5A-5I optionally corresponds to instructions stored in a computer memory or computer readable storage medium of Device 102 or Host 101. The computer readable storage medium optionally includes a magnetic or optical disk storage device, solid-state storage devices such as flash memory, or other non-volatile memory device or devices. In some embodiments, the computer readable instructions stored on the computer readable storage medium are in source code, assembly language code, object code, or another instruction format that is interpreted or executed by one or more processors.
  • The foregoing description, for purposes of explanation, has been presented with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, and thereby to enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated.

Claims (1)

What is claimed is:
1. A method comprising:
at a processing apparatus having one or more processors and memory storing one or more programs that, when executed by the one or more processors, cause the respective processing apparatus to perform the method:
receiving sensor measurements generated by one or more sensors of one or more devices;
pre-classifying the sensor measurements as belonging to one of a plurality of pre-classifications;
selecting one or more feature types to extract from the sensor measurements based at least in part on the pre-classification of the sensor measurements;
extracting features of the one or more selected feature types from the sensor measurements; and
determining a state of a respective device of the one or more devices in accordance with a classification of the sensor measurements determined based on the one or more features extracted from the sensor measurements.
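Claim 1 admits a compact procedural reading: pre-classify a window of sensor measurements, use the pre-classification to select which (possibly expensive) feature types to compute, extract only those, and classify the result into a device state. The sketch below is one hypothetical rendering of that flow; the feature types, pre-classification labels, and classifier interfaces are all assumptions, not the claimed implementation.

```python
import numpy as np

# Hypothetical feature extractors keyed by feature type.
FEATURE_EXTRACTORS = {
    "mean":     lambda w: w.mean(axis=0),
    "variance": lambda w: w.var(axis=0),
    "energy":   lambda w: (w ** 2).sum(axis=0),
}

# Hypothetical mapping from pre-classification to selected feature types.
FEATURES_BY_PRECLASS = {
    "stable":     ["mean", "variance"],
    "transition": ["variance", "energy"],
}

def determine_state(window, pre_classify, classify):
    """Pipeline of claim 1: pre-classify the measurement window, select
    feature types based on the pre-classification, extract those features,
    and determine the device state from the classification result."""
    pre_class = pre_classify(window)            # one of the pre-classifications
    selected = FEATURES_BY_PRECLASS[pre_class]  # select feature types
    features = np.concatenate(
        [np.atleast_1d(FEATURE_EXTRACTORS[t](window)) for t in selected]
    )
    return classify(pre_class, features)        # state of the device

# Toy usage with stand-in classifiers.
window = np.random.default_rng(0).normal(size=(50, 3))  # 50 samples, 3 axes
pre_classify = lambda w: "stable" if w.var() < 2.0 else "transition"
classify = lambda pc, f: "at_rest" if pc == "stable" else "moving"
state = determine_state(window, pre_classify, classify)
```

Computing only the features selected for the current pre-classification is what lets the stable-state and transition paths use different, appropriately sized feature sets.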
US14/321,707 2012-11-07 2014-07-01 Selecting Feature Types to Extract Based on Pre-Classification of Sensor Measurements Abandoned US20150012248A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201261723744P 2012-11-07 2012-11-07
US201361794032P 2013-03-15 2013-03-15
US13/939,126 US8775128B2 (en) 2012-11-07 2013-07-10 Selecting feature types to extract based on pre-classification of sensor measurements
US14/321,707 US20150012248A1 (en) 2012-11-07 2014-07-01 Selecting Feature Types to Extract Based on Pre-Classification of Sensor Measurements

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/939,126 Continuation US8775128B2 (en) 2012-11-07 2013-07-10 Selecting feature types to extract based on pre-classification of sensor measurements

Publications (1)

Publication Number Publication Date
US20150012248A1 (en) 2015-01-08

Family

ID=50623153

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/939,126 Active US8775128B2 (en) 2012-11-07 2013-07-10 Selecting feature types to extract based on pre-classification of sensor measurements
US14/321,707 Abandoned US20150012248A1 (en) 2012-11-07 2014-07-01 Selecting Feature Types to Extract Based on Pre-Classification of Sensor Measurements

Country Status (2)

Country Link
US (2) US8775128B2 (en)
WO (1) WO2014074268A1 (en)

Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10145707B2 (en) * 2011-05-25 2018-12-04 CSR Technology Holdings Inc. Hierarchical context detection method to determine location of a mobile device on a person's body
US20150065919A1 (en) * 2013-08-27 2015-03-05 Jose Antonio Cuevas Posture training device
US9721212B2 (en) * 2014-06-04 2017-08-01 Qualcomm Incorporated Efficient on-device binary analysis for auto-generated behavioral models
US10371528B2 (en) * 2014-07-03 2019-08-06 Texas Instruments Incorporated Pedestrian navigation devices and methods
RU2622880C2 (en) * 2014-08-22 2017-06-20 Нокиа Текнолоджиз Ой Sensor information processing
US20160077166A1 (en) * 2014-09-12 2016-03-17 InvenSense, Incorporated Systems and methods for orientation prediction
US9763603B2 (en) * 2014-10-21 2017-09-19 Kenneth Lawrence Rosenblood Posture improvement device, system, and method
US11191453B2 (en) * 2014-10-21 2021-12-07 Kenneth Lawrence Rosenblood Posture improvement device, system, and method
US10064572B2 (en) * 2014-10-21 2018-09-04 Kenneth Lawrence Rosenblood Posture and deep breathing improvement device, system, and method
US11484261B2 (en) * 2014-12-19 2022-11-01 Koninklijke Philips N.V. Dynamic wearable device behavior based on schedule detection
US10271115B2 (en) * 2015-04-08 2019-04-23 Itt Manufacturing Enterprises Llc. Nodal dynamic data acquisition and dissemination
JP6940483B2 (en) 2015-08-31 2021-09-29 マシモ・コーポレイション Wireless patient monitoring system and method
US10113877B1 (en) * 2015-09-11 2018-10-30 Philip Raymond Schaefer System and method for providing directional information
WO2018071715A1 (en) 2016-10-13 2018-04-19 Masimo Corporation Systems and methods for patient fall detection
US10671925B2 (en) 2016-12-28 2020-06-02 Intel Corporation Cloud-assisted perceptual computing analytics
US10878342B2 (en) * 2017-03-30 2020-12-29 Intel Corporation Cloud assisted machine learning
CN109635617A (en) * 2017-10-09 2019-04-16 富士通株式会社 Recognition methods, device and the electronic equipment of action state
USD917550S1 (en) 2018-10-11 2021-04-27 Masimo Corporation Display screen or portion thereof with a graphical user interface
USD916135S1 (en) 2018-10-11 2021-04-13 Masimo Corporation Display screen or portion thereof with a graphical user interface
US11406286B2 (en) 2018-10-11 2022-08-09 Masimo Corporation Patient monitoring device with improved user interface
USD917564S1 (en) 2018-10-11 2021-04-27 Masimo Corporation Display screen or portion thereof with graphical user interface
USD998630S1 (en) 2018-10-11 2023-09-12 Masimo Corporation Display screen or portion thereof with a graphical user interface
USD998631S1 (en) 2018-10-11 2023-09-12 Masimo Corporation Display screen or portion thereof with a graphical user interface
USD999246S1 (en) 2018-10-11 2023-09-19 Masimo Corporation Display screen or portion thereof with a graphical user interface
US20200150772A1 (en) * 2018-11-09 2020-05-14 Google Llc Sensing Hand Gestures Using Optical Sensors
US11482047B2 (en) * 2020-01-06 2022-10-25 Kaia Health Software GmbH ML model arrangement and method for evaluating motion patterns
USD974193S1 (en) 2020-07-27 2023-01-03 Masimo Corporation Wearable temperature measurement device
USD980091S1 (en) 2020-07-27 2023-03-07 Masimo Corporation Wearable temperature measurement device
US11821732B2 (en) * 2021-01-07 2023-11-21 Stmicroelectronics S.R.L. Electronic device including bag detection
US20220358670A1 (en) * 2021-05-04 2022-11-10 Varjo Technologies Oy Tracking method for image generation, a computer program product and a computer system
USD1000975S1 (en) 2021-09-22 2023-10-10 Masimo Corporation Wearable temperature measurement device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1366712A4 (en) * 2001-03-06 2006-05-31 Microstone Co Ltd Body motion detector
US20050033200A1 (en) * 2003-08-05 2005-02-10 Soehren Wayne A. Human motion identification and measurement system and method
US20050172311A1 (en) * 2004-01-31 2005-08-04 Nokia Corporation Terminal and associated method and computer program product for monitoring at least one activity of a user
WO2006033104A1 (en) * 2004-09-22 2006-03-30 Shalon Ventures Research, Llc Systems and methods for monitoring and modifying behavior
WO2010028181A1 (en) * 2008-09-03 2010-03-11 Snif Labs Activity state classification
WO2010090867A2 (en) * 2009-01-21 2010-08-12 SwimSense, LLC Multi-state performance monitoring system
WO2011092549A1 (en) * 2010-01-27 2011-08-04 Nokia Corporation Method and apparatus for assigning a feature class value
US8756173B2 (en) * 2011-01-19 2014-06-17 Qualcomm Incorporated Machine learning of known or unknown motion states with sensor fusion

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100174506A1 (en) * 2009-01-07 2010-07-08 Joseph Benjamin E System and Method for Determining an Attitude of a Device Undergoing Dynamic Acceleration Using a Kalman Filter

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10353495B2 (en) 2010-08-20 2019-07-16 Knowles Electronics, Llc Personalized operation of a mobile device using sensor signatures
US9772815B1 (en) 2013-11-14 2017-09-26 Knowles Electronics, Llc Personalized operation of a mobile device using acoustic and non-acoustic information
US9781106B1 (en) 2013-11-20 2017-10-03 Knowles Electronics, Llc Method for modeling user possession of mobile device for user authentication framework
US9500739B2 (en) 2014-03-28 2016-11-22 Knowles Electronics, Llc Estimating and tracking multiple attributes of multiple objects from multi-sensor data
US9807725B1 (en) 2014-04-10 2017-10-31 Knowles Electronics, Llc Determining a spatial relationship between different user contexts
US20160062470A1 (en) * 2014-09-02 2016-03-03 Stmicroelectronics International N.V. Instrument interface for reducing effects of erratic motion
US9880631B2 (en) * 2014-09-02 2018-01-30 Stmicroelectronics International N.V. Instrument interface for reducing effects of erratic motion
US9668048B2 (en) 2015-01-30 2017-05-30 Knowles Electronics, Llc Contextual switching of microphones
US20180223626A1 (en) * 2017-02-09 2018-08-09 Baker Hughes Incorporated Interventionless Pressure Operated Sliding Sleeve with Backup Operation with Intervention

Also Published As

Publication number Publication date
US20140129178A1 (en) 2014-05-08
WO2014074268A1 (en) 2014-05-15
US8775128B2 (en) 2014-07-08

Similar Documents

Publication Publication Date Title
US8775128B2 (en) Selecting feature types to extract based on pre-classification of sensor measurements
US9726498B2 (en) Combining monitoring sensor measurements and system signals to determine device context
CN108596976B (en) Method, device and equipment for relocating camera attitude tracking process and storage medium
KR102252269B1 (en) Swimming analysis system and method
WO2017215024A1 (en) Pedestrian navigation device and method based on novel multi-sensor fusion technology
US9804679B2 (en) Touchless user interface navigation using gestures
Henpraserttae et al. Accurate activity recognition using a mobile phone regardless of device orientation and location
EP2699983B1 (en) Methods and apparatuses for facilitating gesture recognition
CN106575150B (en) Method for recognizing gestures using motion data and wearable computing device
EP2447809B1 (en) User device and method of recognizing user context
US10989916B2 (en) Pose prediction with recurrent neural networks
US9235278B1 (en) Machine-learning based tap detection
US10540597B1 (en) Method and apparatus for recognition of sensor data patterns
He et al. A low power fall sensing technology based on FD-CNN
US20140278208A1 (en) Feature extraction and classification to determine one or more activities from sensed motion signals
US10748075B2 (en) Method and apparatus for energy efficient probabilistic context awareness of a mobile or wearable device user by switching between a single sensor and multiple sensors
Thiemjarus et al. A study on instance-based learning with reduced training prototypes for device-context-independent activity recognition on a mobile phone
US9195309B2 (en) Method and apparatus for classifying multiple device states
WO2014039552A1 (en) System and method for estimating the direction of motion of an entity associated with a device
US20160189534A1 (en) Wearable system and method for balancing recognition accuracy and power consumption
Windau et al. Situation awareness via sensor-equipped eyeglasses
KR20190109654A (en) Electronic device and method for measuring heart rate
US10551195B2 (en) Portable device with improved sensor position change detection
CN112527104A (en) Method, device and equipment for determining parameters and storage medium
EP3489802B1 (en) Information processing apparatus, information processing method, and program

Legal Events

Date Code Title Description
AS Assignment

Owner name: SENSOR PLATFORMS, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MEDUNA, DEBORAH;WAITE, TOM;RAJNARAYAN, DEV;SIGNING DATES FROM 20130703 TO 20130709;REEL/FRAME:034089/0748

AS Assignment

Owner name: SENSOR PLATFORMS, INC., CALIFORNIA

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE ASSIGNOR'S NAME PREVIOUSLY RECORDED ON REEL 034089 FRAME 0748. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT OF ASSIGNOR'S INTEREST;ASSIGNORS:VITUS (MEDUNA), DEBORAH;WAITE, TOM;RAJNARAYAN, DEV;SIGNING DATES FROM 20130703 TO 20130709;REEL/FRAME:038451/0482

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION