EP2756658B1 - Detecting that a mobile device is riding with a vehicle - Google Patents
- Publication number
- EP2756658B1 (application EP12766550.3A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- state
- motion
- states
- vehicular
- motion states
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01P—MEASURING LINEAR OR ANGULAR SPEED, ACCELERATION, DECELERATION, OR SHOCK; INDICATING PRESENCE, ABSENCE, OR DIRECTION, OF MOVEMENT
- G01P13/00—Indicating or recording presence, absence, or direction, of movement
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/02—Services making use of location information
- H04W4/025—Services making use of location information using location based information parameters
- H04W4/027—Services making use of location information using location based information parameters using movement velocity, acceleration information
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72448—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
- H04M1/72454—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M2250/00—Details of telephonic subscriber devices
- H04M2250/12—Details of telephonic subscriber devices including a sensor for measuring a physical value, e.g. temperature or motion
Description
- Mobile devices are widespread in today's society. For example, people use cellular phones, smart phones, personal digital assistants, laptop computers, pagers, tablet computers, etc. to send and receive data wirelessly and perform other operations from countless locations. Moreover, advancements in mobile device technology have greatly increased the versatility of today's devices, enabling users to perform a wide range of tasks from a single, portable device that conventionally required either multiple devices or larger, non-portable equipment.
- Being able to robustly detect that a mobile device user is riding in a moving vehicle such as a car, bus, truck, etc., can be leveraged by a variety of applications for enhanced functionality. Thus, systems, methods, and devices for detecting that a mobile device is riding in a vehicle are desired.
- Further attention is drawn to document WO 2011/083572 A1, which refers to a movement state estimation device with a sensor unit for detecting acceleration of a terminal in triaxial directions; a storage unit for storing a movement state estimation model including a movement state of a user of the terminal; a movement state estimation unit for estimating a certainty factor for each movement state; a terminal state estimation unit for calculating a direction of the terminal from the acceleration information; a calculation unit for calculating a reliability degree for each movement state; and a correction unit for correcting the certainty factor for each movement state in accordance with the reliability degree and obtaining a corrected movement state.
- The document EP 1 732 300 A2 refers to an information processing device connected via a network to another information processing device that recognizes action of a user on a basis of an output of a sensor. The device comprises a table DB configured to manage correspondence relation between each action recognizable in the other information processing device and communication tools, and a communication means selection processing unit configured to select a communication tool corresponding to the action of the user of the other information processing device, the action of the user being indicated by the action information, as a tool used for communication with the user of the other information processing device on a basis of the correspondence relation managed by the table DB, and to execute an application that manages the selected communication tool.
- The document EP 1 104 143 A2 refers to a mobile communications device which comprises means for determining the orientation of said device and trained signal processing means for processing an output of the determining means.
- The document US 2006/167647 A1 refers to methods and systems that automatically determine the likelihood that a device is inside or outside of a structure or building. The system uses one or more sensors to detect ambient conditions and make the determination. The inference can be used to save power or suppress services from certain devices, which are irrelevant, cannot be used effectively, or do not function under certain circumstances. In support thereof, the system includes one or more context sensors that measure parameters associated probabilistically with the context of a device. A context computing component considers one or more context sensors and facilitates determination of ideal actions, policies, and situations associated with the device.
- In accordance with the present invention, a method as set forth in claim 1, and an apparatus as set forth in claim 14, are provided. Further embodiments of the invention are claimed in the dependent claims. The embodiments and/or examples of the following description which are not covered by the appended claims are considered as not being part of the present invention. Embodiments of the invention may solve these aforementioned problems and other problems according to the disclosures provided herein.
- Systems and methods herein enable a mobile device to detect that a user is traveling in association with a vehicle based at least on motion data. In some embodiments, accelerometer data is used. Motion data is leveraged in combination with various observations regarding vehicular movement to determine whether or not a mobile device is located in or on the vehicle. For instance, before entering the state of vehicular movement, it can be determined that the user is first in a walking state (e.g., walking to the car, bus, etc., and entering it). Likewise, after exiting the state of vehicular movement, the user re-enters the walking state (e.g., after stepping out of the car, bus, etc., the user again begins walking). Further, it can be determined that when the user is in the walking state, the accelerometer signals appear different from any accelerometer signals seen in the vehicular movement state.
- A state machine may be created that captures the above observations, and a Hidden Markov Model (HMM) may be built around the state machine to improve the accuracy of detecting vehicular movement. Various techniques for state machine and HMM construction are provided in further detail below. Further, systems and methods for using such a state machine and HMM are provided below.
- FIG. 1 is a block diagram of a system for mobile device motion state classification according to some embodiments.
- FIGS. 2A, 2B, and 3 are block diagrams of example device position classifiers according to some embodiments.
- FIGS. 4-5 illustrate example state diagrams for motion state classification according to some embodiments.
- FIGS. 6-7 illustrate example extended state machines for motion state classification according to some embodiments.
- FIG. 8 is a block diagram of another example device position classifier according to some embodiments.
- FIG. 9 illustrates example components of a computing device of some embodiments.
- FIG. 10 is a block flow diagram illustrating a process of classifying the motion state of a mobile device according to some embodiments.
- The term "likelihood" may refer to a probability or chance of an event occurring, and may be expressed as a probability value, fraction, or percentage. The term "likelihood" as used herein may include the concept of a "probability" as used in mathematics and by persons of ordinary skill in the arts of statistical analysis and state machine modeling and implementations.
- The term "present state" or "present motion state" may refer to the current state among a plurality of states in, e.g. a state machine enumerating a series of motion states, e.g. a walk state, run state, autoStop state, autoMove state, etc. Thus, at different moments in time, the present motion state may change or may stay the same.
- The term "decision" or "classifier decision" may refer to a determination of what the present state or present motion state is of the apparatus, e.g. mobile device, utilizing the plurality of motion states.
- Techniques are described herein for classifying the motion and/or position state of a mobile device, e.g., to determine that a mobile device is located in a vehicle (e.g., car, bus, train, etc.) using a hidden Markov model (HMM) and associated state machine. In some cases this detection can be done using a satellite positioning system (SPS) such as the Global Positioning System (GPS) or the like, by leveraging data relating to the instantaneous speed, the coordinate (e.g., lat/long) trace associated with the device, and so on. However, the current drain associated with running a SPS receiver may make its use prohibitive for vehicular movement detection. Furthermore, SPS-based methods are prone to error in some scenarios. For instance, if the user is moving in traffic at slow speed, SPS information alone may not be sufficient to distinguish walking, running, bicycle riding, skateboarding, rollerblading, skiing, etc., from vehicular transport. Additionally, there are a number of cases where SPS-based detection cannot be performed. For instance, many devices may lack an SPS receiver, and for those devices that include an SPS receiver, the receiver may not be functional when a user has disabled satellite positioning, is in an environment such as an urban canyon in which satellite positioning accuracy and/or reception is poor, is driving in a covered area such as a tunnel, or is traveling in a vehicle in which accurate satellite positioning fixes cannot be obtained, such as on a bus, and/or in other scenarios. Thus, it can be desirable to use other sensors either in addition to, or in place of, SPS to detect vehicular movement.
- Embodiments herein may be implemented for any sort of vehicular or moving state, not just for automobiles; automobiles as described herein are merely one example. For instance, embodiments may also include detecting movement on a bicycle, while riding a horse, while driving a boat, etc. The exact data and methods used to detect accelerations may be implemented differently than for automobiles, but the principles described herein remain the same.
- Some embodiments may determine that a mobile device is stopped based on data from one or more sensors associated with the mobile device. These sensors may include an accelerometer, a GPS receiver, or other sensors providing the types of data mentioned in this disclosure. Embodiments may then disambiguate whether the mobile device is stopped in a vehicular motion state or a pedestrian motion state based at least in part on a previous motion state of the mobile device. In some embodiments, disambiguation may include determining that the mobile device is in the vehicular motion state if an immediately previous state of the mobile device was the vehicular motion state. The disambiguation may be based on probabilistic reasoning from a Hidden Markov Model or other similar stochastic model. Embodiments may then operate an application of the mobile device based on the disambiguation.
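As a minimal illustration of this previous-state disambiguation (the state names and the rule below are illustrative assumptions for this sketch, not the claimed implementation), a detected stop might be resolved as follows:

```python
# Hedged sketch: disambiguating a "stopped" reading using the previous
# motion state. State names and the rule are illustrative assumptions.

VEHICULAR_STATES = {"autoStop", "autoMove"}

def disambiguate_stopped(previous_state):
    """Classify a detected stop as vehicular or pedestrian.

    A stop that immediately follows a vehicular state is treated as a
    vehicular stop (autoStop); otherwise it is treated as a pedestrian
    stop (e.g., sit).
    """
    if previous_state in VEHICULAR_STATES:
        return "autoStop"  # e.g., halted at a traffic light
    return "sit"           # pedestrian stop outside a vehicle

print(disambiguate_stopped("autoMove"))  # autoStop
print(disambiguate_stopped("walk"))      # sit
```

An application (e.g., a driving-mode UI) could then be enabled or suppressed based on the returned state.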
- One additional sensor that may be used to classify the motion and/or position state of a mobile device, for example to determine whether the mobile device is riding in a vehicle, is an accelerometer. Accelerometer signals may be used in detecting vehicular movement, but it can be difficult to do this based on instantaneous accelerometer information. For instance, if a user is driving on a smooth straight road at constant speed, or is stopped at a traffic light, the accelerometer signals may look very much like they would if the user were sitting in a stationary chair or other location external to a vehicle.
- Some embodiments may receive data from one or more sensors, either at the mobile device itself or from sensors that provide the data to the mobile device. Embodiments may then determine a sequence of motion states of the mobile device based on the received data, and determine that the mobile device is in a vehicular state based at least in part on the determined sequence. In some embodiments, the vehicular state comprises a stopped state. In some embodiments, determining that the mobile device is in the stopped vehicular state may include selecting the stopped vehicular state from a plurality of states, wherein the stopped vehicular state is selected only when a state immediately preceding the stopped vehicular state is in a predetermined subset of the plurality of states. In other embodiments, determining that the mobile device is in the vehicular state may include determining that the mobile device has entered a vehicular motion state during the determined sequence, wherein entry into the vehicular motion state is limited to when a state immediately preceding the entry comprises a walk state. In some embodiments, the stopped vehicular state may be selected when the immediately preceding state is a vehicular state in which the mobile device is moving. In some embodiments, the one or more sensors may comprise only accelerometers.
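The sequence-gated determination described above can be sketched as a predecessor check. The specific predecessor sets below paraphrase the text (entry into a vehicular state only from walk, and the stopped vehicular state only after a vehicular state) and are assumptions for illustration, not the claimed rule set:

```python
# Illustrative sketch of sequence-gated vehicular detection. The
# predecessor table is an assumption paraphrasing the description.
ALLOWED_PREDECESSORS = {
    "autoStop": {"walk", "autoMove", "autoStop"},
    "autoMove": {"autoStop", "autoMove"},
}

def in_vehicle(sequence):
    """Return True if the motion-state sequence legally reaches a vehicular state."""
    prev = None
    for state in sequence:
        allowed = ALLOWED_PREDECESSORS.get(state)
        if allowed is not None and prev not in allowed:
            return False  # illegal entry, e.g., sit -> autoStop
        prev = state
    return prev in ALLOWED_PREDECESSORS

print(in_vehicle(["walk", "autoStop", "autoMove"]))  # True
print(in_vehicle(["sit", "autoStop"]))               # False
```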
- FIG. 1 illustrates a system for motion state classification as described herein. One or more motion detectors 12 associated with a mobile device provide raw motion data to a motion state classifier module 18. The motion detectors may be any suitable device for detecting motion, and may provide any type of motion data. In some embodiments, the one or more motion detectors 12 are one or more accelerometers providing accelerometer data. In some embodiments, optical sensors, infrared sensors, ultrasonic wave sensors, and the like may be used as a motion detector or to augment motion detection of another sensor. Some embodiments may be implemented in a camera equipped with one or more gyros, the camera adapted to detect motion via accelerometer data or other kinds of motion data. In some embodiments, orientation sensors, described below, may be used as motion detectors. An orientation sensor may include an accelerometer and/or gyro, which may enable a device to properly orient itself in three dimensions as well as detect changes in orientation and position. In some embodiments, a camera or image input may be used as a motion detector. For example, a video, series of images, or other input derived from a camera may be used to determine motion of a device coupled to the camera.
- The motion state classifier module 18 utilizes the motion data in combination with an associated motion state machine 16 to infer the motion state (e.g., walk, stand, sit, in automobile and stopped, in automobile and moving, etc.) of the mobile device. The motion state classifier module 18 can be configured to output an estimated state at regular intervals (e.g., every 1 s, etc.), continuously, and/or in any other manner. Further, the motion state classifier module 18 can output a discrete state and/or a set of probabilities for respective possible states.
- Further, while various examples described herein relate to a system of classifying the state of a mobile device based on accelerometer data alone, the motion state classifier module 18 may also optionally utilize data from one or more additional device sensors 14, such as a GPS receiver, Wi-Fi receiver, an audio input device (e.g., a microphone, etc.), or the like. For instance, GPS and/or Wi-Fi data can be utilized in combination with accelerometer data to provide additional accuracy to position calculations. Additionally or alternatively, the motion state classifier module 18 can be trained to identify audio signatures associated with one or more motion states, which may be utilized as a factor in estimating the motion state of the device. Audio recognition performed in this manner can be based on a global set of audio signatures or a specific set of audio patterns generated from audio input obtained by the device. Thus, the device may be uniquely trained for audio patterns in one or more environments. In some embodiments, the motion state classifier module 18 can be trained to identify other signatures, such as optical, ultrasonic, or microwave signatures. Detecting changes in these mediums may also be utilized as a factor in estimating the motion state of the device.
- Motion state machine 16 may provide a definition of possible states (e.g., walk, stand, sit, in automobile and stopped, in automobile and moving, etc.) of the mobile device to motion state classifier module 18. Additional examples of possible states are described with reference to FIGS. 4, 5, 6, and 7. Classifier module 18 may then keep track of the current state, based on data from motion detector(s) 12 and any additional data from sensor(s) 14. Module 18 may also determine a change in state based on motion data from motion detector(s) 12 and any additional sensor(s) 14, using the transitions and types of states provided by state machine 16.
- The motion state classifier module 18 may infer a drive motion state associated with a mobile device using a Bayesian classifier for activity recognition. For instance, a Motion State Device Position (MSDP) classifier as illustrated in FIG. 2A may be utilized to obtain a state classification decision, or a present state, corresponding to a mobile device. As shown in FIG. 2A, features extracted from accelerometer data using feature extraction module 202 are processed using a set of statistical models 206 to compute, using module 204, a set of likelihoods for various device states. Example features extracted at module 202 may include acceleration values over time, standard deviation values (sa), ratios of the mean of the norm to the norm of the mean of the accelerometer values (rm), pitch values (pitch), and rotation values (phi). Example statistical models stored in module 206 may include a joint Gaussian Markov Model (GMM) over each motion state-device position combination, or other types of stochastic Markov models suitable to those with ordinary skill in the art.
- These models may apply the data from the feature extraction module to compute, at module 204, a set of likelihoods reflecting probabilities of what state the user is in. The output set of likelihoods may be a vector of probabilities encompassing each of the possible device states, where the individual probabilities sum to 1. Additionally or alternatively, the output set of likelihoods may include discrete values corresponding to the most likely states at given periods of time. The likelihoods are then passed to a filtering block 208, which utilizes a latency factor L to stabilize the determined device states. For instance, the filtering block 208 can set a state at a given time to the most frequently seen state over the past L seconds. Additionally or alternatively, weighted averages and/or other algorithms can be used to process vector likelihood data into a filtered set of states and/or state likelihoods. The filtered states are then provided to a confidence test block 210, which removes low-confidence decisions, as a result of which the device state is output as the most likely state identified by the filtering block 208. In the event that a decision is removed for low confidence, the classifier may output a default state, an "unknown" state, and/or any other suitable state(s). After verifying the confidence of each state at block 210, embodiments may output the state the user may be in. Examples may include walk, run, sit, stand, fiddle, rest, or driving motion states.
- In addition to, or in place of, the classifier shown in FIG. 2A, a classifier as illustrated in FIG. 2B can be utilized, in which a Hidden Markov Model (HMM) algorithm is utilized to process computed state likelihoods. Embodiments according to FIG. 2B may utilize a feature extraction module 252, statistical models module 256, and compute likelihoods module 254 that may function the same as or similar to modules 202, 206, and 204 of FIG. 2A, respectively. In some embodiments, these modules may function differently than described in FIG. 2A in that data is extracted relevant to an HMM algorithm and/or module. For example, the likelihoods computed at block 254 may be emission probabilities used in an HMM algorithm and/or module. In other embodiments, a support vector machine (SVM) may be used in block 254 to compute likelihoods. An SVM classifier may output hard decisions (e.g., either walk or not walk, drive or not drive, etc.), but may be modified to output soft decisions (e.g., likelihood values). Other variations may include the feature extraction module 252 extracting most or all types of data received from the accelerometer, such that the feature extraction module 252 performs minimal filtering operations while passing along a maximal amount of data for later modules to consider. Other types of models, state machines, or classifiers may be used. For example, an HMM, GMM, etc. may be replaced with other techniques/modules, such as a Poisson hidden Markov model, a hidden Bernoulli model, and other stochastic models readily apparent to those with ordinary skill in the art.
- The outputs of block 254 may be used in HMM Algorithm block 258. HMM algorithms that can be utilized include, but are not limited to, a Viterbi algorithm (e.g., maximum likelihood (ML) sequence estimation), forward-backward chaining (e.g., ML state estimation), etc. In one implementation, in the event that discrete states are passed as input to the HMM algorithm or module, the HMM algorithm or module can estimate system noise and process the state inputs based on the estimated noise. At Test Confidence block 260, a check similar to that described for block 210 may be utilized to remove states that fail the confidence check. Differences may involve evaluating conclusions based on an HMM algorithm or module as opposed to the other kinds of statistical models illustrated in FIG. 2A. After verifying which states have a high degree of confidence, the remaining outputs may be passed along as an HMM decision.
- Referring to FIG. 3, yet another example classifier according to some embodiments is illustrated. Extract feature vector block 302, compute likelihoods block 304, motion state model 306, and confidence test and output block 314 may be the same as or similar to their respective blocks in FIGS. 2A and 2B. Block 308 may consider the computed likelihoods 304 and the previous probability distribution over motion states at block 312, and compute a new probability distribution over the motion states using at least one restricted transition model 310. Examples of a restricted transition model in block 310 may include what is described in FIG. 4, explained further below. In this example, embodiments may iteratively update a set of probabilities as new data is gathered from an accelerometer.
- The classification techniques performed by the classifiers shown in FIGS. 2A, 2B, and 3 above, as well as other classification techniques described herein, may be performed in real time, or alternatively the data may be saved for later execution. Further, the classifier can set the latency factor L to balance accuracy against responsiveness. For instance, large values of L will increase accuracy while smaller values of L will increase classification speed and responsiveness. For example, setting L to a processor-equivalent of infinity may yield the highest accuracy in some embodiments. If the data is saved for later execution, L may be adjusted to achieve the desired accuracy. Thus, data may be processed in real time, saved for later analysis, or analyzed using a combination of concurrent and post-processing.
- As noted above, to detect that a user is riding in a vehicle, the following observations can be used. First, before entering the state of vehicular movement, it may be observed that the user is first in the walk state (e.g., the user walks to the car, bus, etc., and enters it). Likewise, after exiting the state of vehicular movement, the user re-enters the walk state (e.g., after stepping out of the car, bus, etc., the user again walks). Second, when the user is in the walk state, the accelerometer signals appear significantly different from any signals seen in the vehicular movement state.
- Thus, a state machine that captures the above observations may be utilized, and an HMM may be built around it to improve the accuracy of detecting vehicular movement. The HMM resides above a lower-level classifier which, at each time instant t = 1, 2, ..., outputs a likelihood value p(x(t)|ωi) for lower-level motion states ωi. For example, ω1 = walk, ω2 = run, ω3 = sit, ω4 = stand, ω5 = fiddle, ω6 = rest, ω7 = autoStop, ω8 = autoMove. Each lower-level motion state has a model p(·|ωi) associated with it, which may be generated from training data. When the HMM detects that the state is either autoStop or autoMove, it concludes that the user is in the higher-level state of auto (vehicular movement).
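One way to realize this layering is a forward (filtering) recursion that combines the lower-level likelihoods p(x(t)|ωi) with a restricted transition matrix. The eight state names follow the text; the transition probabilities (self-transition 0.9, remainder spread uniformly over feasible transitions) and the emission values in the example are assumed for illustration, not the patented parameters:

```python
import numpy as np

# Sketch of one forward update of the HMM layered above the lower-level
# classifier. Transition parameterization is an assumption.
STATES = ["walk", "run", "sit", "stand", "fiddle", "rest",
          "autoStop", "autoMove"]
IDX = {s: i for i, s in enumerate(STATES)}

# Feasible transitions: any-to-any among the six pedestrian states,
# walk <-> autoStop to enter/exit the vehicle, autoMove <-> autoStop
# inside it (an assumed reading of the restricted model).
FEASIBLE = {s: set(STATES[:6]) for s in STATES[:6]}
FEASIBLE["walk"].add("autoStop")
FEASIBLE["autoMove"] = {"autoMove", "autoStop"}
FEASIBLE["autoStop"] = {"autoStop", "autoMove", "walk"}

A = np.zeros((8, 8))
for s, nexts in FEASIBLE.items():
    i = IDX[s]
    A[i, i] = 0.9  # assumed constant self-transition probability
    others = [IDX[n] for n in nexts if n != s]
    for j in others:
        A[i, j] = 0.1 / len(others)  # uniform over feasible transitions

def forward_step(prior, emission_likelihoods):
    """One HMM forward update: predict with A, weight by p(x(t)|state)."""
    posterior = (prior @ A) * emission_likelihoods
    return posterior / posterior.sum()

# Example: device was almost surely in autoMove; the new accelerometer
# frame looks stationary (high likelihood for sit, rest, autoStop).
prior = np.zeros(8); prior[IDX["autoMove"]] = 1.0
emis = np.array([0.01, 0.01, 0.3, 0.05, 0.02, 0.3, 0.3, 0.01])
post = forward_step(prior, emis)
print(STATES[int(post.argmax())])  # autoStop
```

Note how the zero transition probability from autoMove to sit forces the stationary-looking frame to be read as autoStop, which is exactly the disambiguation the state machine is meant to provide.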
- The model for the autoStop state can be created from training accelerometer data collected when a user is sitting in a car, bus, etc., that is either parked or otherwise stationary. Likewise, the model for autoMove can be trained from accelerometer data collected when a user is moving in a car, bus, etc. Alternatively, the model for autoStop may be adapted from the model for sit.
- Training data for the autoStop and autoMove states can be collected, for example, by recording both accelerometer signals and regular GPS fixes (e.g., one per second) for a user riding in a vehicle, and then using the GPS fixes to determine, for each time instant, whether the ground truth is autoStop or autoMove. As another example, a user of the mobile device may manually select a state to associate with recorded accelerometer signals. An example training method may include calibrating a database of accelerometer signals associated with certain types of states, e.g., autoStop, autoMove, walk, run, etc., and then providing the database to the mobile devices that will use it. Another example may include a user manually training embodiments by specifying which state the user is actually in, while embodiments note the types of accelerometer data being received during those states. Other examples may include some combination of these two example methods, including manual calibrations being made to a pre-calibrated database. Other types of training methods are certainly possible, and embodiments are not so limited.
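The GPS-assisted labeling described above can be sketched as a per-second speed threshold. The 0.5 m/s cutoff is an illustrative assumption; any value separating a stationary vehicle from a moving one could be used:

```python
# Sketch of GPS-based ground-truth labeling: one GPS fix per second
# supplies a speed, and each accelerometer frame is labeled autoStop or
# autoMove by thresholding that speed. Threshold is an assumption.
STOP_SPEED_MPS = 0.5

def label_frames(gps_speeds_mps):
    """Map per-second GPS speeds to autoStop/autoMove training labels."""
    return ["autoMove" if v > STOP_SPEED_MPS else "autoStop"
            for v in gps_speeds_mps]

# Vehicle pulls away from a light, then stops again.
print(label_frames([0.0, 0.2, 3.1, 12.4, 0.1]))
# ['autoStop', 'autoStop', 'autoMove', 'autoMove', 'autoStop']
```

The resulting labels pair each one-second accelerometer window with a ground-truth state for fitting the per-state models p(·|ωi).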
- With respect to the model utilized for motion state classification as described herein, the model may be determined in various manners. For instance, the model may be determined at the mobile device, e.g., by way of training or observation. Additionally or alternatively, the model may be determined externally and loaded onto the mobile device. For instance, the model may be provided to the device by way of a processor-readable storage medium, received over a network (e.g., by downloading an associated application), created in advance and pre-installed at the device, etc.
- An example state machine according to some embodiments is illustrated in
FIG. 4. The example state machine in FIG. 4 may be considered a restrictive transition model, in that the number of states is finite, all of the states are known, and the transitions between states are clearly defined. In this example, eight states are shown: six pedestrian (non-auto) states (run state 402, stand state 404, rest state 406, fiddle state 408, sit state 410, and walk state 412), and two states 418 in an automobile (autoStop state 414 and autoMove state 416). Among the non-auto states, any transitions are allowed, including back to the same state, but among the auto states 418, the only allowed transitions are autoMove-autoStop and autoStop-walk. Thus, in the example shown in FIG. 4, the walk state always precedes entry into the higher level state of auto, and always immediately follows exit from the higher level state of auto. - The HMM of some embodiments may facilitate the disambiguation of the sit and autoStop states by remembering the previous state. Thus, if the user is driving and comes to a stop at a traffic light, the HMM will report autoStop rather than sit, because the previous state was autoMove and it is not possible to transition from autoMove to sit. Similarly, when the user exits the vehicle, the HMM will report the sequence autoStop → walk, which can then be followed by sit.
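The disambiguation behavior described above can be illustrated with a toy forward-filtering sketch (the four-state model, its probabilities, and the likelihood values below are illustrative assumptions, not taken from the patent):

```python
import numpy as np

# Toy 4-state version of the FIG. 4 restrictions (illustrative values):
# walk may enter autoStop, autoStop may enter autoMove or walk, and
# sit is unreachable from the auto states.
STATES = ["walk", "sit", "autoStop", "autoMove"]
P = np.array([
    [0.80, 0.10, 0.10, 0.00],   # walk
    [0.10, 0.90, 0.00, 0.00],   # sit
    [0.05, 0.00, 0.90, 0.05],   # autoStop
    [0.00, 0.00, 0.02, 0.98],   # autoMove
])

def filter_step(alpha, lik):
    """One HMM forward-filtering step: predict through P, weight by the
    per-state likelihoods, and renormalize."""
    alpha = (P.T @ alpha) * lik
    return alpha / alpha.sum()

alpha = np.full(4, 0.25)        # uniform initial belief
decisions = []
for lik in [
    [0.85, 0.05, 0.05, 0.05],   # walking
    [0.05, 0.45, 0.45, 0.05],   # stationary (sit vs. autoStop ambiguous)
    [0.02, 0.02, 0.02, 0.94],   # driving
    [0.05, 0.45, 0.45, 0.05],   # stationary again (stoplight)
]:
    alpha = filter_step(alpha, np.array(lik))
    decisions.append(STATES[int(np.argmax(alpha))])

# At the stoplight the accelerometer evidence for sit and autoStop is
# identical, but the remembered autoMove state forces autoStop.
assert decisions[0] == "walk"
assert decisions[2] == "autoMove" and decisions[3] == "autoStop"
```

Even though the final observation is equally consistent with sit and autoStop, the zero autoMove-to-sit entry in the transition matrix makes autoStop the only plausible explanation.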
- The transition probability matrix for the state machine in
FIG. 4 can be represented in various manners. For instance, the self-transition probability for each state can be represented as a constant, with the probability of transitioning to a different state being uniformly spread among the feasible transitions. As another example, the transition probability matrix can be based on training data by, e.g., storing examples of each state (e.g., of length 1 second, etc.) in a local or remote database and comparing the accelerometer signals to the stored database samples to determine the probable states. Other transition probability matrices with different parameterizations, conforming to FIG. 4, are also possible. - An example of a transition matrix that may be employed for the state machine in
FIG. 4 according to some embodiments is illustrated in Table 1 below.

Table 1: Example transition matrix for a motion state classifier.

P(an|an-1)   walk    run     sit     stand   fiddle  rest    autoMove  autoStop
walk         0.9     0.0167  0.0167  0.0167  0.0167  0.0167  0         0.0167
run          0.02    0.9     0.02    0.02    0.02    0.02    0         0
sit          0.02    0.02    0.9     0.02    0.02    0.02    0         0
stand        0.02    0.02    0.02    0.9     0.02    0.02    0         0
fiddle       0.02    0.02    0.02    0.02    0.9     0.02    0         0
rest         0.02    0.02    0.02    0.02    0.02    0.9     0         0
autoMove     0       0       0       0       0       0       0.98      0.02
autoStop     0.05    0       0       0       0       0       0.05      0.9

- As shown in Table 1 above, self-transition probabilities are higher, since typically an activity persists for an extended period of time (e.g., longer than 1 second). Further, all transition probabilities between the auto states and the pedestrian states are 0 except for transitions between walk and autoStop, as the only way to enter the autoMove state is by transitioning from walk to autoStop, and the only way to exit the autoMove state is by transitioning from autoStop to walk. Table 1 also illustrates that self-transition probabilities are set to relatively high values to discourage rapid state oscillations, which may lead to classifier errors. The specific probabilities used can be based on, e.g., the anticipated time within a given state. For instance, the autoMove state is given the highest self-transition probability in Table 1 since it is anticipated that the user will remain in the autoMove state for long periods of time.
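For implementation, Table 1 can be encoded directly as a row-stochastic matrix; a minimal sketch (numpy assumed, values copied from the table):

```python
import numpy as np

STATES = ["walk", "run", "sit", "stand", "fiddle", "rest", "autoMove", "autoStop"]

# P[i, j] = P(a_n = STATES[j] | a_{n-1} = STATES[i]), values from Table 1.
P = np.array([
    [0.9,  0.0167, 0.0167, 0.0167, 0.0167, 0.0167, 0.0,  0.0167],  # walk
    [0.02, 0.9,    0.02,   0.02,   0.02,   0.02,   0.0,  0.0],     # run
    [0.02, 0.02,   0.9,    0.02,   0.02,   0.02,   0.0,  0.0],     # sit
    [0.02, 0.02,   0.02,   0.9,    0.02,   0.02,   0.0,  0.0],     # stand
    [0.02, 0.02,   0.02,   0.02,   0.9,    0.02,   0.0,  0.0],     # fiddle
    [0.02, 0.02,   0.02,   0.02,   0.02,   0.9,    0.0,  0.0],     # rest
    [0.0,  0.0,    0.0,    0.0,    0.0,    0.0,    0.98, 0.02],    # autoMove
    [0.05, 0.0,    0.0,    0.0,    0.0,    0.0,    0.05, 0.9],     # autoStop
])

# Each row should sum to (approximately) 1, and the only nonzero
# pedestrian-to-auto entry is walk -> autoStop.
assert np.allclose(P.sum(axis=1), 1.0, atol=2e-3)
assert P[STATES.index("run"), STATES.index("autoStop")] == 0.0
```

The row-sum check is a useful sanity test whenever the matrix is re-parameterized, e.g. after learning transition counts from training data.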
- Variants of the state machine in
FIG. 4 can also be used. For instance, FIG. 5 illustrates a variant state machine which differentiates between accelerating/braking and cruising, in autoAccel/Brake state 516 and autoCruise state 518, respectively. Other states as shown may be similar to those in FIG. 4, including run state 502, stand state 504, rest state 506, fiddle state 508, sit state 510, walk state 512, and autoStop state 514. As shown in FIG. 5, the restriction between the auto states 520 and the other states can be enforced similarly to that described above for FIG. 4. Certainly, not all states as shown need be utilized or present in all embodiments, and the states shown are merely examples. Additionally, there could be more states or sub-states not shown, and embodiments are not so limited. For example, embodiments may include just a single state to represent that a user is acting as a pedestrian, and then one or more vehicular states. In another example, there may be a restriction/gateway between pedestrian motion and vehicular motion. - The performance of an HMM-based driving detector according to some embodiments can be further improved through the use of an extended state machine. The extended state machine breaks one or more existing states into a set of consecutive sub-states which the system must pass through before exiting that state. This adds stickiness to the system by reshaping the distribution of the state duration (the amount of time spent in a state before transitioning). For instance, even in the presence of heavy biasing against state transitions, the state of a mobile device may still oscillate between different states. Thus, the extended state machine provides a number (e.g., 2, 3, 5, etc.) of sub-states within each individual state, each with its own rules for transitioning. Examples of extended state machines are given in
FIG. 6 and FIG. 7. Here, each original state i is broken into Ni sub-states. This forces the HMM to pass through several intermediate sub-state transitions before changing state, which has the net effect of reducing or eliminating rapid fluctuations. - As shown in
FIGS. 6-7, respective columns represent a single state. For example, referring to FIG. 6, column 602 may represent the run state, column 604 may represent the walk state, and column 606 may represent the sit state. - Still referring to
FIG. 6, in this example, column 602, representing the run state, may be useful for determining whether the user is merely reducing speed while running as opposed to reducing speed to the point of walking or stopping. Similarly, the walk state column 604 and the sit state column 606 may be sub-divided into a series of sub-states, each sub-state reflecting a series of acceleration readings or values indicative of transitioning from that state to another state. - Referring to
FIG. 7, state transitions 702, 704, and 706 may include sub-state transitions as shown, and may also include the ability to transition back to the start of the sub-state transitions of the same state. Transitioning to the beginning of the sub-state transitions may be consistent with the ability of states shown in FIGS. 4 and 5 to transition back to themselves. In other embodiments, as in FIG. 6 for example, the ability to transition back to the same state may be reflected simply in the fact that each sub-state may be able to transition back to itself. However, these are merely examples. - Additionally, states may have different numbers of sub-states depending on various factors, such as the anticipated time within a given state or the like. For instance, a greater number of sub-states can be used for those states that users typically dwell in for longer durations. By way of specific, non-limiting example, 5 sub-states may be used for each of walk, run, sit, stand, fiddle, rest, and autoStop, and 12 sub-states for autoMove. Other sub-state configurations are also possible.
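Mechanically, an extended state machine can be derived from the base machine by expanding each state into a chain of sub-states whose final element carries the original outgoing transitions; an illustrative sketch (the chain-only transition rule below is one possible choice of per-sub-state rules, and the sub-state counts follow the example above):

```python
# Expand each state into N_i consecutive sub-states. Only the final
# sub-state of a chain keeps the original state's outgoing transitions;
# earlier sub-states can only stay put or advance along the chain,
# which discourages rapid state oscillations.
SUBSTATE_COUNTS = {
    "walk": 5, "run": 5, "sit": 5, "stand": 5,
    "fiddle": 5, "rest": 5, "autoStop": 5, "autoMove": 12,
}

def expand_state_machine(transitions, counts):
    """transitions: dict state -> set of next states (including self)."""
    expanded = {}
    for state, nexts in transitions.items():
        n = counts[state]
        for i in range(n):
            sub = f"{state}/{i}"
            if i < n - 1:
                # Intermediate sub-state: stay, or advance along the chain.
                expanded[sub] = {sub, f"{state}/{i + 1}"}
            else:
                # Final sub-state: the original transitions, each entering
                # the first sub-state of the destination chain.
                expanded[sub] = {f"{dest}/0" for dest in nexts}
    return expanded

ext = expand_state_machine(
    {"autoMove": {"autoMove", "autoStop"},
     "autoStop": {"autoStop", "autoMove", "walk"}},
    SUBSTATE_COUNTS,
)
# autoMove now needs 12 sub-state hops before it can reach autoStop.
assert ext["autoMove/0"] == {"autoMove/0", "autoMove/1"}
assert "autoStop/0" in ext["autoMove/11"]
```

Feeding the expanded transition sets into the HMM in place of the base states reshapes the state-duration distribution as described above, since at least Ni steps must elapse before state i can be exited.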
- Regardless of the particular state model used, at each time instant t = 1, 2, ..., the HMM takes as input a set of likelihood values p(x(t)|ωi) for the current time t. The HMM of some embodiments outputs K posterior values p(ωi|x(t-L)) corresponding to the probability of being in each state ωi at time t-L (i.e., L time steps ago), where K is the number of lower level states and L is a tunable parameter corresponding to the system latency. In order to do this, the HMM stores the L-1 previous values of p(x(t)|ωi) for each state. It can be noted that storage and other computation requirements do not grow with time.
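One way to realize this bounded-storage, fixed-latency behavior is a fixed-lag smoother that buffers the last L likelihood vectors and runs a short backward pass over them; a Python sketch under that interpretation (the class and its API are illustrative, not from the patent):

```python
import numpy as np
from collections import deque

class FixedLagHMM:
    """Fixed-lag smoother: given likelihoods p(x(t)|w_i) at each step,
    report posteriors for time t - L given all data up to t.  Storage
    is bounded by the lag L and does not grow with time."""

    def __init__(self, P, prior, lag):
        self.P = P                     # K x K transition matrix
        self.lag = lag
        self.alpha = prior.copy()      # forward message at the lag point
        self.buf = deque(maxlen=lag)   # last `lag` likelihood vectors

    def step(self, likelihood):
        self.buf.append(np.asarray(likelihood, dtype=float))
        if len(self.buf) < self.lag:
            return None                # not enough data buffered yet
        # alpha covers observations up to time t - L; a backward pass
        # over the L buffered likelihoods folds in x(t-L+1 .. t).
        beta = np.ones_like(self.alpha)
        for lik in reversed(self.buf):
            beta = self.P @ (lik * beta)
        posterior = self.alpha * beta
        posterior /= posterior.sum()
        # Advance the forward message one step using the oldest buffered
        # likelihood (which the bounded deque evicts on the next append).
        self.alpha = (self.P.T @ self.alpha) * self.buf[0]
        self.alpha /= self.alpha.sum()
        return posterior

P = np.array([[0.9, 0.1], [0.1, 0.9]])
hmm = FixedLagHMM(P, np.array([0.5, 0.5]), lag=2)
assert hmm.step([1.0, 0.01]) is None   # still filling the buffer
post = hmm.step([1.0, 0.01])           # posterior, two steps behind
assert post[0] > 0.8 and abs(post.sum() - 1.0) < 1e-9
```

As in the text, per-step cost is O(L·K²) and storage is O(L·K), independent of the total stream length.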
- With further reference to the techniques above according to embodiments, improved drive detection can be realized as follows. When a car stops at a stoplight, it is observed that the classifier as described above may output a result of sit or stand. Thus, to improve auto state detection in this instance, temporal consideration is introduced that looks at both current and past data to make a decision. This may be done using a duty cycled approach and/or a non-duty cycled approach. For the duty cycled approach, sensors are logged for only the first x seconds of every y minutes (e.g., where x = 15 sec and y = 2 min, etc.) in order to realize power savings on the device. In such an approach, a GPS sensor may also be used, which in some cases (e.g., associated with a non-duty cycled approach) would consume too much battery life.
- Alternatively, for a non-duty cycled approach, sensors are logged continuously. In this case, a state model, such as a HMM as discussed above, can be utilized based solely on accelerometer data. An example classifier for a non-duty cycled approach is illustrated by
FIG. 8. Here, accelerometer data 802 is an input to MSDP classifier 804. The classifier 804 may be the same as or similar to the classifiers described in FIGS. 2 or 3. Classifier 804 may output probabilities Pi that the user is detected to be in a current state Ot, given that the user is actually conducting activity i, expressed as P(Ot|activity = i), for each i. These probabilities may then be used as inputs to Viterbi decoder 806 and forward-backward chaining module 810, each having an example latency L. Of course, embodiments are not limited to these examples. - Various algorithms can be performed for an input data stream to obtain output parameters corresponding to classified states. For instance, a forward-backward chaining algorithm 810 can be utilized to provide a probability vector of posterior values 812 for various states at each time instance. Alternatively, a Viterbi algorithm 806 can be utilized to provide a most likely state sequence 808 given the input data from block 804. - Results provided in the form of state probabilities can be further improved through the use of confidence testing. For example, at each time t, the driving detection decision may be discarded if the posterior probabilities of the two most probable states are comparable to one another (corresponding to a high degree of uncertainty in the decision). Discarding of decisions is based on a confidence threshold, which can be based on a minimum acceptable difference between the most probable states, and/or any other suitable factor(s). Examples of confidence testing are shown in
FIGS. 2 and 3. - To improve drive detection as described above, various additional features and parameters can be introduced. For instance, spectral entropy (se) can be defined as the entropy of the distribution obtained by normalizing the FFT, e.g., se = -∑p(x)log p(x), where p(x) is the distribution obtained by normalizing the FFT magnitude spectrum.
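As a sketch, the spectral entropy of a window of accelerometer samples can be computed as follows (treating p(x) as the normalized FFT magnitude spectrum, consistent with the definition above):

```python
import numpy as np

def spectral_entropy(window):
    """se = -sum p(x) log p(x), where p is obtained by normalizing the
    FFT magnitude spectrum of the sample window into a distribution."""
    mags = np.abs(np.fft.rfft(window))
    p = mags / mags.sum()
    p = p[p > 0]                      # 0 * log 0 is taken as 0
    return float(-(p * np.log(p)).sum())

# A pure tone concentrates spectral mass in one bin (low entropy),
# while broadband noise spreads it out (high entropy).
t = np.arange(256) / 256.0
tone = np.sin(2 * np.pi * 16 * t)
noise = np.random.default_rng(0).standard_normal(256)
assert spectral_entropy(tone) < spectral_entropy(noise)
```

A periodic signature such as walking thus tends toward lower spectral entropy than the broadband vibration of a moving vehicle, which is what makes the feature useful for drive detection.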
- Further, various GPS rules can be utilized to enhance classifier decisions in the duty cycled approach. First, an instantaneous velocity rule can be implemented, where for the instantaneous velocity v, (1) if v > 0.25 m/s, the likelihood of sit and stand is made small; (2) if v > 3 m/s, the likelihood of walk is made small; and (3) if v > 8 m/s, the likelihood of run is made small. Additionally, a distance rule can be implemented wherein, for a distance d between the current position and the average position during the last sampling run, if d > 200m, the likelihood of sit and stand is made small. The particular thresholds used can be based on general observations, training, and/or data pertaining to a particular user and may differ from those used above.
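The GPS rules above can be sketched as a simple likelihood adjustment before filtering (the `small` value used for down-weighting is an illustrative assumption; the thresholds are those stated above):

```python
def apply_gps_rules(likelihoods, v, d, small=1e-6):
    """Down-weight state likelihoods inconsistent with GPS evidence,
    following the instantaneous velocity and distance rules above.
    likelihoods: dict state -> likelihood; v in m/s, d in meters."""
    out = dict(likelihoods)
    if v > 0.25:
        out["sit"] = out["stand"] = small
    if v > 3:
        out["walk"] = small
    if v > 8:
        out["run"] = small
    if d > 200:
        out["sit"] = out["stand"] = small
    return out

adjusted = apply_gps_rules(
    {"walk": 0.4, "run": 0.1, "sit": 0.2, "stand": 0.1,
     "autoMove": 0.1, "autoStop": 0.1},
    v=10.0, d=50.0)
# At 10 m/s, only the vehicular states keep their original likelihoods.
assert adjusted["walk"] < 0.4 and adjusted["autoMove"] == 0.1
```

The adjusted likelihoods would then be passed to the state filter in place of the raw classifier outputs.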
- As noted above with reference to the example state diagrams in
FIGS. 4-5, the autoMove state can only be reached through autoStop, and the autoStop state can only be reached through walk. All other activities may freely transition between each other. However, in some cases autoStop and sit may be trained in the state classifier as the same class. Thus, a penalty factor can be applied to the probability of autoStop, reducing the probability of that state before it is given to the HMM. - As a result of robust mobile device driving detection as described above, various automated applications are enabled. These include, but are not limited to, the following:
- 1) Switching of the user interface to drive mode. For example, embodiments may detect that the user has entered a driving state, e.g. autoMove, and may therefore implement preconfigured settings on the mobile device to enable voice-activated commands as default operations as opposed to touch commands.
- 2) Diversion of voice calls to mail box with appropriate message, e.g. "Can't answer phone as currently driving, please leave a message." This setting may be in addition to or included with the drive mode described in example (1), above.
- 3) Response to SMS, e.g., "Driving... can't reply now." This may be another example of how embodiments may respond to phone calls or other messages once it is determined the user has entered a driving state.
- 4) Monitoring/feedback on driving habits, notification of speeding, erratic driving, etc. For example, embodiments may additionally monitor motion state data once the user has entered a driving state, and may make determinations, based on the motion state data, as to whether the user is driving erratically, e.g. the user is drunk. In other examples, embodiments may record statistics for a driver's habits, e.g. what a typical driving speed is, how quickly or slowly the driver accelerates or decelerates, etc. Such data may help improve driving habits and/or design vehicles more suitable to such driving habits. Further, such data may be used by insurance companies to determine or adjust a rate charged for coverage.
- 5) Enabling of in-car radio service and/or other services such as navigation. For example, an in-car radio may have its volume adjusted to be louder or quieter, depending on whether the vehicle is determined to be in a moving state or a stopped state. In another example, navigation services may be turned on once it is determined the vehicle is in a moving state.
- 6) Reminder triggers (e.g., pick up listed items from the store). For example, visual or audio reminders may be posted to the user when it is determined the vehicle is starting to move from a parked state.
- 7) Social network updates (e.g., "Driver is currently driving to..."). For example, embodiments may be configured to automatically send updates to social network websites and the like, in order to share changes in statuses based on the driving state associated with the user.
- The examples above are merely illustrative, and embodiments are not so limited. Of course, other examples apparent to those with ordinary skill in the art may be included in embodiments.
- Thus, the determined state of the mobile device may be used to alter or toggle the operation of one or more applications, for example that are being executed by the mobile device, or system controls or configurations of the mobile device. The determined state may also affect the operation or state of devices or applications that are remote from the mobile device. For example, when the mobile device determines that it is in a vehicular state, a signal or notification may be transmitted, for example over a wired or wireless network, to a remote location. The signal may indicate that a user of the mobile device is driving home and cause lights or a heater to turn on in the user's house; may cause a busy status to be set at the user's place of work; or may indicate to the user's child or to an administrator at the child's school, for example by text message or other alert, that the user is en route to pick the child up. Of course, the above circumstances are only examples and are not limiting.
- Referring to
FIG. 9, an example computing device 912 comprises a processor 920, memory 922 including software 924, input/output (I/O) device(s) 926 (e.g., a display, speaker, keypad, touch screen or touchpad, etc.), and one or more orientation sensors 928, such as accelerometers. Additionally, the device 912 may include other components not illustrated in FIG. 9, such as a network interface that facilitates bidirectional communication between the device 912 and one or more network entities, and/or any other suitable components. - The
processor 920 is an intelligent hardware device, e.g., a central processing unit (CPU) such as those made by Intel® Corporation or AMD®, a microcontroller, an application specific integrated circuit (ASIC), etc. The memory 922 includes non-transitory storage media such as random access memory (RAM) and read-only memory (ROM). The memory 922 stores the software 924, which is computer-readable, computer-executable software code containing instructions that are configured to, when executed, cause the processor 920 to perform various functions described herein. Alternatively, the software 924 may not be directly executable by the processor 920 but may be configured to cause the computer, e.g., when compiled and executed, to perform the functions. - The
orientation sensors 928 are configured to collect data relating to motion, position and/or orientation of the device 912, as well as changes in such properties over time. The orientation sensors 928 can include, e.g., one or more accelerometers, gyroscopes (gyros), magnetometers, or the like. The orientation sensors 928 are configured to provide information from which the motion, position and/or orientation of a device 912 can be determined. Respective orientation sensors 928 associated with a device 912 can be employed to measure a single axis or multiple axes. For multi-axis measurement, multiple single-axis accelerometers and/or multi-axis (e.g., two-axis or three-axis) accelerometers can be employed to measure motion with respect to linear axes (e.g., x-y-z, north-east-down, etc.), and multiple single-axis gyroscopes and/or multi-axis gyroscopes can be employed to measure motion with respect to angular axes (e.g., roll, pitch or yaw). - The
orientation sensors 928 can provide information over time, e.g., periodically, such that present and past orientations, positions and/or motion directions can be compared to determine changes in the motion direction, position and/or orientation of the device 912. A gyroscope can provide information as to motion of the device 912 affecting the orientation. An accelerometer is configured to provide information as to gravitational acceleration such that the direction of gravity relative to the device 912 can be determined. A magnetometer is configured to provide an indication of the direction, in three dimensions, of magnetic north relative to the device 912, e.g., with respect to true north or magnetic north. Conversion mechanisms based on magnetic declination and/or other suitable means can be utilized to convert a direction with respect to true north to a direction with respect to magnetic north, and vice versa. - Various elements of the classifier systems illustrated and described herein can be performed by a computing device such as
device 912 in FIG. 9. For instance, with reference to FIG. 2A, the feature extraction block 202, likelihood computation block 204, filtering block 208, and confidence testing block 210 can be implemented by a processor 920 executing instructions stored as software 924 on a memory 922. Further, the accelerometer data and/or statistical models 206 used as shown in FIG. 2A can also be stored on the memory 922. Further, with reference to FIG. 2B, the feature extraction block 252, likelihood computation block 254, HMM algorithm block 258, and confidence testing block 260 can be implemented by a processor 920 in a similar manner to the blocks shown by FIG. 2A. Further, the accelerometer data and statistical models 256 can be stored on the memory 922 in a similar manner to that described with respect to FIG. 2A. A similar construction may be used to implement the descriptions of FIG. 3. With reference to FIG. 8, the MSDP classifier 804 can be implemented by a processor 920 executing instructions stored on a memory 922 in a similar manner to various elements shown by FIGS. 2A, 2B and/or 3. Similarly, the Viterbi decoder 806 and forward-backward chaining blocks 810 shown in FIG. 8 can also be implemented via a processor 920. - The classifier implementations described in the preceding paragraph are provided as examples and are not intended to limit the subject matter described and claimed herein. For instance, one or more of the functional elements illustrated in
FIGS. 1, 2A, 2B, 3, and/or 9 may be implemented in hardware (e.g., using standalone hardware elements, etc.), software, or a combination of hardware and/or software in any suitable manner. For example, a hardware implementation according to some embodiments may use state restrictions to determine what the motion state is, and how states may transition from one state to the next. Data may be stored in non-volatile memory, the data representing a probability distribution of the states. Over time, the data may be updated to reflect current and previous states. For example, embodiments may store a previous state in hardware, then update one or more probability distributions using a probability distribution model as described in these disclosures. This way, only the current state and the previous state need be recorded. In some embodiments, the memory 922 may store a motion state machine 16 as described in FIG. 1. I/O devices 926 may receive data from motion detectors 12 and optionally additional device sensors 14. In other embodiments, orientation sensors 928 may correspond to the motion detectors 12 and device sensors 14. Processor 920 may include motion state classifier module 18 and may process data received at motion detector 12 and additional sensors 14 in order to determine current and transitioning states as defined by state machine 16. Other hardware or software techniques used to implement the disclosures herein may be readily apparent to persons having ordinary skill in the art, and embodiments are not so limited. - Embodiments may be implemented at varying levels of computer architecture in hardware/software/firmware, etc. For example, embodiments may be implemented as a software application, which may be configured to access multiple motion sensor peripherals. In another example, embodiments may be implemented as a hardware implementation, such as with a series of hardware states in a state machine.
An application program interface (API) layer may then access the hardware states. As another example, some embodiments may be implemented as part of a high level operating system (HLOS), or may be accessible to the HLOS, for example through an API. Other implementations are possible, and embodiments are not so limited.
- Referring to
FIG. 10, with further reference to FIGS. 1-9, a process 1000 of classifying the motion state of a mobile device includes the stages shown. The process 1000 is, however, an example only and not limiting. The process 1000 can be altered, e.g., by having stages added, removed, rearranged, combined, and/or performed concurrently. Still other alterations to the process 1000 as shown and described are possible. - At
stage 1002, one or more pedestrian motion states and one or more vehicular motion states are identified. The one or more pedestrian motion states include a walk state and the one or more vehicular motion states include an automobile stop state and at least one automobile move state. These states can be defined by, e.g., a processor 920 executing instructions stored on a memory 922, and/or by other means. Further, the motion states can be associated with, e.g., a motion state machine 16 stored on a memory 922, and/or by other means. - At
stage 1004, acceleration data are obtained from one or more accelerometers 12. - At
stage 1006, likelihoods are computed for the one or more pedestrian motion states and the one or more vehicular motion states for respective time intervals based on the acceleration data. Likelihood computation can be performed by various elements of a motion state classifier module 18, which can be implemented using, e.g., a processor 920 executing instructions stored on a memory 922, and/or other means. In particular, likelihood computation can be performed by a likelihood computation block as shown in FIGS. 2A, 2B, and/or 3, and/or by any other suitable mechanisms. - At
stage 1008, the computed likelihoods are filtered to obtain present motion states for the respective time intervals. The filtering may be based on a probabilistic model (e.g., a HMM) configured to restrict transitions from the one or more pedestrian motion states to the vehicular motion states to transitions from the walk state to the automobile stop state, and to restrict transitions from the vehicular motion states to the one or more pedestrian motion states to transitions from the automobile stop state to the walk state. The filtering performed at stage 1008 can be performed by various elements of a motion state classifier module 18, which can be implemented using, e.g., a processor 920 executing instructions stored on a memory 922, and/or other means. In particular, a filtering block as shown in FIG. 2A, a HMM algorithm block as shown in FIG. 2B, and/or any other suitable mechanisms may be leveraged to perform the filtering. Those of skill in the art will appreciate that the present motion states for the respective time intervals are not limited to a present state at the time the likelihoods are calculated; rather, a present motion state may refer to the motion state that was present during the respective time interval or at least a portion of that interval. In some embodiments, a motion state for a respective time interval may be referred to as a respective motion state. In certain embodiments, present motion state and respective motion state can be used interchangeably. - Some embodiments may be drawn to a mobile device with means for obtaining motion data from one or more motion-detecting devices. Example means for obtaining the motion data may be one or more accelerometers,
motion detectors 12, additional device sensors 14, or orientation sensors 928. Embodiments may also include means for filtering the motion data to obtain present motion states for respective time intervals based on the motion data. Each of the present motion states for respective time intervals may correspond to one or more pedestrian motion states or one or more vehicular motion states. The one or more pedestrian motion states may comprise a walk state, and the one or more vehicular motion states may comprise a vehicular stop state. Example means for filtering may include filter probabilities module 208, feature extraction module 202, or motion state classifier module 18 via processor 920 using memory 922 and software 924. In some embodiments, in the means for filtering, transitions from the one or more pedestrian motion states to the one or more vehicular motion states are restricted to transitions from the walk state to the vehicular stop state and transitions from at least one of the one or more vehicular motion states to at least one of the one or more pedestrian motion states are restricted to transitions from the vehicular stop state to the walk state. - In some embodiments, a mobile device may include means for computing likelihoods for the one or more pedestrian motion states and the one or more vehicular motion states for the respective time intervals. Example means for computing the likelihoods may include the
processor 920 using memory 922 and software 924, MSDP classifier 804, motion state classifier module 18, module 204, module 254, or module 304. - In some embodiments, a mobile device may include means for obtaining sensor data from at least one of a Wi-Fi receiver, an audio input device or a GPS receiver, and means for computing the likelihoods for the one or more pedestrian motion states and the one or more vehicular motion states for respective time intervals based on the motion data and the sensor data. Example means for obtaining sensor data may include motion
state classifier module 18, feature extraction module 202, module 252, module 302, or processor 920. Example means for computing the likelihoods based on the motion data and the sensor data may include the processor 920 using memory 922 and software 924, MSDP classifier 804, motion state classifier module 18, module 204, module 254, or module 304. - One or more of the components, steps, features and/or functions illustrated in
FIGS. 1, 2A, 2B, 3, 4, 5, 6, 7, 8, 9 and/or 10 may be rearranged and/or combined into a single component, step, feature or function or embodied in several components, steps, or functions. Additional elements, components, steps, and/or functions may also be added without departing from the invention. The apparatus, devices, and/or components illustrated in FIGS. 1, 2A, 2B, 3, 4, 5, 6, 7, 8 and/or 9 may be configured to perform one or more of the methods, features, or steps described in FIG. 10. The novel algorithms described herein may also be efficiently implemented in software (e.g., implemented by a processor executing processor-readable instructions tangibly embodied on a non-transitory computer storage medium) and/or embedded in hardware. - Also, it is noted that at least some implementations have been described as a process that is depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination corresponds to a return of the function to the calling function or the main function.
- Moreover, embodiments may be implemented by hardware, software, firmware, middleware, microcode, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine-readable medium such as a storage medium or other storage(s). A processor may perform the necessary tasks. A code segment may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc.
- The terms "machine-readable medium," "computer-readable medium," and/or "processor-readable medium" may include, but are not limited to portable or fixed storage devices, optical storage devices, and various other non-transitory mediums capable of storing, containing or carrying instruction(s) and/or data. Thus, the various methods described herein may be partially or fully implemented by instructions and/or data that may be stored in a "machine-readable medium," "computer-readable medium," and/or "processor-readable medium" and executed by one or more processors, machines and/or devices.
- The methods or algorithms described in connection with the examples disclosed herein may be embodied directly in hardware, in a software module executable by a processor, or in a combination of both, in the form of processing unit, programming instructions, or other directions, and may be contained in a single device or distributed across multiple devices. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. A storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.
- Those of skill in the art would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system.
- The various features of the invention described herein can be implemented in different systems without departing from the invention. It should be noted that the foregoing embodiments are merely examples and are not to be construed as limiting the invention. The description of the embodiments is intended to be illustrative, and not to limit the scope of the claims. As such, the present teachings can be readily applied to other types of apparatuses and many alternatives, modifications, and variations will be apparent to those skilled in the art.
Claims (15)
- A method (1000) comprising: obtaining (1004) motion data from one or more motion-detecting devices (12); and filtering (1008) the motion data to obtain present motion states for respective time intervals based on the motion data, the present motion states comprising one or more pedestrian motion states and one or more vehicular motion states, the one or more pedestrian motion states comprising a walk state, and the one or more vehicular motion states comprising a vehicular stop state; wherein, during the filtering (1008), transitions from the one or more pedestrian motion states to the one or more vehicular motion states are restricted to transitions from the walk state to the vehicular stop state and transitions from the one or more vehicular motion states to the one or more pedestrian motion states are restricted to transitions from the vehicular stop state to the walk state; wherein likelihoods are computed (1006) for the one or more pedestrian motion states and the one or more vehicular motion states for the respective time intervals according to an extended state machine that comprises one or more sub-states for each of the one or more pedestrian motion states and the one or more vehicular motion states, wherein each sub-state has an individual set of rules for transitioning and wherein a transition from one state into another only occurs within a final sub-state of a given state; determining a present motion state of the mobile device for the respective time intervals by filtering the computed likelihoods.
- The method (1000) of claim 1, wherein the likelihoods are computed at least in part by advancing a device state through a plurality of sub-states of a motion state identified for the device state prior to transitioning the device state to a different motion state.
- The method (1000) of claim 2 wherein a number of sub-states to be associated with each of the one or more pedestrian motion states and vehicular motion states is based at least in part on anticipated time spent in the respective motion states.
- The method (1000) of claim 1 wherein the filtering (1008) comprises filtering the computed likelihoods using at least one of a forward-backward algorithm or a Viterbi algorithm.
- The method (1000) of claim 1 wherein computing (1006) the likelihoods comprises: obtaining sensor data from at least one of a Wi-Fi receiver, an audio input device or a GPS receiver; and computing (1006) the likelihoods for the one or more pedestrian motion states and the one or more vehicular motion states for respective time intervals based on the motion data and the sensor data.
- The method (1000) of claim 1 wherein the motion data is accelerometer data, and the one or more motion-detecting devices (12) are one or more accelerometers.
- The method (1000) of claim 6, wherein data from any inertial sensors other than the one or more accelerometers is omitted from the motion data.
- The method of claim 6, wherein the filtering (1008) is based on a probabilistic model including the one or more pedestrian motion states and the one or more vehicular motion states.
- The method (1000) of claim 1 wherein the filtering (1008) comprises: identifying a first transition from the walk state to an interim state and a second transition from the interim state to one of at least one vehicular move states; and classifying the interim state as the vehicular stop state.
- The method (1000) of claim 1 wherein the filtering (1008) comprises: buffering the motion data for a buffer time interval; and obtaining the motion states for the respective time intervals based at least in part on the buffered motion data.
- The method (1000) of claim 1 wherein the motion states comprise a sequence of most likely motion states for the respective time intervals.
- The method of claim 1, wherein the motion states comprise estimated probabilities for respective motion states at the respective time intervals.
- The method (1000) of claim 12 further comprising calculating a confidence score associated with a motion state at a given time interval by comparing two or more highest motion state probabilities at the given time interval, and wherein: calculating the confidence score further comprises identifying a difference between a highest motion state probability and a second highest motion state probability at the given time interval; and the method (1000) further comprises: comparing the difference to a confidence threshold; and if the difference is less than the confidence threshold, substituting the motion states for the given time interval with at least one of a default state or an unknown state.
- A mobile device comprising: means (18) for obtaining motion data from one or more means for detecting motion; and means (18) for filtering the motion data to obtain present motion states for respective time intervals based on the motion data, the present motion states comprising one or more pedestrian motion states and one or more vehicular motion states, the one or more pedestrian motion states comprising a walk state, and the one or more vehicular motion states comprising a vehicular stop state; wherein, within the means (18) for filtering, transitions from the one or more pedestrian motion states to the one or more vehicular motion states are restricted to transitions from the walk state to the vehicular stop state and transitions from the one or more vehicular motion states to the one or more pedestrian motion states are restricted to transitions from the vehicular stop state to the walk state; means (16) for computing likelihoods for the one or more pedestrian motion states and the one or more vehicular motion states for the respective time intervals according to an extended state machine that comprises one or more sub-states for each of the one or more pedestrian motion states and the one or more vehicular motion states, wherein each sub-state has an individual set of rules for transitioning and wherein a transition from one state into another only occurs within a final sub-state of a given state; means (18) for determining a present motion state of the mobile device for the respective time intervals by filtering the computed likelihoods.
- A non-transitory processor-readable medium comprising processor-readable instructions which when executed on a processor cause said processor to carry out the method steps of each of claims 1 to 13.
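The transition restriction of claim 1 (pedestrian motion states reach vehicular motion states only via walk → vehicular stop, and vice versa via vehicular stop → walk) and the Viterbi filtering of claim 4, together with the confidence rule of claim 13, can be illustrated with a small sketch. This is not the patented implementation: the state names, the transition probabilities, the observation scores, and the confidence threshold below are all illustrative assumptions, and the sub-state dwell-time machinery of claims 1-3 is omitted for brevity.

```python
import math

# Hypothetical state set: two pedestrian states and two vehicular states.
PED = ["stand", "walk"]
VEH = ["veh_stop", "veh_move"]
STATES = PED + VEH

def allowed(src, dst):
    """Transition rule of claim 1: a pedestrian state may enter the vehicular
    group only via walk -> veh_stop, and the vehicular group may return to
    the pedestrian group only via veh_stop -> walk."""
    if src in PED and dst in VEH:
        return (src, dst) == ("walk", "veh_stop")
    if src in VEH and dst in PED:
        return (src, dst) == ("veh_stop", "walk")
    return True  # transitions within a group (including self-loops) are free

def viterbi(loglik, p_stay=0.9):
    """Most likely motion-state sequence given per-interval log-likelihoods.
    loglik: list of dicts mapping state -> log P(observation | state).
    Disallowed transitions get probability zero (log -inf); the remaining
    mass (1 - p_stay) is split evenly over the allowed destinations."""
    n_out = {s: sum(allowed(s, d) for d in STATES) - 1 for s in STATES}
    def log_a(s, d):
        if not allowed(s, d):
            return -math.inf
        return math.log(p_stay) if s == d else math.log((1 - p_stay) / n_out[s])
    V = [{s: loglik[0][s] for s in STATES}]   # running best log-scores
    back = []                                  # backpointers per interval
    for t in range(1, len(loglik)):
        V.append({}); back.append({})
        for d in STATES:
            best = max(STATES, key=lambda s: V[t - 1][s] + log_a(s, d))
            back[-1][d] = best
            V[t][d] = V[t - 1][best] + log_a(best, d) + loglik[t][d]
    path = [max(STATES, key=lambda s: V[-1][s])]
    for bp in reversed(back):
        path.append(bp[path[-1]])
    return path[::-1]

def with_confidence(probs, threshold=0.2, fallback="unknown"):
    """Confidence rule of claim 13: if the gap between the two highest state
    probabilities is below a threshold, substitute an unknown/default state."""
    ranked = sorted(probs.items(), key=lambda kv: -kv[1])
    (s1, p1), (_, p2) = ranked[0], ranked[1]
    return s1 if p1 - p2 >= threshold else fallback
```

Because `log_a` returns negative infinity for forbidden pairs, a decoded path can never jump directly from walk to a vehicular move state; it is forced through the vehicular stop state, which is exactly the interim-state behavior described in claim 9.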
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201161535922P | 2011-09-16 | 2011-09-16 | |
PCT/US2012/055622 WO2013040493A1 (en) | 2011-09-16 | 2012-09-14 | Detecting that a mobile device is riding with a vehicle |
Publications (2)
Publication Number | Publication Date |
---|---|
EP2756658A1 EP2756658A1 (en) | 2014-07-23 |
EP2756658B1 true EP2756658B1 (en) | 2018-10-31 |
Family
ID=50941988
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP12766550.3A Active EP2756658B1 (en) | 2011-09-16 | 2012-09-14 | Detecting that a mobile device is riding with a vehicle |
Country Status (1)
Country | Link |
---|---|
EP (1) | EP2756658B1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114594703A (en) * | 2020-12-04 | 2022-06-07 | 苏州易信安工业技术有限公司 | Transportation equipment control method, device and system and mobile equipment |
Non-Patent Citations (1)
Title |
---|
None * |
Also Published As
Publication number | Publication date |
---|---|
EP2756658A1 (en) | 2014-07-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10539586B2 (en) | Techniques for determination of a motion state of a mobile device | |
US9268399B2 (en) | Adaptive sensor sampling for power efficient context aware inferences | |
Kumar et al. | An IoT-based vehicle accident detection and classification system using sensor fusion | |
US9582755B2 (en) | Aggregate context inferences using multiple context streams | |
US9305317B2 (en) | Systems and methods for collecting and transmitting telematics data from a mobile device | |
KR101437757B1 (en) | Method and apparatus for providing context sensing and fusion | |
US11871313B2 (en) | System and method for vehicle sensing and analysis | |
US20150213555A1 (en) | Predicting driver behavior based on user data and vehicle data | |
JP6568842B2 (en) | Improved on-the-fly detection using low complexity algorithm fusion and phone state heuristics | |
US11699306B2 (en) | Method and system for vehicle speed estimation | |
KR102163171B1 (en) | Motion detection method, motion detection apparatus, device, and medium | |
CN103460221A (en) | Systems, methods, and apparatuses for classifying user activity using combining of likelihood function values in a mobile device | |
US8750897B2 (en) | Methods and apparatuses for use in determining a motion state of a mobile device | |
US11884225B2 (en) | Methods and systems for point of impact detection | |
US20170344123A1 (en) | Recognition of Pickup and Glance Gestures on Mobile Devices | |
US20220292974A1 (en) | Method and system for vehicle crash prediction | |
US20160223682A1 (en) | Method and device for activating and deactivating geopositioning devices in moving vehicles | |
EP2756658B1 (en) | Detecting that a mobile device is riding with a vehicle | |
CN107005809B (en) | Smart phone motion classifier | |
Nawaz et al. | Mobile and Sensor Systems | |
Mascolo | Mobile and Sensor Systems |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20140326 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
DAX | Request for extension of the european patent (deleted) | ||
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
17Q | First examination report despatched |
Effective date: 20161102 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R079 Ref document number: 602012052923 Country of ref document: DE Free format text: PREVIOUS MAIN CLASS: H04M0001725000 Ipc: G01P0013000000 |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: G01P 13/00 20060101AFI20180424BHEP Ipc: H04W 4/02 20090101ALI20180424BHEP Ipc: H04M 1/725 20060101ALI20180424BHEP |
|
INTG | Intention to grant announced |
Effective date: 20180524 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP Ref country code: GB Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: REF Ref document number: 1060028 Country of ref document: AT Kind code of ref document: T Effective date: 20181115 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 602012052923 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: MP Effective date: 20181031 |
|
REG | Reference to a national code |
Ref country code: LT Ref legal event code: MG4D |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MK05 Ref document number: 1060028 Country of ref document: AT Kind code of ref document: T Effective date: 20181031 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20181031 Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20181031 Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20181031 Ref country code: NO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190131 Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190228 Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20181031 Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20181031 Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190131 Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20181031 Ref country code: HR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20181031 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190301 Ref country code: RS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20181031 Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190201 Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20181031 Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20181031 Ref country code: AL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20181031 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20181031 Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20181031 Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20181031 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 602012052923 Country of ref document: DE |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20181031 Ref country code: SM Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20181031 Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20181031 Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20181031 |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
26N | No opposition filed |
Effective date: 20190801 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20181031 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: TR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20181031 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MC Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20181031 |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20190930 Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20190914 Ref country code: IE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20190914 Ref country code: LI Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20190930 |
|
REG | Reference to a national code |
Ref country code: BE Ref legal event code: MM Effective date: 20190930 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20190930 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CY Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20181031 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: HU Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO Effective date: 20120914 Ref country code: MT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20181031 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20181031 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB Payment date: 20230810 Year of fee payment: 12 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: FR Payment date: 20230807 Year of fee payment: 12 Ref country code: DE Payment date: 20230808 Year of fee payment: 12 |