WO2018134646A1 - Training of classifiers for identifying activities based on motion data - Google Patents

Training of classifiers for identifying activities based on motion data

Info

Publication number
WO2018134646A1
WO2018134646A1 PCT/IB2017/050330
Authority
WO
WIPO (PCT)
Prior art keywords
exercise activity
motion data
activity
exercise
motion
Prior art date
Application number
PCT/IB2017/050330
Other languages
French (fr)
Inventor
Pratik SARAOGI
Tushar Patil
Kalpesh PATIL
Original Assignee
Oxstren Wearable Technologies Private Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oxstren Wearable Technologies Private Limited
Priority to PCT/IB2017/050330
Publication of WO2018134646A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology

Definitions

  • the present disclosure relates to a field of classifying activities performed by a user based on motion data.
  • an individual performs several activities on any given day.
  • the activities may include, but are not limited to, walking, sleeping, sitting, climbing stairs, moving body parts and so on.
  • the activities may generally be classified as exercise and non-exercise activities based on movements involved in performing any particular activity.
  • the activities performed by the individual are measured using sensors such as fitness bands or associated machines.
  • the activities measured may be used to adjust exercise routine of the individual to complement activity levels.
  • the activity level may indicate quantitative and/or qualitative measure of the activity performed by the individual for a given time.
  • the individual or user may wear fitness bands to measure the activity level while performing an activity.
  • the fitness bands measure a total activity level irrespective of whether the activity is exercise or non-exercise.
  • the user may perform one exercise activity for a particular duration and may take rest, indicating a non-exercise activity, before performing another exercise activity.
  • the fitness bands may consider the non-exercise activities along with exercise activities, which may lead to improper calculation of activity levels.
  • motion identification systems are generally used to identify activities performed by the user.
  • An existing motion identification system, e.g., EP2987452A1, published on Feb 24, 2016, discloses a method of classifying a predefined collection of exercises in a database. The exercises are selected based on the signature of the user's movement while performing an exercise.
  • the signature of the user's movement includes the time course of at least one of the following and/or a combination of at least two of the following parameters: strength, power, speed, stability, response time, flight time, contact time, stiffness, responsiveness, asymmetry, tilt, fatigue, injury, gestural efficiency (motor and/or energy).
  • the signature of movement is compared to an expected temporal course according to the exercise performed.
  • an apparatus for identifying a type of motion and condition of a user includes a motion detection sensor operative to generate an acceleration signature based on sensed acceleration of the user, and a controller.
  • the controller determines what network connections are available to the motion detection device, and matches the acceleration signature with at least one of a plurality of stored acceleration signatures.
  • the controller matches each stored acceleration signature with a type of motion of the user when processing capability is available to the motion detection device through available network connections, and identifies the type of motion of the user and a condition of the user based on the matching of the acceleration signature.
  • the existing motion identification systems employ a fixed logic to determine the type of activity performed by the user, which does not give accurate identification of the activity. This is because the existing motion identification systems are not trained properly to identify the subsequent activities performed by the user. The above situation is true when the user performs an activity that differs from the logic pre-fed into the motion identification system. As a result, the motion identification system fails to identify activities performed by the user accurately. In other words, the motion identification system fails to identify the activity based on motion data that are user-specific.
  • the motion identification system employing the fixed logic does not enable the user to train the system to identify new activities. This is because, whenever there is a new exercise performed by the user, the motion identification system tries to match the new exercise with one of the existing exercises, leading to misidentification of the new exercise.
  • An example of a method for training classifiers to identify activities based on motion data comprises capturing, by a sensor, raw motion data of a user performing an exercise activity and a non-exercise activity over a period of time.
  • the exercise activity comprises at least a first exercise activity and a second exercise activity.
  • the method further comprises processing, by a processor, the raw motion data to train classifiers to identify a motion pattern of each of the first exercise activity and the second exercise activity.
  • the motion pattern of each of the first exercise activity and the second exercise activity is identified by comparing with a master dataset or a test dataset.
  • the master dataset indicates a predetermined motion pattern corresponding to the first exercise activity and the second exercise activity.
  • the test dataset indicates a real-time motion pattern, specified by the user, corresponding to the first exercise activity and the second exercise activity.
  • the first exercise activity and the second exercise activity are identified by segregating specific motion data samples corresponding to the first exercise activity and the second exercise activity respectively from the raw motion data.
  • the specific motion samples are segregated using an unsupervised learning technique.
  • Each of the specific motion samples is further processed to label the first exercise activity and the second exercise activity corresponding to the motion pattern identified.
  • the method further comprises receiving, by the sensor, a subsequent motion data of the user performing the exercise activity.
  • the method further comprises classifying, by the processor, the subsequent motion data corresponding to a subsequent exercise activity into the first exercise activity or the second exercise activity in one of the test dataset and the master dataset.
  • the subsequent motion data is used to train the classifiers for identifying the subsequent exercise activity into one of the first exercise activity or the second exercise activity using a machine learning technique.
  • the method further comprises presenting the label corresponding to the first exercise activity or the second exercise activity for the subsequent motion data.
  • the device comprises at least one sensor communicatively coupled to the device.
  • the at least one sensor captures raw motion data of a user performing a first exercise activity and a second exercise activity over a period of time.
  • the device further comprises a memory and a processor coupled to the memory.
  • the processor executes program instructions stored in the memory, to process the raw motion data to train classifiers to identify a motion pattern of each of the first exercise activity and the second exercise activity.
  • the motion pattern of each of the first exercise activity and the second exercise activity is identified by comparing with a master dataset or a test dataset.
  • the master dataset indicates a pre-determined motion pattern corresponding to the first exercise activity and the second exercise activity.
  • the test dataset indicates a real-time motion pattern, specified by the user, corresponding to the first exercise activity and the second exercise activity.
  • the first exercise activity and the second exercise activity are identified by segregating specific motion data samples corresponding to the first exercise activity and the second exercise activity, respectively from the raw motion data.
  • the specific motion samples are segregated using an unsupervised learning technique.
  • Each of the specific motion samples is further processed to label the first exercise activity and the second exercise activity corresponding to the motion pattern identified.
  • the processor further executes the program instructions stored in the memory, to receive a subsequent motion data of the user performing the exercise activity, from the at least one sensor.
  • the processor further executes the program instructions stored in the memory to classify the subsequent motion data corresponding to a subsequent exercise activity into the first exercise activity or the second exercise activity in one of the test dataset and the master dataset.
  • the subsequent motion data is used to train the classifiers for identifying the subsequent exercise activity into one of the first exercise activity or the second exercise activity using a machine learning technique.
  • the processor further executes the program instructions stored in the memory to present the label corresponding to the first exercise activity or the second exercise activity for the subsequent motion data.
  • the system comprises at least one sensor to capture raw motion data of a user performing a first exercise activity and a second exercise activity over a period of time.
  • the system further comprises a device coupled to the at least one sensor for transmitting the raw motion data.
  • the system further comprises a server communicatively coupled to the device.
  • the server comprises a memory and a processor coupled to the memory.
  • the processor executes program instructions stored in the memory to process the raw motion data to train classifiers to identify a motion pattern of each of the first exercise activity and the second exercise activity.
  • the motion pattern of each of the first exercise activity and the second exercise activity is identified by comparing with a master dataset or a test dataset.
  • the master dataset indicates a pre-determined motion pattern corresponding to the first exercise activity and the second exercise activity.
  • the test dataset indicates a real-time motion pattern, specified by the user, corresponding to the first exercise activity and the second exercise activity.
  • the first exercise activity and the second exercise activity are identified by segregating specific motion data samples corresponding to the first exercise activity and the second exercise activity, respectively from the raw motion data.
  • the specific motion samples are segregated using an unsupervised learning technique.
  • Each of the specific motion samples is further processed to label the first exercise activity and the second exercise activity corresponding to the motion pattern identified.
  • the processor further executes the program instructions stored in the memory, to receive a subsequent motion data of the user performing the exercise activity, from the at least one sensor.
  • the processor further executes the program instructions stored in the memory to classify the subsequent motion data corresponding to a subsequent exercise activity into the first exercise activity or the second exercise activity in one of the test dataset and the master dataset.
  • the subsequent motion data is used to train the classifiers for identifying the subsequent exercise activity into one of the first exercise activity or the second exercise activity using a machine learning technique.
  • the processor further executes the program instructions stored in the memory to present the label corresponding to the first exercise activity or the second exercise activity for the subsequent motion data.
  • the exercise activity comprises at least a first exercise activity and a second exercise activity.
  • the method further comprises processing, by a processor, the raw motion data to train classifiers to identify a motion pattern of each of the first exercise activity and the second exercise activity by segregating specific motion data samples corresponding to the first exercise activity and the second exercise activity, respectively from the raw motion data.
  • the specific motion samples are segregated using an unsupervised learning technique.
  • Each of the specific motion samples is further processed to label the first exercise activity and the second exercise activity corresponding to the motion pattern identified.
  • the method further comprises receiving, by the sensor, a subsequent motion data of the user performing the exercise activity.
  • the method further comprises deriving, by the processor, the motion pattern of the subsequent motion data into the first exercise activity or the second exercise activity.
  • the motion pattern of the subsequent motion data derived is used to improve identification of the motion pattern of each of the first exercise activity and the second exercise activity.
  • the method further comprises presenting the label corresponding to the first exercise activity or the second exercise activity for the subsequent motion data.
  • FIGS. 1A and 1B illustrate an environment of a device for training of classifiers to identify activities based on motion data, in accordance with one embodiment of the present disclosure
  • FIG. 2 illustrates a process of training the device by a user for classifying an exercise activity based on the motion data, in accordance with one embodiment of the present disclosure
  • FIG. 3 illustrates a process for deriving signal attribute file for an exercise activity, in accordance with one embodiment of the present disclosure
  • FIG. 4 illustrates a process of training a classifier, by the device for classifying an exercise activity, in accordance with one embodiment of the present disclosure
  • FIGS. 5A and 5B illustrate a process for identifying an exercise activity performed by a user, in accordance with one embodiment of the present disclosure
  • FIG. 6 illustrates an environment of a device for classifying activities based on motion data, in accordance with another embodiment of the present disclosure
  • FIG. 7 illustrates a process of calculating signal attributes for training dataset, in accordance with one embodiment of the present disclosure.
  • FIG. 8 illustrates a method for training of classifiers to identify activities based on motion data, in accordance with another embodiment of the present disclosure.
  • the present disclosure relates to methods and systems of training of classifiers to identify activities based on motion data.
  • An activity indicates any movement that a user may perform to accomplish a task using at least one body part, such as a hand or a leg. For example, the user may lift a dumbbell using his arm.
  • the movement of the body part may be captured using raw motion data i.e., motion of the body part while performing the activity.
  • the movement of the arm lifting the dumbbell may be captured.
  • the raw motion data may be captured by motion sensors such as an accelerometer, a magnetometer or a gyroscope.
  • the raw motion data captured is sent to a device for classifying the activity.
  • the device segregates the raw motion data corresponding to exercise and non-exercise activities based on repetition of values in the raw motion data.
  • the device uses an unsupervised learning technique such as k-means clustering, mixture models or hierarchical clustering for selecting specific samples of motion data.
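As an illustration of this segregation step, here is a minimal sketch. The disclosure names k-means clustering as one candidate technique; the per-window variance feature, the window size and the tiny two-cluster 1-D k-means below are illustrative assumptions, not the patent's exact method.

```python
# Hedged sketch: cluster per-window variance into two groups, treating the
# higher-variance cluster as repetitive (exercise-like) motion.

def window_variance(samples, size):
    """Variance of each non-overlapping window of the signal."""
    feats = []
    for i in range(0, len(samples) - size + 1, size):
        w = samples[i:i + size]
        m = sum(w) / size
        feats.append(sum((x - m) ** 2 for x in w) / size)
    return feats

def kmeans_1d(values, iters=20):
    """Tiny 1-D k-means with k=2; label 1 = cluster with the larger centroid."""
    c0, c1 = min(values), max(values)
    for _ in range(iters):
        groups = ([], [])
        for v in values:
            groups[abs(v - c1) < abs(v - c0)].append(v)
        if groups[0]:
            c0 = sum(groups[0]) / len(groups[0])
        if groups[1]:
            c1 = sum(groups[1]) / len(groups[1])
    return [int(abs(v - c1) < abs(v - c0)) for v in values]

# Toy stream: 100 flat "rest" samples followed by a repetitive burst.
stream = [0.0] * 100 + [(-1.0) ** i for i in range(100)]
labels = kmeans_1d(window_variance(stream, 20))  # 0 = non-exercise, 1 = exercise
```

With this toy stream the five rest windows fall in the low-variance cluster and the five burst windows in the high-variance cluster.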
  • muscle groups within the exercise activity are identified.
  • the motion data is further fed to a group of classifiers corresponding to the muscle group.
  • the exercise activity is identified/predicted using a voting-based prediction. Further, the device determines whether the exercise activity is present in pre-created databases such as a test dataset or a master dataset.
  • the test dataset comprises user-specific motion data for the exercise activity and associated labels.
  • the master dataset comprises standard motion data and associated labels, in addition to user-specific motion data. If the exercise activity is present in one of the pre-created databases, then a label corresponding to the classification of the exercise activity is presented to the user.
  • the classifier for the exercise activity is further trained using the motion data. The training is typically done using a supervised learning technique.
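The voting-based prediction mentioned earlier can be sketched as a plain majority vote over a muscle group's classifiers; the classifier outputs and labels below are illustrative assumptions, not the patent's exact ensemble.

```python
from collections import Counter

def vote(predictions):
    """Majority vote: the label emitted by the most classifiers wins."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical outputs of three classifiers in the 'BICEPS' muscle group.
predicted = vote(["BICEP CURL", "HAMMER CURL", "BICEP CURL"])
```

Since two of the three classifiers agree, the label 'BICEP CURL' would be presented to the user.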
  • 'activity' may refer to an exercise activity or a non-exercise activity.
  • the activity may comprise walking, jogging, push-ups, lifting weights, lying down, eating, sleeping, sitting and so on.
  • the device 105 may be one of an electronic device, a mobile phone, a laptop, a smart watch, fitness equipment, a display device and a wearable garment.
  • the device 105 may include at least one processor 106, a memory 107 and an Input and Output (I/O) Interface 108.
  • the at least one processor 106 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the at least one processor 106 is configured to fetch and execute computer-readable instructions stored in the memory 107.
  • the memory 107 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes.
  • the I/O interface 108 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like. Further, the I/O interface may enable the device 105 to communicate with other computing devices, such as web servers and external data servers (not shown). The I/O interface may facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. The I/O interface may include one or more ports for connecting a number of devices to one another or to another server.
  • the device 105 is communicatively coupled to at least one sensor 110.
  • the device 105 may communicate with the at least one sensor 110 through a wired or wireless technology such as Bluetooth, ZigBee, WI-FI, Internet of Things (IoT) and so on.
  • the at least one sensor 110 may be one of an accelerometer, a magnetometer, a gyroscope, a Micro Electronic Mechanical System (MEMS), and a Nano Electronic Mechanical System (NEMS).
  • the at least one sensor 110 is used for sensing the motion of the user when strapped on the arm, chest, legs, abdomen, and so on, and is therefore referred to as motion sensor 110 for the purpose of explanation.
  • the device 105 may detect movements performed by the user.
  • the motion sensor 110 may be incorporated in a wearable device such as a smart watch, a fitness band, a smart garment and so on.
  • the user may wear the smart watch comprising the motion sensor 110, as shown in FIG. 1B, while performing an activity, e.g., lifting weights.
  • the motion sensor 110 may also be strapped onto the body of the user, using Velcro or other suitable clothing, to capture the raw motion data.
  • the motion sensors 110 capture raw motion data continuously in real-time.
  • the raw motion data may be captured when there is a change in values corresponding to acceleration of the body part performing the activity.
  • accelerometers may be used to sense a change in acceleration of a body corresponding to change in body mass and/or gravity.
  • the accelerometer is employed to measure the acceleration in horizontal as well as vertical directions.
  • a single accelerometer may be used to measure both body mass as well as gravitational acceleration.
  • one accelerometer may be used to measure a change in body mass acceleration and another accelerometer may be used to measure a change in gravitational acceleration.
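Where only a single accelerometer is available, as in the preceding embodiment, the two acceleration components can be approximated in software. A hedged sketch, assuming a simple exponential low-pass filter; the smoothing factor alpha is an illustrative choice, not a value from the patent:

```python
# Estimate gravity with a low-pass filter and subtract it to obtain the
# body's linear acceleration. alpha is an illustrative assumption.

def split_gravity(accel, alpha=0.9):
    """Return (gravity, body) components of a raw acceleration stream."""
    gravity, body = [], []
    g = accel[0]
    for a in accel:
        g = alpha * g + (1 - alpha) * a   # slow component: gravity estimate
        gravity.append(g)
        body.append(a - g)                # fast residue: user's movement
    return gravity, body

# A user at rest produces a constant 9.81 m/s^2 reading and ~zero body motion.
gravity, body = split_gravity([9.81] * 50)
```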
  • the gyroscope may be used to measure a change in orientation of the body of the user.
  • the device 105 may be a mobile phone.
  • the mobile phone comprises a processor, a memory and an I/O interface.
  • the mobile phone may include motion sensors 110 such as accelerometers.
  • the motion sensor 110 within the mobile phone may be used to capture raw motion data from the user, in addition to other external motion sensors 110.
  • the user may train the device 105 during a training session by providing a training dataset.
  • the training dataset comprises raw motion data of the user performing an activity. For example, consider the user is performing the hammer curls for a particular duration. At the time of performing the hammer curls, the raw motion data corresponding to the activity is captured using the motion sensors 110 attached to the body of the user. The process of training the device 105 is explained using FIG. 2.
  • Referring to FIG. 2 in conjunction with FIGS. 3 and 4, a process 200 of training the device 105 based on the motion data is shown, in accordance with one embodiment of the present disclosure.
  • the user wears the wearable device housing the motion sensor 110.
  • the user initiates a training session on the device 105.
  • the user initiates the training session using the I/O interface 108 of the device 105.
  • the user initiates the training session by operating the wearable device.
  • the memory 107 may store information corresponding to a list of pre-fed exercise activities such as bicep curl, hammer curls, cycling, lifting weights, pain rehabilitation, and so on.
  • the user may have an option to train the device 105, i.e., if the device 105 has information about the pre-fed exercise activities, then the user may train the device 105 to improve identification of an activity.
  • the user may train the device 105 to identify the new activity. For example, if the user wishes to train the device 105 for hammer curl activity, then the user may select 'hammer curl activity' from the list and may train the device 105. Similarly, if the user wishes to train the device 105 for bicep curl activity, then the user may select bicep curl activity from the list and may train the device 105. It should be understood that the user may select any activity such as bicep curl, hammer curls, cycling, lifting weights, pain rehabilitation, and so on to the device 105 for identifying the activity whenever the activity is performed by the user.
  • the user may create a new activity and train the device 105.
  • the user may provide raw motion data corresponding to the new activity for training the device 105 to identify the new activity.
  • the device 105 may receive the raw motion data from the motion sensor 110.
  • the motion sensor 110 continuously feeds the raw motion data to the device 105, as shown at step 210. It should be understood that the motion sensor 110 may send the raw motion data to the device 105 in real-time or with a time-delay, e.g., 2 minutes.
  • After receiving the raw motion data, the device 105 performs sampling on the raw motion data. Specifically, the device 105 samples the raw motion data at a rate higher than the Nyquist sampling rate in order to avoid aliasing effects. As is known, the Nyquist sampling rate refers to a sampling frequency equal to twice the highest frequency present in the raw motion data. Although the sampling is shown to be carried out in the device 105, it must be understood that the sampling may be carried out within the wearable device or in a data acquisition unit (not shown) attached to the device 105. Further, the device 105 may filter unwanted noise signals from the raw motion data, using signal filters such as Kalman filters.
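A minimal sketch of this Nyquist-rate check. The 20 Hz upper bound assumed for exercise motion content is illustrative, not a value taken from the patent:

```python
# Sampling must exceed twice the highest frequency in the signal (the
# Nyquist rate) to avoid aliasing; 20 Hz is an illustrative assumption.

def min_sampling_rate(highest_signal_hz):
    """Nyquist rate: minimum sampling frequency that avoids aliasing."""
    return 2.0 * highest_signal_hz

def is_adequate(sampling_hz, highest_signal_hz):
    return sampling_hz > min_sampling_rate(highest_signal_hz)

adequate = is_adequate(50.0, 20.0)   # 50 Hz > 40 Hz: no aliasing
too_slow = is_adequate(30.0, 20.0)   # 30 Hz < 40 Hz: aliasing risk
```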
  • the device 105 calculates the number of samples of raw motion data received from the motion sensor 110. Specifically, the device 105 calculates the number of times the user repeats an activity. In one example, the device 105 may be pre-configured to receive a predefined number of samples such that the samples received may be used for further processing of the raw motion data. In one example, the device 105 may be pre-configured to receive 30 samples. In another example, the device 105 may be pre-configured to receive 50 samples from the motion sensor 110. It should be understood that the number of samples that the device receives is given for explanation purposes only and should not be construed in a limited sense.
  • the device 105 checks whether the number of samples of raw motion data has reached the predefined number, as shown in step 215. If the predefined number of samples is reached, then step 220 is performed. Otherwise, step 210 is performed, i.e., the user repeats the exercise activity till the device 105 receives the predefined number of samples of raw motion data. For example, in case of lifting weights, the user may be requested to repeat the weight lifting 30 times. Each instance of lifting the weights may be considered to be one sample, leading to a total of 30 samples.
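A hedged sketch of counting repetitions until the predefined number (30 in the example above) is reached. Counting upward threshold crossings is an illustrative technique; the threshold value and the simulated signal are assumptions, not the patent's method:

```python
import math

def count_reps(signal, threshold=0.5):
    """One count per upward crossing of the threshold."""
    reps, above = 0, False
    for v in signal:
        if v > threshold and not above:
            reps += 1
        above = v > threshold
    return reps

# Simulated wrist acceleration for 30 lifts: 30 full sine cycles.
signal = [math.sin(2 * math.pi * i / 50) for i in range(30 * 50)]
enough_samples = count_reps(signal) >= 30   # device may now notify the user
```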
  • the device 105 may display a notification to the user either on the I/O interface 108 or on a display portion (not shown) of the motion sensor 110, as shown at step 220.
  • the notification may include a message notifying the user to stop performing the exercise activity.
  • the device 105 may create a raw motion data file comprising details of the samples collected.
  • the device 105 may store the raw motion data file in the memory 107.
  • the device 105 processes the raw motion data and segregates exercise activity and non-exercise activity. Specifically, if the activity shows repeated spikes in the data, then the device 105 considers that activity to be an exercise activity. Similarly, if the activity shows a hiatus, then the device 105 considers that activity to be a non-exercise activity.
  • the device 105 may allow the user to provide a label depending on the activity performed. For example, if the user is training the device 105 to identify the activity of bicep curl, then the user may provide the label 'BICEP CURL' for that activity. The label provided may be used to present the classification of the exercise activity to the user, upon training.
  • when the user performs the exercise activity of bicep curl after the training session, the device 105 presents the label 'BICEP CURL' upon identifying the exercise activity. Similarly, for each exercise activity, the device 105 may allow the user to label the exercise activity as a first exercise activity, a second exercise activity, a third exercise activity and so on. For example, the user may label bicep curl as the first exercise activity, single hand triceps extension as the second exercise activity, lifting dumbbells as the third exercise activity and so on.
  • the device 105 may be used to identify more than two exercise activities using the present disclosure.
  • the device 105 may further associate muscle group identifiers with the exercise activity. For example, in case of bicep curl where the muscle group used comprises biceps, the muscle group identifier may be 'BICEPS'. Similarly, in case of single hand triceps extension, where the muscle group used comprises triceps, the muscle group identifier may be 'TRICEPS'. In one embodiment, the muscle group identifiers may be provided to the device 105 by the user. In another embodiment, the device 105 may assign the muscle group identifiers based on the location of motion sensors 110 or based on the type of exercise activity selected by the user. In one example, the device 105 may further create associate files for the exercise activity and the muscle group identifiers respectively. In another example, the device 105 may use the associate files corresponding to the exercise activity and the muscle group identifiers already present in the memory 107.
  • the device 105 derives an attribute set file from the raw motion data file, as explained using FIG. 3.
  • a process 300 for deriving attribute set file for an exercise activity is shown, in accordance with one embodiment of the present invention.
  • the device 105 removes raw motion data corresponding to non-exercise activities from the raw motion data file.
  • the device 105 processes the raw motion data file to identify frames.
  • the frame comprises a set of samples selected from the raw motion data file.
  • the frame may comprise a pre-defined number of consecutive samples.
  • the frame is identified based on parameters such as overlap, frame size and number of samples of raw motion data.
  • the overlap step may represent a degree of overlap between two consecutive frames.
  • the frame size indicates the number of samples within the frame.
  • the number of samples of raw motion data may represent a total number of samples in the raw motion data file.
  • the raw motion data file comprises 5000 values of raw motion data, with a moderate frame size of 150 values and a moderate overlap step of 80%.
  • the overlap may be calculated as:
  • Overlap frame size = floor((overlap step × frame size) / 100)
  • the number of frames may be calculated as:
  • the raw motion data file comprising 5000 values is divided into 165 frames.
  • Each of the frames comprises 150 values. Based on the number of frames and the frame size, the start and end of a frame are identified. Further, the frame is extracted and duplicated.
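The framing arithmetic above can be sketched as follows, under the assumption that the overlap formula yields the number of shared samples, so consecutive frames advance by (frame size − overlap). With this convention 5000 values produce 162 full frames; the patent reports 165, so its exact edge handling evidently differs slightly:

```python
import math

def frame_starts(n_samples, frame_size, overlap_step_pct):
    """Start index of every full frame, given an overlap step in percent."""
    overlap = math.floor(overlap_step_pct * frame_size / 100)  # shared samples
    stride = frame_size - overlap                              # advance per frame
    return list(range(0, n_samples - frame_size + 1, stride))

starts = frame_starts(5000, 150, 80)   # overlap = 120 samples, stride = 30
```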
  • the device 105 extracts a signal attribute set from each frame.
  • the signal attribute set may comprise signal attributes in the time domain, the frequency domain and the wavelet domain.
  • the calculation of signal attributes helps in separating the exercise activity from non-exercise activities.
  • the exercise activities are assumed to be repetitive in nature, e.g., jogging is a repetitive motion of arms and legs.
  • for non-exercise activities such as bathing, sitting, and so on, the motion of the arms or legs is non-repetitive in nature.
  • a discrete Fourier series may be used.
  • the discrete Fourier transform may be used to identify non-repetitive samples related to non-exercise activities and transients due to noise.
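The disclosure does not fix a particular frequency-domain attribute, but one plausible sketch is the fraction of spectral energy concentrated at the dominant DFT bin: near 1 for repetitive motion, much lower for noise-like, non-repetitive motion. The function name and the threshold interpretation are illustrative assumptions.

```python
import numpy as np

def dominant_energy_ratio(frame):
    """Hypothetical repetitiveness attribute: the fraction of (non-DC)
    spectral energy in the single strongest DFT bin."""
    spectrum = np.abs(np.fft.rfft(frame - np.mean(frame)))  # discrete Fourier transform
    energy = spectrum ** 2
    total = energy.sum()
    return float(energy.max() / total) if total > 0 else 0.0

t = np.arange(150)
repetitive = np.sin(2 * np.pi * t / 15)                # e.g. arm swing while jogging
transient = np.random.default_rng(0).normal(size=150)  # noise-like, non-repetitive motion

print(dominant_energy_ratio(repetitive))   # close to 1.0
print(dominant_energy_ratio(transient))    # spread over many bins, much lower
```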
  • Based on the values of the signal attributes, the device 105 separates samples related to the non-exercise activities from samples related to the exercise activity. Upon removing samples related to the non-exercise activities, the signal attribute set of the frame is stored in the memory 107 of the device 105.
  • the device 105 determines whether the end of the raw motion data file is reached, as shown at step 315. In other words, the device 105 determines whether the signal attribute sets for all the frames in the raw motion data file are calculated. If the end of the raw motion data file is reached, then step 320 is performed. Otherwise, step 305 is performed to identify the start and end of the next frame. [043] At step 320, the device 105 combines the signal attribute sets corresponding to all the frames to form an attribute set file. [044] Further, the attribute set file is associated with the label for the exercise activity and the attribute set file is stored in the associate files corresponding to the exercise activity on the device 105.
  • the attribute set file is further associated with the muscle group identifier and stored in the associate files corresponding to the muscle group identifier.
  • the attribute set file may be stored under two associate files namely, 'BICEP CURL', and 'BICEPS'.
  • the muscle group associate file 'BICEPS' may comprise attribute set files related to exercise activities associated with the muscle group, other than bicep curl, for e.g., dumbbell curl.
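The associate files can be modeled, purely for illustration, as an in-memory index in which each attribute set file is stored under both its exercise label and its muscle group identifier; the dictionary layout and function name are assumptions, not the patented storage format.

```python
from collections import defaultdict

# Hypothetical in-memory model of the "associate files": each attribute
# set file is indexed both by its exercise label and by its muscle group.
associate_files = defaultdict(list)

def store_attribute_set(attribute_set, exercise_label, muscle_group):
    associate_files[exercise_label].append(attribute_set)
    associate_files[muscle_group].append(attribute_set)

store_attribute_set({"frames": 162}, "BICEP CURL", "BICEPS")
store_attribute_set({"frames": 140}, "DUMBBELL CURL", "BICEPS")

print(len(associate_files["BICEPS"]))      # 2: both exercises use this muscle group
print(len(associate_files["BICEP CURL"]))  # 1
```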
  • the device 105 may display a status 'Exercise processed' to the user on the I/O interface 108.
  • the memory 107 may comprise a master dataset and a test dataset.
  • the master dataset may indicate a data store storing standardized motion data for various exercise activities along with corresponding labels.
  • the pre-determined motion pattern corresponding to the first exercise activity and the second exercise activity are stored in the master dataset.
  • the master dataset comprises information or data points of all the exercises.
  • raw data for each exercise activity is fetched.
  • features of each exercise activity are extracted and the features corresponding to each exercise activity are stored in the memory 107.
  • the standardized motion data may be pre-fed into the memory 107 as master dataset at the time of manufacturing the device 105.
  • the master dataset may comprise data corresponding to bicep curl as first exercise activity, single hand triceps extension as second exercise activity and so on.
  • the standardized motion data may be built using motion data collected from experts such as fitness trainers.
  • the master dataset is built in such a way that whenever the user performs the activity, the user performance may be compared with standardized motion data and the corresponding exercise activity is automatically identified and presented to the user.
  • the test dataset may indicate a data store that stores and updates the data based on the activities performed by the user in real time.
  • test dataset indicates a real-time motion pattern, specified by the user, corresponding to the first exercise activity and the second exercise activity. For example, at the time of training, the user may specify that lifting dumbbells is the first exercise activity and bicep curl is the second exercise activity.
  • the user may define or label the activities of other types.
  • While the user performs the activity, the device 105 updates the master dataset and the test dataset by adding the attribute set file for the exercise activity. In other words, the master dataset and the test dataset are modified based on the latest raw motion data for the exercise activity, such as the first exercise activity and the second exercise activity.
  • the device 105 tries to identify the activity and classifies the activity into one of the first exercise activity and the second exercise activity.
  • the device 105 is trained using classifiers. Specifically, the device 105 utilizes the attribute set file to train the classifiers.
  • the attribute set file is associated with the label for the exercise activity and is stored in the associate files corresponding to the exercise activity on the device 105. The data in the attribute set file are used to train the classifier, as explained using FIG. 4.
  • at step 235, training of the classifier for classifying the exercise activity starts automatically using the data in the attribute set file.
  • the device 105 builds a learning network for the exercise activity.
  • the process of training the classifier for the exercise activity, by the device 105, using the motion data is explained in detail using FIG. 4.
  • referring to FIG. 4, a process 400 of training a classifier, by the device 105, for classifying an exercise activity is shown, in accordance with one embodiment of the present disclosure.
  • the motion data for training the classifier is obtained from the attribute set file stored in the master dataset.
  • the device 105 implements a machine learning technique.
  • the device 105 implements one of an unsupervised learning technique and a supervised learning technique.
  • the unsupervised learning technique may include, but is not limited to, k-means clustering, mixture models and hierarchical clustering.
  • the device 105 may select specific samples from the motion data received, as shown in step 405.
  • a neural network model may be used. For example, if k-means clustering is used, then the specific samples of the motion data may be obtained by calculating a centroid for the motion data in a cluster.
  • the cluster may comprise a plurality of motion data and initial means.
  • the initial means refer to a random mean initially assigned to the cluster of motion data.
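Assuming the "specific samples" are cluster centroids, a bare-bones k-means (Lloyd's algorithm) sketch might look as follows. The initial means are chosen deterministically here only to keep the example reproducible; in practice they would be random, as described above.

```python
import numpy as np

def kmeans_centroids(points, init_idx, iters=20):
    """Plain Lloyd's algorithm: the returned centroids act as the
    'specific samples' representing the motion data."""
    centroids = points[init_idx].astype(float)  # the "initial means"
    for _ in range(iters):
        # assign every sample to its nearest centroid
        dists = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        # move each centroid to the mean of its cluster
        for j in range(len(centroids)):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids

# two well-separated blobs standing in for motion data of two activities
rng = np.random.default_rng(1)
points = np.vstack([rng.normal(0.0, 0.1, (50, 3)),
                    rng.normal(5.0, 0.1, (50, 3))])
cents = kmeans_centroids(points, init_idx=[0, 99])
print(sorted(round(float(c.mean()), 1) for c in cents))  # [0.0, 5.0]
```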
  • the specific samples of motion data selected comprise a training dataset and a testing dataset.
  • the training dataset indicates data that is fed to the device 105 at the time of manufacturing.
  • the training dataset indicates the data corresponding to the exercise activities that are classified to identify the exercise activities as the first exercise activity, second exercise activity at the initial stage.
  • the training dataset comprises a set of input values and a corresponding set of desired output values.
  • the testing dataset comprises a set of values for validating properties of the classifiers after training.
  • the properties of the classifier may comprise accuracy of training and classification errors.
  • the device 105 may select 70% of the motion data as the training dataset and remaining 30% of the motion data as the testing dataset.
  • the device 105 may select 90% of the motion data as the training dataset and the remaining 10% of the motion data as the testing dataset.
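The 70/30 split might be sketched as a shuffled partition; the shuffling, seed and function name are illustrative assumptions rather than details from the disclosure.

```python
import random

def split_dataset(samples, train_fraction=0.7, seed=42):
    """Shuffle the motion data and split it into training and testing sets."""
    shuffled = list(samples)
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

train, test = split_dataset(range(100))  # 100 stand-in motion-data samples
print(len(train), len(test))             # 70 30
```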
  • the training dataset is fed to the classifier, as shown at step 410.
  • the classifier may use the supervised learning network that includes one of an Artificial Neural Network (ANN), boosted tree networks and a Random forest network.
  • the device 105 uses the artificial neural network for the classification.
  • the artificial neural network may be a multi-layer perceptron (MLP).
  • the multilayer perceptron comprises a first layer comprising input nodes, a second layer comprising hidden nodes and a third layer comprising output nodes.
  • Each input node is connected to each of the hidden nodes. Further, each hidden node is connected to each of the output nodes.
  • the hidden nodes in the second layer are used as processing elements. More specifically, the hidden nodes perform a non-linear activation function on a weighted sum of inputs received from the input nodes.
  • the output of the non-linear activation function, available at the output node is a class label, i.e., the label corresponding to the exercise activity. For example, if the exercise activity includes Zottman curl and bicep curl, then the device 105 may perform the unsupervised learning technique followed by the supervised learning technique and may derive motion pattern to classify the Zottman curl activity as first exercise activity and bicep curl activity as the second exercise activity.
  • the device 105 may label the Zottman curl activity as first exercise activity and bicep curl activity as the second exercise activity.
  • although the unsupervised learning technique is performed followed by the supervised learning technique for classifying the exercise activity, the supervised learning technique alone can be performed for classifying the exercise activity, and such implementations are within the scope of the present disclosure.
  • the device 105 may train the classifier using the training dataset with the help of supervised learning technique, as shown in step 415.
  • the building of the ANN for classifying the exercise activity may be explained using backpropagation algorithm.
  • the classifier uses the training dataset of the exercise activity as examples for training. Further, the classifier is trained to produce the classification corresponding to the exercise activity as output. In each iteration of the training, the output of the ANN is compared with the correct classification. If the output of the ANN is different from the correct classification, then the weights in the ANN are adjusted to produce the correct classification as output.
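A minimal one-hidden-layer MLP trained by backpropagation, mirroring the loop described above (compare the output with the correct classification, then adjust the weights), can be sketched as follows. The sigmoid activation, squared-error gradient, layer sizes and toy data are all illustrative assumptions, not details from the disclosure.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy attribute sets: two clusters standing in for two exercise activities
X = np.vstack([rng.normal(0.0, 0.5, (30, 2)), rng.normal(3.0, 0.5, (30, 2))])
y = np.array([0] * 30 + [1] * 30, dtype=float).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# input nodes -> hidden nodes -> output node, fully connected
W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.5

for _ in range(2000):
    # forward pass: hidden nodes apply a non-linear activation to a
    # weighted sum of the inputs
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backpropagation: compare the output with the correct classification
    # and adjust the weights to reduce the error
    grad_out = (out - y) * out * (1.0 - out)
    grad_h = (grad_out @ W2.T) * h * (1.0 - h)
    W2 -= lr * h.T @ grad_out / len(X); b2 -= lr * grad_out.mean(axis=0)
    W1 -= lr * X.T @ grad_h / len(X);   b1 -= lr * grad_h.mean(axis=0)

accuracy = float(((out > 0.5) == (y > 0.5)).mean())
print(accuracy)  # the separable toy data should be classified almost perfectly
```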
  • the classifier is tested for accuracy of training and classification errors using another testing dataset.
  • the above process of training the classifier is performed for each exercise activity.
  • a classifier is maintained for each of the exercise activities, thereby leading to a plurality of classifiers for a plurality of exercise activities.
  • the classifiers such as Artificial Neural Network (ANN), boosted tree networks and a Random forest network may be used to classify the exercise activity of bicep curl.
  • the classifiers such as Artificial Neural Network (ANN), and boosted tree networks may be used to classify the exercise activity of hammer curls.
  • one or more classifiers may be used to classify one exercise activity.
  • a single classifier may be used to classify one or more exercise activity.
  • Upon training the classifier, the device 105 starts tracking the activities performed by the user. Subsequently, whenever the user performs a subsequent activity, the device 105 identifies subsequent motion data and analyzes the subsequent motion data to identify the type of exercise activity, i.e., the first exercise activity or the second exercise activity. After identifying the type, the device 105 presents the label corresponding to the exercise activity to the user. [055] At step 240, the device 105 determines whether the user wishes to train the device 105 for classifying another exercise activity. If the user wishes to train the device 105 for classifying another exercise activity, then step 210 is performed. Otherwise, step 240 is performed. The user may train the device 105 to classify a plurality of exercise activities such as walking, jogging, running, lifting weights, doing push-ups, planks and so on.
  • the device 105 continues tracking the activities performed by the user.
  • a process 500 for identifying an exercise activity performed by a user is shown, in accordance with one exemplary embodiment of the present disclosure.
  • the process 500 is presented to identify a label for a subsequent exercise activity performed by the user.
  • the memory 107 comprises the master dataset and the test dataset comprising standardized motion data and user-defined motion data, respectively.
  • two or more exercise activities identified along with labelling of the exercise activities are stored in the master dataset and the test dataset.
  • the exercise activity performed is identified and presence of the exercise activity in one of the master dataset and the test dataset is determined.
  • the label corresponding to the exercise activity is displayed to the user.
  • the user wears a wearable device comprising at least one motion sensor.
  • the motion sensors 110 attached to the body of the user capture raw motion data of subsequent activity.
  • the raw motion data of the subsequent activity i.e., subsequent motion data may correspond to an exercise activity or non-exercise activity.
  • the exercise activity may be the user lifting weights in a gym. In between lifting weights, the user may perform a non-exercise activity, e.g., drinking water.
  • the raw motion data is further sent to the device 105 for further sampling and processing.
  • the device 105 separates the exercise activity and the non-exercise activity from the subsequent activity.
  • the device 105 combines the raw motion data of the subsequent activity to form a raw motion data file. Further, the device 105 calculates signal attributes from raw motion data file of the subsequent activity as explained earlier using FIG. 3. Based on the signal attributes, the device 105 separates the exercise activity and the non-exercise activities from the subsequent activity. In other words, the device 105 removes non-repetitive samples from repetitive samples to retain only the samples, i.e., the motion data related to the exercise activity.
  • the device 105 determines the muscle group of the exercise activity.
  • the muscle group of the exercise activity is specified by the user.
  • the user may specify the muscle group of the exercise as 'biceps'.
  • the muscle group is determined by identifying the motion sensor 110 from which raw motion data of the subsequent activity is received. For each muscle group there is a corresponding set of classifiers for related exercise activities.
  • the exercise activities related to biceps may comprise chin-up, hammer curl, Zottman curl and single-arm curl, i.e., corresponding to biceps, there may be a first classifier for chin up, a second classifier for hammer curl, a third classifier for Zottman curl and so on.
  • the motion data of the exercise activity is fed to the classifiers corresponding to each of the exercise activities of the muscle group. For instance, if the exercise activity performed by the user is chin-up, then the motion data is fed to classifiers corresponding to biceps, i.e., to the classifiers corresponding to chin-up, hammer curl, Zottman curl and single- arm curl. Subsequently, the classifiers produce classifications for the motion data.
  • the best classification among all the classifications produced by the classifiers is selected based on voting based prediction.
  • the voting based prediction selects the classification most often predicted by the classifiers.
  • the device 105 predicts the labelling of the motion pattern or motion data as the first exercise activity or the second exercise activity, by analyzing the specific motion data samples using the voting based prediction.
  • the motion data corresponding to the user performing chin-up is provided to, say, seven classifiers within the muscle group 'biceps'.
  • each of the classifiers is trained to identify and classify the type of activity into different exercise activities.
  • the motion data of the user performing chin-up exercise is provided to the seven classifiers.
  • Each of the seven classifiers predicts the exercise activity based on a correlation coefficient.
  • the correlation coefficient indicates a measure of the degree of closeness of the exercise activity to a pre-fed exercise activity. If the exercise activity performed by the user is similar to a pre-fed exercise activity, then the motion data has high correlation coefficient with the pre-fed exercise activity. Consequently, the classifier may predict the exercise activity with high accuracy.
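One conventional way to obtain such a coefficient, shown here only as an illustration, is the Pearson correlation between the performed motion and a stored (pre-fed) template; the signals and variable names below are hypothetical stand-ins.

```python
import numpy as np

t = np.arange(150)
prefed_chinup = np.sin(2 * np.pi * t / 30)   # hypothetical stored template motion
user_motion = (np.sin(2 * np.pi * t / 30 + 0.1)
               + 0.05 * np.random.default_rng(0).normal(size=150))

# degree of closeness of the performed motion to the pre-fed activity
r = float(np.corrcoef(prefed_chinup, user_motion)[0, 1])
print(r > 0.9)  # a close match yields a high correlation coefficient
```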
  • After providing the motion data to each classifier, the classifiers identify or predict the type of activities based on the correlation coefficient with respect to the chin-up exercise.
  • the classifiers that are trained may not classify the activities accurately due to several reasons such as insufficient data points, noise in raw motion data and so on.
  • the device 105 employs all the available classifiers to identify and classify the exercise activities. After employing the classifiers, the type of exercise activity that most of the available classifiers predict is selected. In order to explain voting-based prediction, an example may be used.
  • the motion data is fed to available classifiers e.g., seven classifiers.
  • Upon employing the seven classifiers, consider four out of the seven classifiers predict or classify the activity as 'chin-up' with accuracies of 60%, 80%, 95% and 72%. Two of the remaining three classifiers predict the activity as hammer curl with accuracies of 53% and 45%. The remaining classifier predicts the activity as single-arm curl with 67% accuracy.
  • the device 105 selects the label of the activity predicted by the highest number of classifiers. In the present example, four out of seven classifiers predicted the exercise activity as chin-up. Consequently, the device 105 classifies the exercise activity as 'chin-up', since the highest number of classifiers voted for or identified the chin-up exercise activity.
  • the device 105 may classify or predict the exercise activity as 'chin-up'.
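The worked example above reduces to a simple majority vote over the classifier outputs. The tuple format and function name below are illustrative; the per-classifier accuracies are carried along only to mirror the example.

```python
from collections import Counter

# (label, accuracy) predicted by each of the seven classifiers,
# mirroring the worked example above
predictions = [
    ("chin-up", 0.60), ("chin-up", 0.80), ("chin-up", 0.95), ("chin-up", 0.72),
    ("hammer curl", 0.53), ("hammer curl", 0.45),
    ("single-arm curl", 0.67),
]

def vote(predictions):
    """Select the label predicted by the highest number of classifiers."""
    counts = Counter(label for label, _ in predictions)
    return counts.most_common(1)[0][0]

print(vote(predictions))  # chin-up
```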
  • the voting-based prediction is useful when the activity performed is similar to the activities defined in the master dataset.
  • the correlation coefficient of the certain exercise activity may be low.
  • the device 105 invokes all the classifiers. Further, the device 105 checks which classifier provides the highest correlation coefficient, and accordingly chooses the label identified by that classifier.
  • the device 105 tries to classify the exercise in the test dataset such that the exercises labelled by the user are identified and the classification of the exercise is improved. If the exercise performed is not close to the exercises present in the test dataset, then the device 105 tries to classify the exercise based on the master dataset. In order to classify the exercise, the device 105 invokes the classifiers that are trained to identify the exercise activity.
  • the device 105 checks whether the exercise activity is present in the test dataset. In other words, the subsequent exercise activity is checked to determine whether subsequent exercise activity is similar to the exercise activity present in the test dataset or the master dataset. The exercise activity is looked up in the test dataset using the classification to identify the corresponding label. If the exercise activity is present in the test dataset, then step 535 and 540 are performed. Otherwise, step 545 through 570 is performed.
  • the device 105 After checking the subsequent exercise activity with the exercise activity present in the test dataset, the device 105 presents the label corresponding to the exercise activity if the subsequent exercise activity matches with the exercise activity, as shown at step 535. Subsequently, the device 105 displays the label to the user on the I/O interface 108. For example, if the exercise activity is chin up, then the label corresponding to the classification obtained from the classifier i.e., 'CHIN UP' may be presented to the user.
  • the device 105 uses the subsequent motion data of the subsequent exercise activity, to train the classifier corresponding to the subsequent exercise activity.
  • the subsequent motion data is added to the existing motion data and the combined data is used to train the classifier.
  • the classifiers may be trained using a supervised learning technique. Referring to the previous example of performing chin-up, the classifiers corresponding to chin-up are trained using the subsequent motion data of the user performing chin-up. Specifically, the training of the classifier is performed considering the subsequent motion data along with the exercise activity available in the test dataset, as explained using FIG. 4.
  • the device 105 checks whether the exercise activity corresponding to the subsequent motion data is present in the master dataset, as shown at step 545. In one embodiment, the device 105, at first, checks whether the subsequent exercise activity matches with the exercise activity in the test dataset, as shown at step 535. If the subsequent exercise activity does not match with the exercise activity in the test dataset, then the subsequent exercise activity is matched with the exercise activity in the master dataset. If the exercise activity is found in the master dataset, then steps 550 through 560 are performed. Otherwise steps 565 and 570 are performed.
  • the device 105 presents the label corresponding to the exercise activity, as obtained from the master dataset, to the user. Further, the device 105 may provide a notification to the user, such as 'We detected the activity as PUSH-UPS from the master data'.
  • the device 105 trains the classifier using the subsequent exercise activity i.e., subsequent motion data and the label presented by employing the supervised learning technique.
  • the classifier is trained to classify an exercise activity e.g., hammer curls after receiving 30 samples of raw motion data.
  • for the subsequent exercise activity, say the 31st sample of hammer curl performed at a later time, e.g., after seven days, the device 105 employs the classifiers to identify and classify the subsequent exercise activity as the hammer curl activity.
  • the device 105 may add the 31st sample to the previous 30 samples to identify the next sample, i.e., the 32nd sample. In other words, the device 105 considers all the available samples to predict the type of the next activity and improves the accuracy of the classifiers.
  • the exercise activity with the subsequent motion data and the label obtained from the test dataset or the master dataset are added to the master dataset. Further, the device 105 provides an option for the user to train the classifier, for identifying the exercise activity with improved accuracy. The classifier is trained by the user as explained previously, using FIGS. 2,3 and 4.
  • the device 105 detects that the exercise activity is not identified either in the test dataset or the master dataset. As a result, the device 105 presents the activity as a new activity. Further, the device 105 may prompt the user to assign a label for the new activity, as shown in step 570. After the user assigns the label, steps 555 and 560 are performed and a new exercise activity is added into the test dataset. [075] It must be understood that every subsequent exercise activity added to the test dataset is automatically added to the master dataset. Further, if the motion data corresponding to the exercise activity in the test dataset is found to be more accurate, then the motion data in the master dataset is updated using the more accurate motion data. The process of updating the master dataset using the more accurate motion data from the test dataset may be performed on a regular basis, e.g., every month or every year.
  • the device 105 is said to be trained for exercise activities, it must be understood that in addition to the exercise activities, the device 105 may also be trained for non-exercise activities such as activities related to pain rehabilitation, physiotherapy and so on.
  • referring to FIG. 6, an environment 600 of a device 605 for classifying activities based on motion data is shown, in accordance with another embodiment of the present disclosure.
  • the device 605 is communicatively coupled to at least one motion sensor 610.
  • the device 605 is further communicatively coupled to a server 615.
  • the server 615 may include at least one processor (not shown) and a memory (not shown).
  • the at least one processor may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions.
  • the at least one processor is configured to fetch and execute computer-readable instructions stored in the memory.
  • the memory may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes.
  • the device 605 communicates with the server 615 over a network 620.
  • the server 615 may also be implemented in a variety of computing systems, such as a mainframe computer, a network server, cloud, and the like.
  • the network 620 can be implemented as one of the different types of networks, such as an intranet, a local area network (LAN), a wide area network (WAN), the internet, and the like.
  • the network 620 may either be a dedicated network or a shared network.
  • the shared network represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), and the like, to communicate with one another.
  • the network 620 may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, and the like. [081]
  • the process of training classifiers and classifying of exercise activities performed by a user are implemented in the server 615.
  • the processing of the motion data is performed on the server 615, rather than on the device 605.
  • the device 605 receives the raw motion data from the motion sensor 610. Further, the device 605 sends the raw motion data to the server 615 over the network 620 for further processing.
  • the server 615 maintains the classifiers, the test dataset and the master dataset corresponding to the user.
  • the server 615 may maintain classifiers, and master dataset corresponding to a plurality of users.
  • although the processing is shown to occur in real time, the device 605 may store the raw motion data from the user offline and upload the raw motion data to the server 615 when the next online session is detected.
  • at step 710, the user selects the exercise activity to be trained on the device 605. Further, the device 605 sends a user identification number, a muscle group identifier and a label corresponding to the exercise activity to the server 615. As explained using FIG. 2, the label may be provided by the user at the time of training. Further, in order to train the device 605, the user wears the motion sensor 610 and repeatedly performs the exercise activity. The user performs the exercise activity until the device 605 receives enough samples of motion data for training. The device 605 collects the motion data corresponding to the exercise activity from the motion sensor 610. Based on the motion data, the device 605 creates a raw motion data file.
  • the device 605 uploads the raw motion data file to the server 615.
  • the server 615 reads the motion data from the raw motion data file.
  • the server 615 processes the raw motion data file to identify a frame. The frame is identified based on overlap, frame size and number of samples of motion data, as explained earlier using FIG. 3. Further, the frame is extracted and duplicated.
  • the server 615 extracts a signal attribute set from the frame.
  • the signal attribute set may comprise time domain attributes, frequency domain attributes and wavelet domain attributes. Further, the signal attribute set is used to separate the exercise activity from the non-exercise activity, as explained using FIG. 3. Further, the signal attribute sets corresponding to the exercise activity are stored in the memory of the server 615.
  • the server 615 determines whether the end of the raw motion data file is reached. In other words, the server 615 determines whether the signal attribute sets are derived for all the frames in the raw motion data file. If the end of the file is reached, then step 740 is performed. Otherwise, step 725 is performed to identify the next frame. [090] At step 740, the server 615 combines the signal attribute sets corresponding to all the frames to form an attribute set file. Further, the attribute set file is associated with the label for the exercise activity and stored in a folder corresponding to the exercise activity on the server 615. The attribute set file is further associated with the muscle group identifier and stored in a folder corresponding to the muscle group identifier.
  • the server 615 sends the status 'Exercise processed' to the device 605.
  • the server 615 updates the master dataset and test dataset using the attribute set file.
  • the master dataset and the test dataset are modified based on the latest motion data for the exercise activity.
  • maintaining the test dataset for storing user-specific motion data and further using the test dataset for training classifiers ensures accurate classification of the exercise activity performed by the user. Further, the master dataset helps in classification of exercise activities previously untrained by the user. Continuous training of classifiers using subsequent motion data from the user improves the accuracy of classification. Further, based on the motion data obtained from the user and the classification, suggestions may be given to the user. In addition, based on the muscle group and the motion data, exercise equipment may be controlled.
  • a method 800 for training of classifiers to identify an activity based on motion data is shown, in accordance with one embodiment of the present disclosure.
  • the process begins at step 805.
  • at step 810, raw motion data of a user performing an exercise activity and a non-exercise activity over a period of time are captured.
  • the raw motion data are processed to identify a motion pattern of each of the first exercise activity and the second exercise activity.
  • a subsequent motion data of the user performing the exercise activity is received.
  • the motion pattern of the subsequent motion data is derived into the first exercise activity or the second exercise activity.
  • the first exercise activity or the second exercise activity derived is classified into one of the test dataset and the master dataset.
  • at step 835, the label corresponding to the first exercise activity or the second exercise activity for the subsequent motion data is presented.

Abstract

A method of classifying activities performed by a user using motion data is disclosed. At first, raw motion data of a user performing an exercise activity and a non-exercise activity over a period of time is captured. The exercise activity comprises at least a first exercise activity and a second exercise activity. The first exercise activity or the second exercise activity is classified using classifiers. The classifiers are trained beforehand using motion data stored in a test dataset or a master dataset. The test dataset indicates a real-time motion pattern, specified by the user, and the master dataset indicates a pre-determined motion pattern, corresponding to the first exercise activity and the second exercise activity. Further, labels corresponding to the first exercise activity and the second exercise activity are presented to the user. The classifiers, the test dataset and the master dataset are further improved using the motion data.

Description

TRAINING OF CLASSIFIERS FOR IDENTIFYING ACTIVITIES BASED ON
MOTION DATA
FIELD OF INVENTION
[01] The present disclosure relates to a field of classifying activities performed by a user based on motion data.
BACKGROUND
[02] As is known, an individual performs several activities on any given day. The activities may include, but are not limited to, walking, sleeping, sitting, climbing stairs, moving body parts and so on. The activities may generally be classified as exercise and non-exercise activities based on the movements involved in performing any particular activity. Typically, the activities performed by the individual are measured using sensors such as fitness bands or associated machines. The measured activities may be used to adjust the exercise routine of the individual to complement activity levels. The activity level may indicate a quantitative and/or qualitative measure of the activity performed by the individual over a given time.

[03] In one example, the individual or user may wear fitness bands to measure the activity level while performing an activity. Typically, the fitness bands measure a total activity level irrespective of whether the activity is exercise or non-exercise. In other words, the user may perform one exercise activity for a particular duration and may take rest, indicating a non-exercise activity, before performing another exercise activity. When measuring individual and total activity levels, the fitness bands may consider the non-exercise activities along with the exercise activities, which may lead to improper calculation of activity levels. In order to measure the activity levels, motion identification systems are generally used to identify activities performed by the user.

[04] An existing motion identification system, e.g., EP2987452A1, published on Feb 24, 2016, discloses a method of classifying a predefined collection of exercises in a database. The exercises are selected based on the signature of the user's movement while performing an exercise.
The signature of the user's movement includes the time course of at least one of the following and/or a combination of at least two of the following parameters: strength, power, speed, stability, response time, flight time, contact time, stiffness, responsiveness, asymmetry, tilt, fatigue, injury, gestural efficiency (motor and/or energy). In another embodiment, the signature of movement is compared to an expected temporal course according to the exercise performed.
[05] Another example of a motion identification system is disclosed in US 20130346014 A1. In the published patent application, an apparatus for identifying a type of motion and a condition of a user is disclosed. The apparatus includes a motion detection sensor operative to generate an acceleration signature based on sensed acceleration of the user, and a controller. The controller determines what network connections are available to the motion detection device, and matches the acceleration signature with at least one of a plurality of stored acceleration signatures. The controller matches each stored acceleration signature with a type of motion of the user when processing capability is available to the motion detection device through available network connections, and identifies the type of motion of the user and a condition of the user based on the matching of the acceleration signature.

[06] It should be understood that the existing motion identification systems employ a fixed logic to determine the type of activity performed by the user, which does not give accurate identification of the activity. This is because the existing motion identification systems are not trained properly to identify the subsequent activities performed by the user. The above situation holds when the user performs an activity that differs from the logic pre-fed into the motion identification system. As a result, the motion identification system fails to accurately identify activities performed by the user. In other words, the motion identification system fails to identify the activity based on motion data that is user-specific.
[07] Further, a motion identification system employing the fixed logic does not enable the user to train the system to identify new activities. This is because, whenever a new exercise is performed by the user, the motion identification system tries to match the new exercise with one of the existing exercises, leading to misidentification of the new exercise.
SUMMARY

[08] This summary is provided to introduce concepts related to training of classifiers for identifying activities based on motion data; the concepts are further described below in the detailed description. This summary is not intended to identify essential features of the claimed subject matter, nor is it intended for use in determining or limiting the scope of the claimed subject matter.
[09] An example of a method for training classifiers to identify activities based on motion data is disclosed. The method comprises capturing, by a sensor, raw motion data of a user performing an exercise activity and a non-exercise activity over a period of time. The exercise activity comprises at least a first exercise activity and a second exercise activity. The method further comprises processing, by a processor, the raw motion data to train classifiers to identify a motion pattern of each of the first exercise activity and the second exercise activity. The motion pattern of each of the first exercise activity and the second exercise activity is identified by comparing with a master dataset or a test dataset. The master dataset indicates a pre-determined motion pattern corresponding to the first exercise activity and the second exercise activity. Further, the test dataset indicates a real-time motion pattern, specified by the user, corresponding to the first exercise activity and the second exercise activity. Further, the first exercise activity and the second exercise activity are identified by segregating specific motion data samples corresponding to the first exercise activity and the second exercise activity respectively from the raw motion data. The specific motion samples are segregated using an unsupervised learning technique. Each of the specific motion samples is further processed to label the first exercise activity and the second exercise activity corresponding to the motion pattern identified. The method further comprises receiving, by the sensor, a subsequent motion data of the user performing the exercise activity. The method further comprises classifying, by the processor, the subsequent motion data corresponding to a subsequent exercise activity into the first exercise activity or the second exercise activity in one of the test dataset and the master dataset.
The subsequent motion data is used to train the classifiers for identifying the subsequent exercise activity into one of the first exercise activity or the second exercise activity using a machine learning technique. The method further comprises presenting the label corresponding to the first exercise activity or the second exercise activity for the subsequent motion data.
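The classify-against-test-or-master-dataset step summarized above can be sketched in Python. This is a minimal, illustrative sketch only: the disclosure does not specify the classifier internals, so a nearest-pattern matcher stands in for the trained classifiers, and all names (`classify`, `distance`, the toy patterns) are assumptions.

```python
# Illustrative sketch: a nearest-pattern classifier standing in for the
# trained classifiers described above. Dataset layouts are assumed to map
# an exercise label to a representative motion pattern.

def distance(a, b):
    """Euclidean distance between two equal-length motion patterns."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(sample, test_dataset, master_dataset):
    """Match a motion sample against the user-specific test dataset first,
    falling back to the pre-determined master dataset."""
    for dataset in (test_dataset, master_dataset):
        if dataset:
            return min(dataset, key=lambda label: distance(sample, dataset[label]))
    return None

# Toy patterns for a first and a second exercise activity.
test_dataset = {
    "BICEP CURL": [1.0, 2.0, 1.0],
    "SINGLE HAND TRICEPS EXTENSION": [0.2, 0.1, 0.3],
}
label = classify([0.9, 1.8, 1.1], test_dataset, {})
print(label)  # BICEP CURL
```

Preferring the test dataset over the master dataset mirrors the disclosure's emphasis on user-specific motion data; retraining would amount to updating the stored patterns with each newly labelled sample.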
[010] An example of a device for training of classifiers to identify activities based on motion data is disclosed. The device comprises at least one sensor communicatively coupled to the device. The at least one sensor captures raw motion data of a user performing a first exercise activity and a second exercise activity over a period of time. The device further comprises a memory and a processor coupled to the memory. The processor executes program instructions stored in the memory, to process the raw motion data to train classifiers to identify a motion pattern of each of the first exercise activity and the second exercise activity. The motion pattern of each of the first exercise activity and the second exercise activity is identified by comparing with a master dataset or a test dataset. The master dataset indicates a pre-determined motion pattern corresponding to the first exercise activity and the second exercise activity. Further, the test dataset indicates a real-time motion pattern, specified by the user, corresponding to the first exercise activity and the second exercise activity. Further, the first exercise activity and the second exercise activity are identified by segregating specific motion data samples corresponding to the first exercise activity and the second exercise activity, respectively, from the raw motion data. The specific motion samples are segregated using an unsupervised learning technique. Each of the specific motion samples is further processed to label the first exercise activity and the second exercise activity corresponding to the motion pattern identified. The processor further executes the program instructions stored in the memory, to receive a subsequent motion data of the user performing the exercise activity, from the at least one sensor.
The processor further executes the program instructions stored in the memory to classify the subsequent motion data corresponding to a subsequent exercise activity into the first exercise activity or the second exercise activity in one of the test dataset and the master dataset. The subsequent motion data is used to train the classifiers for identifying the subsequent exercise activity into one of the first exercise activity or the second exercise activity using a machine learning technique. The processor further executes the program instructions stored in the memory to present the label corresponding to the first exercise activity or the second exercise activity for the subsequent motion data.
[011] An example of a system for training of classifiers to identify activities based on motion data is disclosed. The system comprises at least one sensor to capture raw motion data of a user performing a first exercise activity and a second exercise activity over a period of time. The system further comprises a device coupled to the at least one sensor for transmitting the raw motion data. The system further comprises a server communicatively coupled to the device. The server comprises a memory and a processor coupled to the memory. The processor executes program instructions stored in the memory to process the raw motion data to train classifiers to identify a motion pattern of each of the first exercise activity and the second exercise activity. The motion pattern of each of the first exercise activity and the second exercise activity is identified by comparing with a master dataset or a test dataset. The master dataset indicates a pre-determined motion pattern corresponding to the first exercise activity and the second exercise activity. Further, the test dataset indicates a real-time motion pattern, specified by the user, corresponding to the first exercise activity and the second exercise activity. Further, the first exercise activity and the second exercise activity are identified by segregating specific motion data samples corresponding to the first exercise activity and the second exercise activity, respectively, from the raw motion data. The specific motion samples are segregated using an unsupervised learning technique. Each of the specific motion samples is further processed to label the first exercise activity and the second exercise activity corresponding to the motion pattern identified. The processor further executes the program instructions stored in the memory, to receive a subsequent motion data of the user performing the exercise activity, from the at least one sensor.
The processor further executes the program instructions stored in the memory to classify the subsequent motion data corresponding to a subsequent exercise activity into the first exercise activity or the second exercise activity in one of the test dataset and the master dataset. The subsequent motion data is used to train the classifiers for identifying the subsequent exercise activity into one of the first exercise activity or the second exercise activity using a machine learning technique. The processor further executes the program instructions stored in the memory to present the label corresponding to the first exercise activity or the second exercise activity for the subsequent motion data.

[012] An example of a method for deriving a motion pattern of activities based on motion data is disclosed. The method comprises capturing, by a sensor, raw motion data of a user performing an exercise activity and a non-exercise activity over a period of time. The exercise activity comprises at least a first exercise activity and a second exercise activity. The method further comprises processing, by a processor, the raw motion data to train classifiers to identify a motion pattern of each of the first exercise activity and the second exercise activity by segregating specific motion data samples corresponding to the first exercise activity and the second exercise activity, respectively, from the raw motion data. The specific motion samples are segregated using an unsupervised learning technique. Each of the specific motion samples is further processed to label the first exercise activity and the second exercise activity corresponding to the motion pattern identified. The method further comprises receiving, by the sensor, a subsequent motion data of the user performing the exercise activity. The method further comprises deriving, by the processor, the motion pattern of the subsequent motion data into the first exercise activity or the second exercise activity.
The motion pattern of the subsequent motion data derived is used to improve identification of the motion pattern of each of the first exercise activity and the second exercise activity. The method further comprises presenting the label corresponding to the first exercise activity or the second exercise activity for the subsequent motion data.
BRIEF DESCRIPTION OF FIGURES
[013] In the following drawings, like reference numbers are used to refer to like elements. Although the following figures depict various examples of the disclosure, the disclosure is not limited to the examples depicted in the figures.
[014] FIGS. 1A and 1B illustrate an environment of a device for training of classifiers to identify activities based on motion data, in accordance with one embodiment of the present disclosure;

[015] FIG. 2 illustrates a process of training the device by a user for classifying an exercise activity based on the motion data, in accordance with one embodiment of the present disclosure;

[016] FIG. 3 illustrates a process for deriving a signal attribute file for an exercise activity, in accordance with one embodiment of the present disclosure;
[017] FIG. 4 illustrates a process of training a classifier, by the device for classifying an exercise activity, in accordance with one embodiment of the present disclosure;
[018] FIG. 5A and 5B illustrate a process for identifying an exercise activity performed by a user, in accordance with one embodiment of the present disclosure;
[019] FIG. 6 illustrates an environment of a device for classifying activities based on motion data, in accordance with another embodiment of the present disclosure;
[020] FIG. 7 illustrates a process of calculating signal attributes for a training dataset, in accordance with one embodiment of the present disclosure; and

[021] FIG. 8 illustrates a method for training of classifiers to identify activities based on motion data, in accordance with another embodiment of the present disclosure.
DETAILED DESCRIPTION

[022] The following detailed description is intended to provide example implementations to one of ordinary skill in the art, and is not intended to limit the disclosure to the explicit disclosure, as one of ordinary skill in the art will understand that variations can be substituted that are within the scope of the disclosure as described.

[023] The present disclosure relates to methods and systems for training of classifiers to identify activities based on motion data. An activity indicates any movement with which a user performs a task using at least one body part, such as a hand, a leg and so on. For example, the user may lift a dumbbell using his arm. The movement of the body part may be captured as raw motion data, i.e., the motion of the body part while performing the activity. For the above example, the movement of the arm lifting the dumbbell may be captured. The raw motion data may be captured by motion sensors such as an accelerometer, a magnetometer or a gyroscope. The captured raw motion data is sent to a device for classifying the activity. At first, the device segregates the raw motion data corresponding to exercise and non-exercise activities based on repetition of values in the raw motion data.

[024] Specifically, the device uses an unsupervised learning technique, such as k-means clustering, mixture models or hierarchical clustering, for selecting specific samples of motion data. Subsequently, muscle groups within the exercise activity are identified. The motion data is further fed to a group of classifiers corresponding to the muscle group. Based on the output of the classifiers, the exercise activity is identified/predicted using a voting-based prediction. Further, the device determines whether the exercise activity is present in pre-created databases such as a test dataset or a master dataset. The test dataset comprises user-specific motion data for the exercise activity and associated labels.
The master dataset comprises standard motion data and associated labels, in addition to user-specific motion data. If the exercise activity is present in one of the pre-created databases, then a label corresponding to the classification of the exercise activity is presented to the user. Further, the classifier for the exercise activity is further trained using the motion data. The training is typically done using a supervised learning technique.
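The unsupervised segregation named above (k-means clustering) can be illustrated with a tiny one-dimensional sketch that splits per-sample motion-energy values into a high-energy (exercise) and a low-energy (non-exercise) cluster. The energy values, the two-cluster setup and the function name are assumptions for illustration, not values from the disclosure.

```python
def kmeans_1d(values, iters=20):
    """Tiny two-cluster 1-D k-means: cluster 0 starts at the minimum value
    (rest periods), cluster 1 at the maximum (exercise bursts)."""
    centroids = [min(values), max(values)]
    assignments = []
    for _ in range(iters):
        # Assign each value to its nearest centroid.
        assignments = [0 if abs(v - centroids[0]) <= abs(v - centroids[1]) else 1
                       for v in values]
        # Recompute each centroid as the mean of its members.
        for c in (0, 1):
            members = [v for v, a in zip(values, assignments) if a == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return centroids, assignments

energies = [0.1, 0.2, 5.1, 4.8, 0.15, 5.3]  # assumed per-sample motion energies
centroids, labels = kmeans_1d(energies)
print(labels)  # [0, 0, 1, 1, 0, 1] -> cluster 1 = exercise, cluster 0 = rest
```

In practice the clustering would run over multi-dimensional features of the raw motion data rather than a single scalar, but the assign-then-update loop is the same.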
[025] Referring to FIGS. 1A and 1B, an environment 100 of a device 105 for classifying activities based on motion data is shown, in accordance with one embodiment of the present disclosure. It should be understood that 'activity' may refer to an exercise activity or a non-exercise activity. For example, the activity may comprise walking, jogging, push-ups, lifting weights, lying down, eating, sleeping, sitting and so on.

[026] In one embodiment, the device 105 may be one of an electronic device, a mobile phone, a laptop, a smart watch, fitness equipment, a display device and a wearable garment. The device 105 may include at least one processor 106, a memory 107 and an Input and Output (I/O) interface 108. The at least one processor 106 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the at least one processor 106 is configured to fetch and execute computer-readable instructions stored in the memory 107.
[027] The memory 107 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes.
[028] The I/O interface 108 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like. Further, the I/O interface may enable the device 105 to communicate with other computing devices, such as web servers and external data servers (not shown). The I/O interface may facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. The I/O interface may include one or more ports for connecting a number of devices to one another or to another server.
[029] The device 105 is communicatively coupled to at least one sensor 110. The device 105 may communicate with the at least one sensor 110 through a wired or wireless technology such as Bluetooth, ZigBee, Wi-Fi, Internet of Things (IoT) and so on. The at least one sensor 110 may be one of an accelerometer, a magnetometer, a gyroscope, a Micro Electronic Mechanical System (MEMS), and a Nano Electronic Mechanical System (NEMS). In the present disclosure, the at least one sensor 110 is used for sensing the motion of the user when strapped on the arm, chest, legs, abdomen, and so on, and is therefore referred to as motion sensor 110 for the purpose of explanation. In order to classify the activities performed by a user, at first, the device 105 may detect movements performed by the user. In order to detect the movements, the motion sensor 110 may be incorporated in a wearable device such as a smart watch, a fitness band, a smart garment and so on. For example, the user may wear the smart watch comprising the motion sensor 110, as shown in FIG. 1B, while performing an activity, e.g., lifting weights. In another example, the motion sensor 110 may also be strapped onto the body of the user, using Velcro or other suitable clothing, to capture the raw motion data. The motion sensors 110 capture raw motion data continuously in real-time. The raw motion data may be captured when there is a change in values corresponding to the acceleration of the body part performing the activity. For example, accelerometers may be used to sense a change in acceleration of a body corresponding to a change in body mass and/or gravity. In other words, the accelerometer is employed to measure the acceleration in horizontal as well as vertical directions. It should be understood that a single accelerometer may be used to measure both body-mass as well as gravitational acceleration.
Alternatively, one accelerometer may be used to measure a change in body mass acceleration and another accelerometer may be used to measure a change in gravitational acceleration. Similarly, the gyroscope may be used to measure a change in orientation of the body of the user. In one example, the device 105 may be a mobile phone. As known, the mobile phone comprises a processor, a memory and an I/O interface. In addition to implementing the functionalities of the device 105, the mobile phone may include motion sensors 110 such as accelerometers. The motion sensor 110 within the mobile phone may be used to capture raw motion data from the user, in addition to other external motion sensors 110.
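One common way for a single accelerometer to serve both purposes, as described above, is to low-pass filter the stream so that the slowly varying gravitational component is separated from the body-motion acceleration. The exponential filter below and its `alpha` smoothing factor are illustrative assumptions, not a technique specified in the disclosure.

```python
def split_gravity(accel_samples, alpha=0.8):
    """Separate a single-axis accelerometer stream into a slowly varying
    gravity component (exponential low-pass) and a body-motion component
    (the remainder). alpha is an assumed smoothing factor."""
    gravity = accel_samples[0]  # seed the filter with the first reading
    gravities, body = [], []
    for a in accel_samples:
        gravity = alpha * gravity + (1 - alpha) * a
        gravities.append(gravity)
        body.append(a - gravity)  # what's left is body-mass acceleration
    return gravities, body

# A stationary sensor reads only gravity, so the body component stays near zero.
g, b = split_gravity([9.81] * 10)
```

The same split applied per axis yields a gravity vector (useful for orientation) and a linear-acceleration vector (useful as classifier input).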
[030] In order to explain training of classifiers to identify activities performed by the user, an example may be used. Consider that the user wants the device 105 to identify an activity, e.g. performing hammer curls. As known, the user may perform the hammer curls and may take smaller breaks before repeating the subsequent/successive hammer curls. It should be understood that when the user performs an exercise, the user exerts an amount of energy. Further, when the user takes the break before repeating the exercise, the time for the hiatus is considered as a non-exercise activity. As such, the exercise and non-exercise activities need to be segregated and identified separately for processing information on the exercise activities. In order to identify the activity as one of the exercise activity and the non-exercise activity, the user may train the device 105 during a training session by providing training dataset. The training dataset comprises raw motion data of the user performing an activity. For example, consider the user is performing the hammer curls for a particular duration. At the time of performing the hammer curls, the raw motion data corresponding to the activity is captured using the motion sensors 110 attached to the body of the user. The process of training the device 105 is explained using FIG. 2.
[031] Referring to FIG. 2 in conjunction with FIGS. 3 and 4, a process 200 of training the device 105 based on the motion data is shown, in accordance with one embodiment of the present disclosure. In order to train the device 105, at first, the user wears the wearable device housing the motion sensor 110.
[032] At step 205, the user initiates a training session on the device 105. The user initiates the training session using the I/O interface 108 of the device 105. In another embodiment, the user initiates the training session by operating the wearable device. It should be understood that the memory 107 may store information corresponding to a list of pre-fed exercise activities such as bicep curl, hammer curls, cycling, lifting weights, pain rehabilitation, and so on. The user may have an option to train the device 105, i.e., if the device 105 has information about the pre-fed exercise activities, then the user may train the device 105 to improve identification of an activity. Further, if the device 105 does not have information about the pre-fed exercise activities, then the user may train the device 105 to identify the new activity. For example, if the user wishes to train the device 105 for the hammer curl activity, then the user may select 'hammer curl activity' from the list and may train the device 105. Similarly, if the user wishes to train the device 105 for the bicep curl activity, then the user may select the bicep curl activity from the list and may train the device 105. It should be understood that the user may train the device 105 on any activity, such as bicep curl, hammer curls, cycling, lifting weights, pain rehabilitation, and so on, so that the device 105 identifies the activity whenever the activity is performed by the user. If the user wishes to perform an activity that is not present or stored in the memory 107, then the user may create a new activity and train the device 105. In other words, the user may provide raw motion data corresponding to the new activity for training the device 105 to identify the new activity. After selecting one of the pre-fed exercise activities or the new activity from the list, the device 105 may receive the raw motion data from the motion sensor 110.
[033] At the time of the user performing the activity, the motion sensor 110 continuously feeds the raw motion data to the device 105, as shown at step 210. It should be understood that the motion sensor 110 may send the raw motion data to the device 105 in real-time or with a time-delay, e.g., 2 minutes. After receiving the raw motion data, the device 105 performs sampling on the raw motion data. Specifically, the device 105 performs sampling on the raw motion data at a sampling rate higher than the Nyquist sampling rate in order to avoid aliasing effects. As known, the Nyquist sampling rate refers to the sampling rate at which the sampling frequency is twice the highest frequency of the raw motion data. Although the sampling is shown to be carried out in the device 105, it must be understood that the sampling may be carried out within the wearable device or in a data acquisition unit (not shown) attached to the device 105. Further, the device 105 may filter unwanted noise signals from the raw motion data, using signal filters such as Kalman filters.

[034] Further, the device 105 calculates the number of samples of raw motion data received from the motion sensor 110. Specifically, the device 105 calculates the number of times the user repeats an activity. In one example, the device 105 may be pre-configured to receive a predefined number of samples such that the samples received may be used for further processing of the raw motion data. In one example, the device 105 may be pre-configured to receive 30 samples. In another example, the device 105 may be pre-configured to receive 50 samples from the motion sensor 110. It should be understood that the number of samples that the device receives is given for explanation purposes only and should not be construed in a limiting sense.
At the time of receiving the raw motion data, the device 105 checks whether the number of samples of raw motion data has reached the predefined number, as shown in step 215. If the predefined number of samples is reached, then step 220 is performed. Otherwise, step 210 is performed, i.e., the user repeats the exercise activity till the device 105 receives the predefined number of samples of raw motion data. For example, in case of lifting weights, the user may be requested to repeat the weight lifting 30 times. Each instance of lifting the weights may be considered to be one sample, leading to a total of 30 samples. After receiving the pre-defined number of samples, the device 105 may display a notification to the user either on the I/O interface 108 or on a display portion (not shown) of the motion sensor 110, as shown at step 220. For example, the notification may include a message notifying the user to stop performing the exercise activity. For the particular set of raw motion data collected, the device 105 may create a raw motion data file comprising details of the samples collected. The device 105 may store the raw motion data file in the memory 107.
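The sample-count loop of steps 210-220 and the above-Nyquist sampling choice can be sketched as below. `PREDEFINED_SAMPLES` follows the 30-repetition example in the text; the helper names and the headroom margin are illustrative assumptions.

```python
PREDEFINED_SAMPLES = 30  # e.g. 30 weight-lifting repetitions, as in the example

def nyquist_safe_rate(max_signal_hz, margin=1.25):
    """Return a sampling rate above the Nyquist rate (twice the highest
    signal frequency) to avoid aliasing; margin is an assumed headroom."""
    return 2.0 * max_signal_hz * margin

def collect_samples(stream, target=PREDEFINED_SAMPLES):
    """Accumulate repetition samples until the predefined count is reached;
    the caller would then notify the user to stop (step 220)."""
    collected = []
    for sample in stream:
        collected.append(sample)
        if len(collected) >= target:
            break  # predefined number reached -> stop collecting
    return collected

reps = collect_samples(iter(range(100)))
print(len(reps))  # 30
```

A real implementation would also timestamp each repetition and persist the collected set as the raw motion data file described above.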
[035] At step 225, the device 105 processes the raw motion data and segregates the exercise activity and the non-exercise activity. Specifically, if the activity shows repeated spikes in the data, then the device 105 considers that activity to be an exercise activity. Similarly, if the activity has a hiatus, then the device 105 considers that activity to be a non-exercise activity. For the exercise activity, the device 105 may allow the user to provide a label depending on the activity performed. For example, if the user is training the device 105 to identify the activity of bicep curl, then the user may provide the label 'BICEP CURL' for that activity. The label provided may be used to present the classification of the exercise activity to the user, upon training. In other words, when the user performs the exercise activity of bicep curl, after the training session, the device 105 presents the label 'BICEP CURL' upon identifying the exercise activity. Similarly, for each of the exercise activities, the device 105 may allow the user to label the exercise activity as a first exercise activity, a second exercise activity, a third exercise activity and so on. For example, the user may label bicep curl as the first exercise activity, single hand triceps extension as the second exercise activity, lifting dumbbells as the third exercise activity and so on. For the purpose of explanation, the present disclosure is explained considering two exercise activities, i.e., the first exercise activity, e.g., bicep curls, and the second exercise activity, e.g., single hand triceps extension. It should be understood that the device 105 may be used to identify more than two exercise activities using the present disclosure.
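The repeated-spikes-versus-hiatus test described above can be sketched as a simple threshold-crossing count over a window of motion magnitudes. The threshold and minimum peak count are illustrative assumptions, not values from the disclosure.

```python
def is_exercise(window, threshold=1.0, min_peaks=3):
    """Label a window of motion magnitudes as exercise if it shows
    repeated spikes above a threshold; a flat window (hiatus) is
    treated as non-exercise. All constants are illustrative."""
    peaks = 0
    above = False
    for v in window:
        if v > threshold and not above:
            peaks += 1      # rising edge -> one spike
            above = True
        elif v <= threshold:
            above = False   # fell back below the threshold
    return peaks >= min_peaks

print(is_exercise([0, 2, 0, 2, 0, 2, 0]))   # True  (repeated spikes)
print(is_exercise([0, 0.1, 0.2, 0.1]))      # False (hiatus)
```

Counting rising edges rather than raw samples above the threshold makes the check robust to how long each spike lasts.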
[036] The device 105 may further associate muscle group identifiers with the exercise activity. For example, in case of bicep curl where the muscle group used comprises biceps, the muscle group identifier may be 'BICEPS'. Similarly, in case of single hand triceps extension, where the muscle group used comprises triceps, the muscle group identifier may be 'TRICEPS'. In one embodiment, the muscle group identifiers may be provided to the device 105 by the user. In another embodiment, the device 105 may assign the muscle group identifiers based on the location of motion sensors 110 or based on the type of exercise activity selected by the user. In one example, the device 105 may further create associate files for the exercise activity and the muscle group identifiers respectively. In another example, the device 105 may use the associate files corresponding to the exercise activity and the muscle group identifiers already present in the memory 107.
[037] At step 230, the device 105 derives an attribute set file from the raw motion data file, as explained using FIG. 3. Referring to FIG. 3, a process 300 for deriving an attribute set file for an exercise activity is shown, in accordance with one embodiment of the present invention. In order to derive the attribute set file, at first, the device 105 removes raw motion data corresponding to non-exercise activities from the raw motion data file.
[038] At step 305, the device 105 processes the raw motion data file to identify frames. The frame comprises a set of samples selected from the raw motion data file. In one example, the frame may comprise a pre-defined number of consecutive samples. The frame is identified based on parameters such as overlap, frame size and number of samples of raw motion data. The overlap step may represent a degree of overlap between two consecutive frames. The frame size indicates the number of samples within the frame. The number of samples of raw motion data may represent a total number of samples in the raw motion data file. Consider that the raw motion data file comprises 5000 values of raw motion data, with a moderate frame size of 150 values and a moderate overlap step of 80%. For the above example, the overlap may be calculated as:
Overlap = frame size - floor((overlap step * frame size)/100)
i.e., overlap = 150 - floor((80 * 150)/100)
= 150 - floor(120)
= 150 - 120
= 30 values
[039] Further, the number of frames may be calculated as:
Number of frames = floor(Number of samples/Overlap) - 1
i.e., number of frames = floor(5000/30) - 1 = floor(166.666) - 1
= 166 - 1
= 165
[040] In other words, the raw motion data file comprising 5000 values is divided into 165 frames. Each of the frames comprises 150 values. Based on the number of frames and the frame size, the start and end of a frame are identified. Further, the frame is extracted and duplicated.
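The frame arithmetic above can be sketched as follows. The helper name `frame_indices` is hypothetical, and clamping the last frame ends to the file length is an illustrative choice not spelled out in the disclosure:

```python
import math

def frame_indices(num_samples, frame_size, overlap_step):
    # "Overlap" here is the spacing between consecutive frame starts,
    # computed exactly as in the example above.
    overlap = frame_size - math.floor(overlap_step * frame_size / 100)
    num_frames = math.floor(num_samples / overlap) - 1
    # Each frame starts one 'overlap' further along the raw motion data;
    # the end of the final frames is clamped to the file length.
    return [(i * overlap, min(i * overlap + frame_size, num_samples))
            for i in range(num_frames)]

frames = frame_indices(5000, 150, 80)
# 165 frames; the first two are (0, 150) and (30, 180), sharing 120 values.
```

With an 80% overlap step, consecutive 150-value frames share 120 values, so each new frame advances the window by only 30 values over the raw motion data.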
[041] At step 310, the device 105 extracts a signal attribute set from each frame. The signal attribute set may comprise signal attributes in the time domain, frequency domain and wavelet domain. The calculation of signal attributes helps in separating the exercise activity from non-exercise activities. In order to differentiate the exercise activity from the non-exercise activities, the repetitive nature of raw motion data is considered. The exercise activities are assumed to be repetitive in nature, e.g., jogging is a repetitive motion of arms and legs. In case of non-exercise activities such as bathing, sitting, and so on, the motion of arms or legs is non-repetitive in nature. In order to identify the repetitive samples related to the exercise activity in the frame, a discrete Fourier series may be used. Similarly, the discrete Fourier transform may be used to identify non-repetitive samples related to non-exercise activities and transients due to noise. Based on the values of the signal attributes, the device 105 separates samples related to the exercise activity from samples related to the non-exercise activities. Upon removing samples related to the non-exercise activities, the signal attribute set of the frame is stored in the memory 107 of the device 105.
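As a sketch of how the repetitive/non-repetitive distinction might be made in the frequency domain, the following flags a frame whose discrete Fourier transform concentrates its energy in a single bin. The spectral-peak measure and the threshold of 0.3 are illustrative assumptions, not values from the disclosure:

```python
import numpy as np

def is_repetitive(frame, threshold=0.3):
    # A frame dominated by one frequency (a repetitive exercise motion)
    # concentrates its DFT energy in a single bin; noise and non-repetitive
    # motion spread the energy across the spectrum.
    frame = np.asarray(frame, dtype=float)
    spectrum = np.abs(np.fft.rfft(frame - frame.mean()))[1:]  # drop DC bin
    total = spectrum.sum()
    return bool(total > 0 and spectrum.max() / total > threshold)

t = np.arange(150)
exercise_like = np.sin(2 * np.pi * 5 * t / 150)        # 5 repetitions per frame
non_exercise = np.random.default_rng(0).normal(size=150)  # noise-like motion
# is_repetitive(exercise_like) -> True; is_repetitive(non_exercise) -> False
```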
[042] After extracting each frame, the device 105 determines whether the end of the raw motion data file is reached, as shown at step 315. In other words, the device 105 determines whether the signal attribute sets for all the frames in the raw motion data file are calculated. If the end of the raw motion data file is reached, then step 320 is performed. Otherwise, step 305 is performed to identify the start and end of the next frame. [043] At step 320, the device 105 combines the signal attribute sets corresponding to all the frames to form an attribute set file. [044] Further, the attribute set file is associated with the label for the exercise activity and the attribute set file is stored in the associate files corresponding to the exercise activity on the device 105. The attribute set file is further associated with the muscle group identifier and stored in the associate files corresponding to the muscle group identifier. In the previous example of bicep curl, the attribute set file may be stored under two associate files namely, 'BICEP CURL' and 'BICEPS'. It must be understood that the muscle group associate file 'BICEPS' may comprise attribute set files related to exercise activities associated with the muscle group, other than bicep curl, e.g., dumbbell curl. After completion of the processing, the device 105 may display a status 'Exercise processed' to the user on the I/O interface 108.
[045] In one embodiment, the memory 107 may comprise a master dataset and a test dataset. The master dataset may indicate a data store storing standardized motion data for various exercise activities along with corresponding labels. In other words, the pre-determined motion patterns corresponding to the first exercise activity and the second exercise activity are stored in the master dataset. The master dataset comprises information or data points of all the exercises. In order to build the master dataset, raw data for each exercise activity is fetched. Subsequently, features of each exercise activity are extracted and the features corresponding to each exercise activity are stored in the memory 107. In one example, the standardized motion data may be pre-fed into the memory 107 as the master dataset at the time of manufacturing the device 105. For example, the master dataset may comprise data corresponding to bicep curl as the first exercise activity, single hand triceps extension as the second exercise activity and so on. The standardized motion data may be built using motion data collected from experts such as fitness trainers. The master dataset is built in such a way that whenever the user performs the activity, the user performance may be compared with the standardized motion data and the corresponding exercise activity is automatically identified and presented to the user. On the other hand, the test dataset may indicate a data store that stores and updates the data based on the activities performed by the user in real time. In other words, the test dataset indicates a real-time motion pattern, specified by the user, corresponding to the first exercise activity and the second exercise activity. For example, at the time of training, the user may specify that lifting dumbbells is the first exercise activity and bicep curl is the second exercise activity. Similarly, the user may define or label the activities of other types.
[046] While performing the activity, the device 105 updates the master dataset and the test dataset by adding the attribute set file for the exercise activity. In other words, the master dataset and the test dataset are modified based on the latest raw motion data for the exercise activity, such as the first exercise activity and the second exercise activity. Whenever the user performs certain activity, the device 105 tries to identify the activity and classifies the activity into one of the first exercise activity and the second exercise activity. In order to identify the activity, the device 105 is trained using classifiers. Specifically, the device 105 utilizes the attribute set file to train the classifiers. As explained above, the attribute set file is associated with the label for the exercise activity and is stored in the associate files corresponding to the exercise activity on the device 105. The data in the attribute set file are used to train the classifier, as explained using FIG. 4.
[047] At step 235, training of the classifier for classifying the exercise activity starts automatically using the data in the attribute set file. In other words, the device 105 builds a learning network for the exercise activity. The process of training the classifier for the exercise activity, by the device 105, using the motion data is explained in detail using FIG. 4. Referring to FIG. 4, a process 400 of training a classifier, by the device 105, for classifying an exercise activity is shown, in accordance with one embodiment of the present disclosure. As explained earlier using FIGS. 2 and 3, the motion data for training the classifier is obtained from the attribute set file stored in the master dataset. [048] In order to identify and classify the exercise activity into one of the first exercise activity and the second exercise activity, the device 105 implements a machine learning technique. Specifically, the device 105 implements one of an unsupervised learning technique and a supervised learning technique. The unsupervised learning technique may include, but is not limited to, k-means clustering, mixture models and hierarchical clustering. [049] When the unsupervised learning technique is used, the device 105 may select specific samples from the motion data received, as shown in step 405. In order to select the specific samples of the motion data, a neural network model may be used. For example, if k-means clustering is used, then the specific samples of the motion data may be obtained by calculating a centroid for the motion data in a cluster. Specifically, the cluster may comprise a plurality of motion data and initial means. The initial means refer to a random mean initially assigned to the cluster of motion data. The specific samples of motion data selected comprise a training dataset and a testing dataset. The training dataset indicates data that is fed to the device 105 at the time of manufacturing.
In other words, the training dataset indicates the data corresponding to the exercise activities that are classified to identify the exercise activities as the first exercise activity, the second exercise activity and so on at the initial stage. The training dataset comprises a set of input values and a corresponding set of desired output values. The testing dataset comprises a set of values for validating properties of the classifiers after training. The properties of the classifier may comprise accuracy of training and classification errors. In one example, the device 105 may select 70% of the motion data as the training dataset and the remaining 30% of the motion data as the testing dataset. In another example, the device 105 may select 90% of the motion data as the training dataset and the remaining 10% of the motion data as the testing dataset. [050] In order to train the classifier to classify the exercise activity as one of the first exercise activity and the second exercise activity, the training dataset is fed to the classifier, as shown at step 410. The classifier may use a supervised learning network that includes one of an Artificial Neural Network (ANN), boosted tree networks and a Random forest network. Consider, for example, that the device 105 uses the artificial neural network for the classification. The artificial neural network may be a multi-layer perceptron (MLP). The multi-layer perceptron comprises a first layer comprising input nodes, a second layer comprising hidden nodes and a third layer comprising output nodes. Each input node is connected to each of the hidden nodes. Further, each hidden node is connected to each of the output nodes. The hidden nodes in the second layer are used as processing elements. More specifically, the hidden nodes perform a non-linear activation function on a weighted sum of inputs received from the input nodes.
The output of the non-linear activation function, available at the output node, is a class label, i.e., the label corresponding to the exercise activity. For example, if the exercise activities include Zottman curl and bicep curl, then the device 105 may perform the unsupervised learning technique followed by the supervised learning technique and may derive a motion pattern to classify the Zottman curl activity as the first exercise activity and the bicep curl activity as the second exercise activity. Subsequently, the device 105 may label the Zottman curl activity as the first exercise activity and the bicep curl activity as the second exercise activity. Although the present disclosure is explained considering that the unsupervised learning technique is performed followed by the supervised learning technique for classifying the exercise activity, it should be understood that the supervised learning technique alone can be performed for classifying the exercise activity, and such implementations are within the scope of the present disclosure.
[051] Based on the training dataset and the testing dataset, the device 105 may train the classifier using the training dataset with the help of the supervised learning technique, as shown in step 415. Referring to the above example, the building of the ANN for classifying the exercise activity may be explained using the backpropagation algorithm. At first, the classifier uses the training dataset of the exercise activity as examples for training. Further, the classifier is trained to produce the classification corresponding to the exercise activity as output. In each iteration of the training, the output of the ANN is compared with the correct classification. If the output of the ANN is different from the correct classification, then the weights in the ANN are adjusted to produce the correct classification as output.
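A minimal sketch of such a multi-layer perceptron trained by backpropagation follows, using synthetic two-dimensional attribute values standing in for two exercise activities and a 70/30 split into training and testing datasets as described above. The network size, learning rate and data are illustrative assumptions, not values from the disclosure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic attribute sets: two well-separated clusters standing in for
# the first and second exercise activities (hypothetical data).
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)),
               rng.normal(3.0, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# 70/30 split into a training dataset and a testing dataset.
idx = rng.permutation(len(X))
train, test = idx[:70], idx[70:]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 nodes; a single sigmoid output node whose value
# above/below 0.5 selects the class label.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

for _ in range(2000):
    # Forward pass: weighted sums followed by non-linear activations.
    h = sigmoid(X[train] @ W1 + b1)
    p = sigmoid(h @ W2 + b2).ravel()
    # Backpropagation: adjust weights where output differs from the label.
    d_out = ((p - y[train]) / len(train))[:, None]
    dW2, db2 = h.T @ d_out, d_out.sum(0)
    d_h = (d_out @ W2.T) * h * (1 - h)
    dW1, db1 = X[train].T @ d_h, d_h.sum(0)
    W1 -= 0.5 * dW1; b1 -= 0.5 * db1
    W2 -= 0.5 * dW2; b2 -= 0.5 * db2

# Validate the trained classifier on the held-out testing dataset.
pred = (sigmoid(sigmoid(X[test] @ W1 + b1) @ W2 + b2).ravel() > 0.5).astype(int)
test_accuracy = float((pred == y[test]).mean())
```

The final accuracy on the testing dataset corresponds to the validation performed at step 420.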
[052] At step 420, the classifier is tested for accuracy of training and classification errors using the testing dataset. [053] The above process of training the classifier is performed for each exercise activity. In other words, a classifier is maintained for each of the exercise activities, thereby leading to a plurality of classifiers for a plurality of exercise activities. For example, classifiers such as an Artificial Neural Network (ANN), boosted tree networks and a Random forest network may be used to classify the exercise activity of bicep curl. In another example, classifiers such as an Artificial Neural Network (ANN) and boosted tree networks may be used to classify the exercise activity of hammer curls. It should be understood that one or more classifiers may be used to classify one exercise activity. Further, a single classifier may be used to classify one or more exercise activities.
[054] Upon training the classifier, the device 105 starts tracking the activities performed by the user. Subsequently, whenever the user performs a subsequent activity, the device 105 identifies subsequent motion data and analyzes the subsequent motion data to identify the type of exercise activity, i.e., the first exercise activity or the second exercise activity. After identifying the type, the device 105 presents the label corresponding to the exercise activity to the user. [055] At step 240, the device 105 determines whether the user wishes to train the device 105 for classifying another exercise activity. If the user wishes to train the device 105 for classifying another exercise activity, then step 210 is performed. Otherwise, step 245 is performed. The user may train the device 105 to classify a plurality of exercise activities such as walking, jogging, running, lifting weights, doing push-ups, planks and so on.
[056] At step 245, the device 105 continues tracking the activities performed by the user.
[057] Referring to FIGS. 5A and 5B, in conjunction with FIG. 1, a process 500 for identifying an exercise activity performed by a user is shown, in accordance with one exemplary embodiment of the present disclosure. Specifically, the process 500 is presented to identify a label for a subsequent exercise activity performed by the user. As presented earlier, the memory 107 comprises the master dataset and the test dataset comprising standardized motion data and user-defined motion data, respectively. Further, two or more exercise activities identified along with labelling of the exercise activities are stored in the master dataset and the test dataset. At first, the exercise activity performed is identified and the presence of the exercise activity in one of the master dataset and the test dataset is determined. Subsequently, the label corresponding to the exercise activity is displayed to the user. The process of identifying and labelling the subsequent exercise activity based on the subsequent motion data is explained using FIGS. 5A and 5B. [058] As mentioned earlier, at first, the user wears a wearable device comprising at least one motion sensor. At step 505, the motion sensors 110 attached to the body of the user capture raw motion data of a subsequent activity. The raw motion data of the subsequent activity, i.e., the subsequent motion data, may correspond to an exercise activity or a non-exercise activity. For example, the exercise activity may be the user lifting weights in a gym. In between lifting weights, the user may perform a non-exercise activity, e.g., drinking water. The raw motion data is further sent to the device 105 for further sampling and processing.
[059] At step 510, the device 105 separates the exercise activity and the non-exercise activity from the subsequent activity. In order to separate the exercise activity and the non-exercise activity, the device 105 combines the raw motion data of the subsequent activity to form a raw motion data file. Further, the device 105 calculates signal attributes from raw motion data file of the subsequent activity as explained earlier using FIG. 3. Based on the signal attributes, the device 105 separates the exercise activity and the non-exercise activities from the subsequent activity. In other words, the device 105 removes non-repetitive samples from repetitive samples to retain only the samples, i.e., the motion data related to the exercise activity.
[060] At step 515, the device 105 determines the muscle group of the exercise activity. In one embodiment, the muscle group of the exercise activity is specified by the user. For example, the user may specify the muscle group of the exercise as 'biceps'. In another embodiment, the muscle group is determined by identifying the motion sensor 110 from which the raw motion data of the subsequent activity is received. For each muscle group, there is a corresponding set of classifiers for related exercise activities. For example, the exercise activities related to biceps may comprise chin-up, hammer curl, Zottman curl and single-arm curl, i.e., corresponding to biceps, there may be a first classifier for chin-up, a second classifier for hammer curl, a third classifier for Zottman curl and so on.
[061] At step 520, the motion data of the exercise activity is fed to the classifiers corresponding to each of the exercise activities of the muscle group. For instance, if the exercise activity performed by the user is chin-up, then the motion data is fed to the classifiers corresponding to biceps, i.e., to the classifiers corresponding to chin-up, hammer curl, Zottman curl and single-arm curl. Subsequently, the classifiers produce classifications for the motion data.
[062] At step 525, the best classification among all the classifications produced by the classifiers is selected based on voting-based prediction. The voting-based prediction selects the classification most often predicted by the classifiers. In other words, the device 105 predicts the labelling of the motion pattern or motion data as the first exercise activity or the second exercise activity, by analyzing the specific motion data samples using the voting-based prediction. Consider, for example, that the user is performing chin-up. The motion data corresponding to the user performing chin-up is provided to, say, seven classifiers within the muscle group 'biceps'. As explained above, each of the classifiers is trained to identify and classify the type of activity into different exercise activities. Accordingly, the motion data of the user performing the chin-up exercise is provided to the seven classifiers. Each of the seven classifiers predicts the exercise activity based on a correlation coefficient. The correlation coefficient indicates a measure of the degree of closeness of the exercise activity to a pre-fed exercise activity. If the exercise activity performed by the user is similar to a pre-fed exercise activity, then the motion data has a high correlation coefficient with the pre-fed exercise activity. Consequently, the classifier may predict the exercise activity with high accuracy. [063] After providing the motion data to each classifier, the classifiers identify or predict the type of activities based on the correlation coefficient with respect to the chin-up exercise. As is known, trained classifiers may not classify the activities accurately due to several reasons such as insufficient data points, noise in raw motion data and so on. In order to avoid inaccurate classification of the exercise activities, the device 105 employs all the available classifiers to identify and classify the exercise activities.
After employing the classifiers, the type of exercise activity that most of the available classifiers predict is selected. In order to explain voting-based prediction, an example may be used.
[064] Considering the above example of motion data of the chin-up exercise activity, the motion data is fed to the available classifiers, e.g., seven classifiers. Upon employing the seven classifiers, consider that four out of the seven classifiers predict or classify the activity as 'chin-up' with accuracies of 60%, 80%, 95% and 72%. Two out of the remaining three classifiers predict the activity as hammer curl with accuracies of 53% and 45%. The remaining one classifier predicts the activity as single-arm curl with 67% accuracy. Subsequently, the device 105 selects the label of the activity predicted by the highest number of classifiers. In the present example, four out of seven classifiers predicted the exercise activity as chin-up. Consequently, the device 105 classifies the exercise activity as 'chin-up', since the highest number of classifiers voted for or identified the chin-up exercise activity.
[065] From the above example, consider that three out of the seven classifiers predict or vote for the exercise activity as 'chin-up' with accuracies of 60%, 80%, and 72%, three out of the remaining four classifiers predict the activity as hammer curl with accuracies of 95%, 53% and 45%, and the remaining one classifier predicts the activity as single-arm curl with 67% accuracy. For the above case, the percentage accuracy of prediction associated with each of the classifiers may be analyzed. In the present example, since three classifiers predict the activity as chin-up and another three classifiers predict a different activity, e.g., hammer curl, the percentage accuracies are analyzed to determine the correct classification for the exercise activity. Based on the percentage accuracies of prediction associated with each of the tied sets of classifiers, i.e., the three classifiers with 60%, 80%, and 72% outweighing the three with 95%, 53% and 45%, the device 105 may classify or predict the exercise activity as 'chin-up'.
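The voting-based prediction in the two examples above can be sketched as follows. The function name `vote` and the (label, accuracy) representation of each classifier's prediction are hypothetical:

```python
from collections import Counter

def vote(predictions):
    # predictions: one (label, percentage accuracy) pair per classifier.
    counts = Counter(label for label, _ in predictions)
    top_count = max(counts.values())
    tied = [label for label, c in counts.items() if c == top_count]
    if len(tied) == 1:
        # Clear majority: pick the label predicted by the most classifiers.
        return tied[0]
    # Tie: fall back to the summed percentage accuracies of the tied labels.
    return max(tied, key=lambda lab: sum(a for l, a in predictions if l == lab))

seven = [("chin-up", 60), ("chin-up", 80), ("chin-up", 95), ("chin-up", 72),
         ("hammer curl", 53), ("hammer curl", 45), ("single-arm curl", 67)]
# vote(seven) -> 'chin-up' (four of seven classifiers)

tied_case = [("chin-up", 60), ("chin-up", 80), ("chin-up", 72),
             ("hammer curl", 95), ("hammer curl", 53), ("hammer curl", 45),
             ("single-arm curl", 67)]
# vote(tied_case) -> 'chin-up' (60 + 80 + 72 = 212 beats 95 + 53 + 45 = 193)
```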
[066] The voting-based prediction is useful when the activity performed is similar to the activities defined in the master dataset. As can be understood from the above, the correlation coefficient of a certain exercise activity may be low. In order to predict the exercise activity that has a low correlation coefficient value, the device 105 invokes all the classifiers. Further, the device 105 checks which classifier provides the highest correlation coefficient, and accordingly chooses the label identified by that classifier.
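A correlation coefficient between captured motion data and a pre-fed exercise template might be computed as below. Pearson correlation is an illustrative choice here; the disclosure does not fix the exact measure, and the signals are synthetic:

```python
import numpy as np

def correlation_coefficient(sample, template):
    # Degree of closeness between a captured motion sample and a pre-fed
    # template of the same length; close to 1.0 for near-identical motion.
    return float(np.corrcoef(sample, template)[0, 1])

t = np.linspace(0, 1, 100)
template = np.sin(2 * np.pi * 3 * t)            # pre-fed repetitive motion
same = np.sin(2 * np.pi * 3 * t + 0.05)         # nearly the same motion
different = np.random.default_rng(1).normal(size=100)  # unrelated motion
# correlation_coefficient(template, same) is close to 1;
# correlation_coefficient(template, different) is much lower.
```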
[067] Whenever the user performs the exercises, the device 105 tries to classify the exercise in the test dataset such that the exercises labelled by the user are identified and the classification of the exercise is improved. If the exercise performed is not close to the exercises present in the test dataset, then the device 105 tries to classify the exercise based on the master dataset. In order to classify the exercise, the device 105 invokes the classifiers that are trained to identify the exercise activity. At step 530, the device 105 checks whether the exercise activity is present in the test dataset. In other words, the subsequent exercise activity is checked to determine whether the subsequent exercise activity is similar to the exercise activity present in the test dataset or the master dataset. The exercise activity is looked up in the test dataset using the classification to identify the corresponding label. If the exercise activity is present in the test dataset, then steps 535 and 540 are performed. Otherwise, steps 545 through 570 are performed.
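The lookup order of steps 530 and 545 can be sketched as follows, treating each dataset as a mapping from a classification to its label. The function name `identify_label` and the dictionary representation are hypothetical:

```python
def identify_label(classification, test_dataset, master_dataset):
    # Prefer the user-labelled test dataset; fall back to the standardized
    # master dataset; None signals a new, unlabelled activity (step 565).
    if classification in test_dataset:
        return test_dataset[classification]
    if classification in master_dataset:
        return master_dataset[classification]
    return None

test_dataset = {"bicep_curl": "BICEP CURL"}
master_dataset = {"bicep_curl": "BICEP CURL", "push_up": "PUSH-UPS"}
# identify_label("push_up", test_dataset, master_dataset) -> 'PUSH-UPS'
# identify_label("plank", test_dataset, master_dataset) -> None (new activity)
```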
[068] After checking the subsequent exercise activity with the exercise activity present in the test dataset, the device 105 presents the label corresponding to the exercise activity if the subsequent exercise activity matches with the exercise activity, as shown at step 535. Subsequently, the device 105 displays the label to the user on the I/O interface 108. For example, if the exercise activity is chin-up, then the label corresponding to the classification obtained from the classifier, i.e., 'CHIN UP', may be presented to the user.
[069] At step 540, the device 105 uses the subsequent motion data of the subsequent exercise activity, to train the classifier corresponding to the subsequent exercise activity. In other words, the subsequent motion data is added to the existing motion data and the combined data is used to train the classifier. As explained above, the classifiers may be trained using a supervised learning technique. Referring to the previous example of performing chin-up, the classifiers corresponding to chin-up are trained using the subsequent motion data of the user performing chin-up. Specifically, the training of the classifier is performed considering the subsequent motion data along with the exercise activity available in the test dataset, as explained using FIG. 4.
[070] Similarly, the device 105 checks whether the exercise activity corresponding to the subsequent motion data is present in the master dataset, as shown at step 545. In one embodiment, the device 105, at first, checks whether the subsequent exercise activity matches with the exercise activity in the test dataset, as shown at step 530. If the subsequent exercise activity does not match with the exercise activity in the test dataset, then the subsequent exercise activity is matched with the exercise activity in the master dataset. If the exercise activity is found in the master dataset, then steps 550 through 560 are performed. Otherwise, steps 565 and 570 are performed.
[071] At step 550, the device 105 presents the label corresponding to the exercise activity, as obtained from the master dataset, to the user. Further, the device 105 may provide a notification to the user, such as 'We detected the activity as PUSH-UPS from the master data'.
[072] At step 555, the device 105 trains the classifier using the subsequent exercise activity, i.e., the subsequent motion data, and the label presented, by employing the supervised learning technique. In order to explain training of the classifier using subsequent motion data, an example may be used. Consider that the classifier is trained to classify an exercise activity, e.g., hammer curls, after receiving 30 samples of raw motion data. For the subsequent exercise activity, say the 31st sample, of hammer curl performed at a later time, e.g., after seven days, the device 105 employs the classifiers to identify and classify the subsequent exercise activity as the hammer curl activity. After identifying the subsequent exercise activity, i.e., the 31st sample, the device 105 may add the 31st sample to the previous 30 samples to identify the next sample, i.e., the 32nd sample. In other words, the device 105 considers all the available samples to predict the type of the next activity and improves the accuracy of the classifiers. [073] At step 560, the exercise activity with the subsequent motion data and the label obtained from the test dataset or the master dataset are added to the master dataset. Further, the device 105 provides an option for the user to train the classifier, for identifying the exercise activity with improved accuracy. The classifier is trained by the user as explained previously, using FIGS. 2, 3 and 4.
[074] At step 565, the device 105 detects that the exercise activity is not identified either in the test dataset or the master dataset. As a result, the device 105 presents the activity as a new activity. Further, the device 105 may prompt the user to assign a label for the new activity, as shown in step 570. After the user assigns the label, steps 555 and 560 are performed and the new exercise activity is added to the test dataset. [075] It must be understood that every subsequent exercise activity added to the test dataset is automatically added to the master dataset. Further, if the motion data corresponding to the exercise activity in the test dataset is found to be more accurate, then the motion data in the master dataset is updated using the more accurate motion data. The process of updating the master dataset using the more accurate motion data from the test dataset may be performed on a regular basis, e.g., every month or every year.
[076] Although in the present disclosure, the device 105 is said to be trained for exercise activities, it must be understood that in addition to the exercise activities, the device 105 may also be trained for non-exercise activities such as activities related to pain rehabilitation, physiotherapy and so on.
[077] Referring to FIG. 6, an environment 600 of a device 605 for classifying activities based on motion data is shown, in accordance with another embodiment of the present disclosure. The device 605 is communicatively coupled to at least one motion sensor 610. The device 605 is further communicatively coupled to a server 615. The server 615 may include at least one processor (not shown) and a memory (not shown). The at least one processor may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the at least one processor is configured to fetch and execute computer-readable instructions stored in the memory.
[078] The memory may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. [079] The device 605 communicates with the server 615 over a network 620. It may be understood that the server 615 may also be implemented in a variety of computing systems, such as a mainframe computer, a network server, cloud, and the like. [080] In one implementation, the network 620 may be a wireless network, a wired network or a combination thereof. The network 620 can be implemented as one of the different types of networks, such as intranet, local area network (LAN), wide area network (WAN), the internet, and the like. The network 620 may either be a dedicated network or a shared network. The shared network represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), and the like, to communicate with one another. Further the network 620 may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, and the like. [081] In the present embodiment, the process of training classifiers and classifying of exercise activities performed by a user are implemented in the server 615. Specifically, the processing of the motion data is performed on the server 615, rather than on the device 605. In order to process the motion data on the server 615, the device 605 receives the raw motion data from the motion sensor 610. Further, the device 605 sends the raw motion data to the server 615 over the network 620 for further processing. 
In addition, the server 615 maintains the classifiers, the test dataset and the master dataset corresponding to the user. Similarly, the server 615 may maintain classifiers and master datasets corresponding to a plurality of users. Although in the present embodiment the processing is shown to occur in real-time, the device 605 may instead store the raw motion data from the user offline and upload the raw motion data to the server 615 when the next online session is detected.
[082] Referring to FIG. 7, a process 700 of updating the master dataset and the test dataset during a training session for an exercise activity is explained, in accordance with another embodiment of the present disclosure.
[083] The process begins at step 705.

[084] At step 710, the user selects an exercise activity to be trained on the device 605. Further, the device 605 sends a user identification number, a muscle group identifier and a label corresponding to the exercise activity to the server 615. As explained using FIG. 2, the label may be provided by the user at the time of training. Further, in order to train the device 605, the user wears the motion sensor 610 and repeatedly performs the exercise activity. The user performs the exercise activity until the device 605 receives enough samples of motion data for training. The device 605 collects the motion data corresponding to the exercise activity from the motion sensor 610. Based on the motion data, the device 605 creates a raw motion data file.
[085] At step 715, the device 605 uploads the raw motion data file to the server 615.
[086] At step 720, the server 615 reads the motion data from the raw motion data file.

[087] At step 725, the server 615 processes the raw motion data file to identify a frame. The frame is identified based on overlap, frame size and number of samples of motion data, as explained earlier using FIG. 3. Further, the frame is extracted and duplicated.
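The frame identification of step 725 amounts to sliding a fixed-size window over the motion samples with a configured overlap. A minimal sketch follows; the disclosure does not fix the frame size or overlap, so the values `frame_size=100` and 50% overlap are illustrative assumptions.

```python
def extract_frames(samples, frame_size=100, overlap=0.5):
    """Split a stream of motion samples into overlapping fixed-size frames.

    Illustrative sketch of step 725: frames are identified based on the
    frame size, the overlap, and the number of available samples. The
    default frame size and overlap are assumptions, not values from the
    disclosure.
    """
    step = max(1, int(frame_size * (1 - overlap)))  # samples to advance per frame
    return [samples[start:start + frame_size]
            for start in range(0, len(samples) - frame_size + 1, step)]
```

With 300 samples, a frame size of 100 and 50% overlap, this yields frames starting at samples 0, 50, 100, 150 and 200; any trailing partial frame is dropped.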
[088] At step 730, the server 615 extracts a signal attribute set from the frame. The signal attribute set may comprise time domain attributes, frequency domain attributes and wavelet domain attributes. Further, the signal attribute set is used to separate exercise activity from non-exercise activity, as explained using FIG. 3. Further, the signal attribute sets corresponding to the exercise activity are stored in the memory of the server 615.
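An attribute set of the kind described in step 730 can be sketched as follows. The particular attributes chosen here (mean, variance, RMS, dominant frequency bin) are common examples, not an enumeration from the disclosure, and wavelet-domain attributes are omitted for brevity.

```python
import math

def signal_attributes(frame):
    """Compute an illustrative signal attribute set for one frame.

    A minimal sketch of step 730 with a few typical time-domain and
    frequency-domain attributes; the disclosure does not enumerate the
    exact attributes used.
    """
    n = len(frame)
    mean = sum(frame) / n
    variance = sum((x - mean) ** 2 for x in frame) / n          # time domain
    rms = math.sqrt(sum(x * x for x in frame) / n)              # time domain

    # Naive DFT magnitude per bin; the DC bin is skipped when picking a peak.
    def magnitude(k):
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        return math.hypot(re, im)

    dominant = max(range(1, n // 2 + 1), key=magnitude)         # frequency domain
    return {"mean": mean, "variance": variance, "rms": rms,
            "dominant_freq_bin": dominant}
```

For a pure sinusoid spanning five full cycles in a 100-sample frame, the dominant frequency bin is 5 and the mean is (numerically) zero, so repetitive exercise motion and aperiodic non-exercise motion produce clearly different attribute sets.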
[089] At step 735, the server 615 determines whether the end of the raw motion data file is reached. In other words, the server 615 determines whether the signal attribute sets are derived for all the frames in the raw motion data file. If the end of the file is reached, then step 740 is performed. Otherwise, step 725 is performed to identify the next frame.

[090] At step 740, the server 615 combines the signal attribute sets corresponding to all the frames to form an attribute set file. Further, the attribute set file is associated with the label for the exercise activity and stored in a folder corresponding to the exercise activity on the server 615. The attribute set file is further associated with the muscle group identifier and stored in a folder corresponding to the muscle group identifier.
[091] At step 745, the server 615 sends the status 'Exercise processed' to the device 605.
[092] At step 750, the server 615 updates the master dataset and test dataset using the attribute set file. In other words, the master dataset and the test dataset are modified based on the latest motion data for the exercise activity.
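The dataset update of step 750 can be sketched as follows, assuming each dataset is a dictionary mapping an exercise label to its stored attribute sets; the disclosure does not specify the storage layout, so this representation is an assumption.

```python
def update_datasets(master_dataset, test_dataset, attribute_sets, label):
    """Append the latest attribute sets for an exercise under its label.

    Minimal sketch of step 750: the test dataset holds the user-specific
    motion patterns, and the master dataset holds patterns usable across
    users. Both are modeled here as label -> list-of-attribute-sets dicts,
    which is an illustrative choice.
    """
    test_dataset.setdefault(label, []).extend(attribute_sets)    # user-specific patterns
    master_dataset.setdefault(label, []).extend(attribute_sets)  # shared patterns
    return master_dataset, test_dataset
```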
[093] The process ends at step 755.

[094] The use of a test dataset for storing user-specific motion data, and further using the test dataset for training classifiers, ensures accurate classification of the exercise activity performed by the user. Further, the master dataset helps in classification of exercise activities previously untrained by the user. Continuous training of classifiers using subsequent motion data from the user improves the accuracy of classification. Further, based on the motion data obtained from the user and the classification, suggestions may be given to the user. In addition, based on the muscle group and the motion data, exercise equipment may be controlled.
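One way to turn the per-frame outputs of the trained classifiers into a single activity label is the voting-based prediction referenced in this disclosure: each classified frame contributes one vote and the majority label wins. A minimal sketch follows; the tie-breaking behavior (the label that first reaches the top count wins) is an illustrative choice.

```python
from collections import Counter

def voting_prediction(frame_labels):
    """Pick the final exercise label by majority vote over per-frame labels.

    Sketch of a voting-based prediction: Counter.most_common orders equal
    counts by first occurrence, which serves as the (assumed) tie-breaker.
    """
    return Counter(frame_labels).most_common(1)[0][0]
```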
[095] Referring to FIG. 8, a method 800 for training of classifiers to identify an activity based on motion data is shown, in accordance with one embodiment of the present disclosure. The order in which the method 800 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method 800 or alternate methods. Additionally, individual blocks may be deleted from the method 800 without departing from the spirit and scope of the disclosure described herein.

[096] The process begins at step 805.

[097] At step 810, raw motion data of a user performing an exercise activity and a non-exercise activity over a period of time are captured.
[098] At step 815, the raw motion data are processed to identify a motion pattern of each of the first exercise activity and the second exercise activity.
[099] At step 820, a subsequent motion data of the user performing the exercise activity is received.

[0100] At step 825, the motion pattern of the subsequent motion data is derived into the first exercise activity or the second exercise activity.
[0101] At step 830, the first exercise activity or the second exercise activity derived is classified into one of the test dataset and the master dataset.
[0102] At step 835, the label corresponding to the first exercise activity or the second exercise activity for the subsequent motion data is presented.
[0103] The process ends at step 840.
[0104] Although embodiments of a system and method for training of classifiers to identify an activity based on motion data have been described in language specific to features and/or methods, it is to be understood that the description is not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed as examples of implementations of a system and method for training of classifiers to identify an activity based on motion data.

Claims

CLAIMS:
1. A method for training of classifiers for identifying activities based on motion data, the method comprising:
capturing, by a sensor, raw motion data of a user performing an exercise activity and a non-exercise activity over a period of time, wherein the exercise activity comprises at least a first exercise activity and a second exercise activity;
processing, by a processor, the raw motion data to train classifiers, wherein the classifiers are trained to identify a motion pattern of each of the first exercise activity and the second exercise activity, wherein the motion pattern of each of the first exercise activity and the second exercise activity is identified by comparing the motion pattern with a master dataset or a test dataset, wherein the master dataset indicates a pre-determined motion pattern corresponding to the first exercise activity and the second exercise activity, wherein the test dataset indicates a real-time motion pattern, specified by the user, corresponding to the first exercise activity and the second exercise activity, wherein the first exercise activity and the second exercise activity are identified by segregating specific motion data samples corresponding to the first exercise activity and the second exercise activity, respectively, from the raw motion data, and wherein each of the specific motion samples is further processed to label the first exercise activity and the second exercise activity corresponding to the motion pattern identified;
receiving, by the sensor, a subsequent motion data of the user performing the exercise activity;
classifying, by the processor, the subsequent motion data corresponding to a subsequent exercise activity into the first exercise activity or the second exercise activity in one of the test dataset and the master dataset, wherein the subsequent motion data is used to train the classifiers for identifying the subsequent exercise activity into one of the first exercise activity or the second exercise activity using a machine learning technique; and
presenting, by the processor, the label corresponding to the first exercise activity or the second exercise activity for the subsequent motion data.
2. The method as claimed in claim 1, wherein the raw motion data and the subsequent motion data of the user performing the first exercise activity and the second exercise activity are captured by tracking acceleration, angular velocity, and altimeter data from the sensor worn by the user.
3. The method as claimed in claim 1, wherein the specific motion samples are segregated using an unsupervised learning technique comprising one of k-means clustering, mixture models, and hierarchical clustering.
4. The method as claimed in claim 3, wherein each of the specific motion samples is processed using a supervised learning technique.
5. The method as claimed in claim 4, wherein the supervised learning technique comprises one of a neural network, a boosted tree network, and a random forest network.
6. The method as claimed in claim 5, further comprising predicting labelling of the motion pattern as the first exercise activity or the second exercise activity, by analyzing the supervised learning technique using a voting based prediction.
7. A device for training of classifiers for identifying activities based on motion data, the device comprising:
at least one sensor communicatively coupled to the device, wherein the at least one sensor captures raw motion data of a user performing a first exercise activity and a second exercise activity over a period of time, and
wherein the device further comprises a memory and a processor coupled to the memory, wherein the processor executes program instructions stored in the memory, to:
process the raw motion data to train classifiers, wherein the classifiers are trained to identify a motion pattern of each of the first exercise activity and the second exercise activity, wherein the motion pattern of each of the first exercise activity and the second exercise activity is identified by comparing the motion pattern with a master dataset or a test dataset, wherein the master dataset indicates a pre-determined motion pattern corresponding to the first exercise activity and the second exercise activity, wherein the test dataset indicates a real-time motion pattern, specified by the user, corresponding to the first exercise activity and the second exercise activity, wherein the first exercise activity and the second exercise activity are identified by segregating specific motion data samples corresponding to the first exercise activity and the second exercise activity, respectively, from the raw motion data, and wherein each of the specific motion samples is further processed to label the first exercise activity and the second exercise activity corresponding to the motion pattern identified;
receive a subsequent motion data of the user performing the exercise activity, from the at least one sensor;
classify the subsequent motion data corresponding to a subsequent exercise activity into the first exercise activity or the second exercise activity in one of the test dataset and the master dataset, wherein the subsequent motion data is used to train the classifiers for identifying the subsequent exercise activity into one of the first exercise activity or the second exercise activity using a machine learning technique; and
present the label corresponding to the first exercise activity or the second exercise activity for the subsequent motion data.
8. The device as claimed in claim 7, wherein the at least one sensor comprises one of an accelerometer, a magnetometer, a gyroscope, a Micro Electronic Mechanical System (MEMS), and a Nano Electronic Mechanical System (NEMS).
9. The device as claimed in claim 7, wherein the device is one of an electronic device, a mobile phone, a laptop, a smart watch, a fitness equipment, and a wearable garment.
10. The device as claimed in claim 7, wherein the device predicts the labelling of the motion pattern as the first exercise activity or the second exercise activity, by analyzing the specific motion samples using a voting based prediction.
11. A system for training of classifiers for identifying activities based on motion data, the system comprising:
at least one sensor to capture raw motion data of a user performing a first exercise activity and a second exercise activity over a period of time;
a device coupled to the at least one sensor for transmitting the raw motion data; and a server communicatively coupled to the device, wherein the server comprises a memory and a processor coupled to the memory, wherein the processor executes program instructions stored in the memory, to:
process the raw motion data to train classifiers, wherein the classifiers are trained to identify a motion pattern of each of the first exercise activity and the second exercise activity, wherein the motion pattern of each of the first exercise activity and the second exercise activity is identified by comparing the motion pattern with a master dataset or a test dataset, wherein the master dataset indicates a pre-determined motion pattern corresponding to the first exercise activity and the second exercise activity, wherein the test dataset indicates a real-time motion pattern, specified by the user, corresponding to the first exercise activity and the second exercise activity, wherein the first exercise activity and the second exercise activity are identified by segregating specific motion data samples corresponding to the first exercise activity and the second exercise activity, respectively, from the raw motion data, wherein the specific motion samples are segregated using an unsupervised learning technique, and wherein each of the specific motion samples is further processed to label the first exercise activity and the second exercise activity corresponding to the motion pattern identified;
receive a subsequent motion data of the user performing the exercise activity, from the at least one sensor;
classify the subsequent motion data corresponding to a subsequent exercise activity into the first exercise activity or the second exercise activity in one of the test dataset and the master dataset, wherein the subsequent motion data is used to train the classifiers for identifying the subsequent exercise activity into one of the first exercise activity or the second exercise activity using a machine learning technique; and
present the label corresponding to the first exercise activity or the second exercise activity for the subsequent motion data.
12. The system as claimed in claim 11, wherein the at least one sensor comprises one of an accelerometer, a magnetometer, a gyroscope, a Micro Electronic Mechanical System (MEMS), and a Nano Electronic Mechanical System (NEMS).
13. The system as claimed in claim 11, wherein the device is one of an electronic device, a mobile phone, a laptop, a smart watch, a fitness equipment, a display device and a wearable garment.
14. The system as claimed in claim 11, wherein the processor further executes the program instructions to predict labelling of the motion pattern as the first exercise activity or the second exercise activity, by analyzing the specific motion samples using a voting based prediction.
15. A method for deriving motion pattern of activities based on motion data, the method comprising:
capturing, by a sensor, raw motion data of a user performing an exercise activity and a non-exercise activity over a period of time, wherein the exercise activity comprises at least a first exercise activity and a second exercise activity;
processing, by a processor, the raw motion data to train classifiers, wherein the classifiers are trained to identify a motion pattern of each of the first exercise activity and the second exercise activity by segregating specific motion data samples corresponding to the first exercise activity and the second exercise activity, respectively, from the raw motion data, wherein the specific motion samples are segregated using an unsupervised learning technique, and wherein each of the specific motion samples is further processed to label the first exercise activity and the second exercise activity corresponding to the motion pattern identified;
receiving, by the sensor, a subsequent motion data of the user performing the exercise activity;
deriving, by the processor, the motion pattern of the subsequent motion data into the first exercise activity or the second exercise activity, wherein the motion pattern of the subsequent motion data derived is used to improve identification of the motion pattern of each of the first exercise activity and the second exercise activity; and
presenting, by the processor, the label corresponding to the first exercise activity or the second exercise activity for the subsequent motion data.
PCT/IB2017/050330 2017-01-23 2017-01-23 Training of classifiers for identifying activities based on motion data WO2018134646A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/IB2017/050330 WO2018134646A1 (en) 2017-01-23 2017-01-23 Training of classifiers for identifying activities based on motion data


Publications (1)

Publication Number Publication Date
WO2018134646A1 true WO2018134646A1 (en) 2018-07-26

Family

ID=62907814


Country Status (1)

Country Link
WO (1) WO2018134646A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021243152A1 (en) * 2020-05-28 2021-12-02 Resmed Inc. Systems and methods for monitoring user activity

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6272479B1 (en) * 1997-07-21 2001-08-07 Kristin Ann Farry Method of evolving classifier programs for signal processing and control




Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17892586

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17892586

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22/06/2020)
