CN109620241B - Wearable device and motion monitoring method based on same

Wearable device and motion monitoring method based on same

Info

Publication number
CN109620241B
CN109620241B (application CN201811367264.5A)
Authority
CN
China
Prior art keywords
action
data
motion
interval
template
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811367264.5A
Other languages
Chinese (zh)
Other versions
CN109620241A (en)
Inventor
苏鹏程 (Su Pengcheng)
张一凡 (Zhang Yifan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Goertek Technology Co., Ltd.
Original Assignee
Goertek Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Goertek Technology Co., Ltd.
Priority to CN201811367264.5A
Publication of CN109620241A
Application granted
Publication of CN109620241B
Legal status: Active

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/103: Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B 5/11: Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B 5/1121: Determining geometric values, e.g. centre of rotation or angular range of movement
    • A61B 5/1122: Determining geometric values of movement trajectories
    • A61B 5/1123: Discriminating type of movement, e.g. walking or running

Abstract

The invention discloses a wearable device and an action monitoring method based on the wearable device, wherein the method comprises: acquiring motion data of a user in real time with a motion sensor on the wearable device; detecting an action interval in the collected motion data according to a longest common subsequence algorithm, and extracting the action interval; and performing action recognition on the motion data corresponding to the extracted action interval to obtain a recognition result. User activity can be monitored and analyzed anytime and anywhere with the inertial sensor on the wearable device; the operation is simple, the surrounding environment and space impose no limitation, the weaknesses of video-analysis techniques are overcome, and the user experience is enhanced.

Description

Wearable device and motion monitoring method based on same
Technical Field
The invention relates to the technical field of wearable equipment, in particular to wearable equipment and an action monitoring method based on the wearable equipment.
Background
In modern society, as living standards and safety awareness improve, healthy living and personal safety are becoming ever more important, especially for the elderly, so there is strong demand for products that can monitor daily activities and recognize abnormal behaviors and actions. Existing behavior-monitoring schemes generally adopt video analysis, which is complex to operate, places high demands on the environmental background, lighting and the like, and is severely limited in space because the user must remain within the capture range of a camera.
Disclosure of Invention
The invention provides a wearable device and an action monitoring method based on the wearable device, which monitor and analyze human activities anytime and anywhere based on the inertial sensor of the wearable device, are simple to operate, are unconstrained by the surrounding environment and space, effectively overcome the defects of video analysis, and enhance the user experience of the wearable device.
According to an aspect of the application, a wearable device-based motion monitoring method is provided, including:
acquiring motion data of a user in real time by utilizing a motion sensor on wearable equipment;
detecting an action interval in the collected motion data according to a longest common subsequence algorithm, and extracting the action interval;
and performing motion recognition on the motion data corresponding to the extracted motion interval to obtain a recognition result.
According to another aspect of the present application, there is provided a wearable device comprising a motion sensor and a processor connected to the motion sensor;
the motion sensor is used for acquiring motion data of a user in real time;
the processor is used for detecting an action interval in the collected motion data according to the longest common subsequence algorithm, extracting the action interval, and performing action recognition on the motion data corresponding to the extracted action interval to obtain a recognition result.
By applying the technical scheme of the embodiment of the invention, the sensor on the wearable device is used to collect and analyze data on people's daily behaviors and actions, possible action intervals are extracted from the continuous sensor data, and action recognition is performed on the motion data within those intervals to obtain a recognition result. Compared with existing video-analysis monitoring schemes, there is no constraint on background conditions such as environment and lighting, no need to confine the user to a fixed space, the application is more flexible, and the user experience is improved. In addition, the continuous sensor data are calibrated with the longest common subsequence algorithm to extract the real action interval, which suppresses interference from other action data at the boundary and improves reliability while keeping the computational complexity low.
Drawings
Fig. 1 is a flow chart of a wearable device-based motion monitoring method in an embodiment of the present invention;
fig. 2 is a flow chart of a wearable device-based motion monitoring method in an embodiment of the present invention;
FIG. 3 is a flow chart of training an action interval template in an embodiment of the present invention;
FIG. 4 is a flow chart of action interval extraction in an embodiment of the present invention;
FIG. 5 is a flow chart of action recognition in an embodiment of the present invention;
fig. 6 is a block diagram of a wearable device in an embodiment of the invention.
Detailed Description
The design concept of the invention is as follows: wearable devices such as smart watches are developing rapidly, have their own computing capability and resources, and generally embed various MEMS sensors such as accelerometers and gyroscopes, providing software and hardware support for data processing and signal processing; the motions of the human body can therefore be sensed through sensors that already exist in the wearable device, such as the acceleration sensor. In addition, a wearable device is generally worn on the body for long periods, so if a daily-activity monitoring function is integrated into it, with timely reminders and remote alarms when abnormal situations occur, people's healthy life and personal safety are safeguarded. Human activity can thus be monitored and analyzed anytime and anywhere with the inertial sensor on the wearable device; the operation is simple, the surrounding environment and space impose no limitation, the weaknesses of video methods are effectively overcome, and the user experience is enhanced.
Fig. 1 is a flowchart of a wearable device-based motion monitoring method in an embodiment of the present invention, and referring to fig. 1, the wearable device-based motion monitoring method in the embodiment includes the following steps:
step S101, acquiring motion data of a user in real time by using a motion sensor on wearable equipment;
step S102, detecting an action interval in the collected motion data according to a longest common subsequence algorithm, and extracting the action interval;
and step S103, performing action recognition on the motion data corresponding to the extracted action interval to obtain a recognition result.
As shown in fig. 1, in the method of the embodiment, the motion sensor on the wearable device is used to collect the motion data of the user, so that the user action can be conveniently detected at any time and any place, the requirements on environment and light are low, and the application is more flexible. And detecting the action interval of the collected motion data by combining the longest common subsequence algorithm, extracting the action interval, and identifying the action of the motion data corresponding to the extracted action interval to obtain an identification result. The real action interval is extracted through the longest common subsequence algorithm, so that the interference of other action data at the boundary is suppressed, the reliability is improved, and the operation complexity is low.
In order to realize the monitoring of daily activities of a human body in practical application, the following technical problems need to be considered and solved: (1) since the user's activities are continuous in daily activities, it is necessary to detect motion data of interest from a continuous sensor data stream and remove interference from other activities. (2) Based on inertial sensors such as acceleration and the like, an effective action type detection algorithm is designed to identify actions. In addition, the wearable device has limited resources, and the reliability of motion recognition needs to be ensured while the complexity of the algorithm is reduced. (3) And detecting abnormal behaviors or dangerous conditions, and timely reminding or informing emergency contacts.
The technical means adopted in the embodiment of the present invention to solve the above technical problems will be described below with reference to fig. 2 to 5.
As shown in fig. 2, the method for monitoring daily activities of a user wearing a smart watch according to an embodiment of the present invention mainly includes steps of motion interval detection, motion recognition, abnormal behavior processing, and the like.
Step S201, three-axis acceleration data acquisition.
It should be noted that a smart watch carries a variety of motion sensors; this embodiment takes the accelerometer (a gyroscope is also available) as the example for data acquisition. When a user wears the smart watch, the accelerometer on the watch collects the user's motion data in real time. Since the user's motion is continuous and the acceleration data sequence is likewise continuous, the action interval must be extracted from the acceleration data when monitoring daily behavior.
Step S202, action interval detection.
In this step, an action interval calibration method based on the longest common subsequence (LCS) is adopted: the sensor data are quantized and represented as character strings, and the longest common subsequence between the strings is obtained through dynamic programming, from which the start point and end point of the user's action interval are derived.
The longest common subsequence (LCS) of two strings is the longest sequence of symbols that appears, in order but not necessarily contiguously, in both. For example, let A and B be two strings of lengths n and m, and let L(i, j) be the length of the LCS between the first i symbols of A and the first j symbols of B; the length of the LCS between A and B is then L(n, m). L(n, m) and the matching points between the two sequences (i.e., the identical elements) are obtained by dynamic programming, as shown in the following recursion:
$$L(i,j)=\begin{cases}0, & i=0 \text{ or } j=0 \\ L(i-1,j-1)+1, & A_i=B_j \\ \max\{L(i-1,j),\; L(i,j-1)\}, & A_i \neq B_j\end{cases}$$
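As a concrete illustration (not part of the patent text), the recursion can be implemented in a few lines of Python; the sketch below computes L(n, m) bottom-up and backtracks to recover the matching points that are later used for interval extraction:

    def lcs_with_matches(a: str, b: str):
        """Return the LCS length of a and b plus the matched index pairs
        (i, j), recovered by backtracking through the DP table."""
        n, m = len(a), len(b)
        L = [[0] * (m + 1) for _ in range(n + 1)]  # L[i][j] per the recursion
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                if a[i - 1] == b[j - 1]:
                    L[i][j] = L[i - 1][j - 1] + 1
                else:
                    L[i][j] = max(L[i - 1][j], L[i][j - 1])
        matches, i, j = [], n, m
        while i > 0 and j > 0:
            if a[i - 1] == b[j - 1]:       # a matching point (same element)
                matches.append((i - 1, j - 1))
                i, j = i - 1, j - 1
            elif L[i - 1][j] >= L[i][j - 1]:
                i -= 1
            else:
                j -= 1
        return L[n][m], matches[::-1]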
step S203, action recognition.
For the extracted action interval, a machine learning method such as an SVM is further adopted to identify the corresponding action type.
Step S204, determining whether the behavior is abnormal or dangerous, if yes, executing step S205.
When the recognition result indicates that an abnormal or dangerous action has occurred, an alarm on the wearable device is triggered and/or a remote alarm is raised through a smart terminal connected to the wearable device. Specifically, the user's action data are compared with abnormal or dangerous behavior template data to judge whether the action is abnormal or dangerous.
Step S205, abnormal behavior processing.
And if abnormal behaviors occur, alarming and reminding on the intelligent watch in time, or sending a notice to a preset emergency contact person.
Therefore, the method of this embodiment exploits the wearability and portability of the watch and can monitor people's daily activities anytime and anywhere. Compared with activity monitoring systems that use video methods, the computation is relatively simple, unconstrained by conditions such as environment and space, the application is more flexible, and the user experience is improved. In addition, the continuous sensor data are calibrated with the LCS algorithm and the real action interval is extracted, which suppresses interference from other action data at the boundary and improves reliability at low computational complexity. Moreover, the extracted action interval data are further identified with a machine learning method, improving the accuracy and reliability of recognition. With the designed algorithm, actions of interest can be reliably identified, or the occurrence of abnormal or dangerous behavior can be detected and an emergency contact reminded or notified in time, safeguarding people's health and safety.
In this embodiment, before performing the motion interval detection, training a motion interval template is further performed, which specifically includes: for each preset action type, collecting a plurality of groups of triaxial acceleration data as sample examples, performing cluster analysis on the sample examples of all the action types by adopting a K-means algorithm, and expressing each acceleration data point in the sample examples by using a cluster center closest to the acceleration data point to obtain a sample character string sequence corresponding to each sample example; wherein each cluster has a cluster center and is represented by a letter;
and calculating the value of the longest common subsequence between each sample instance and other sample instances under the same action type, calculating the sum of the values of the longest common subsequence, and taking the sample instance corresponding to the maximum sum value as an action interval template of the action type.
Specifically, detecting the action interval in the collected motion data according to the longest common subsequence algorithm and extracting the action interval includes: for the acceleration data acquired in one monitoring pass, quantizing the data with the cluster centers obtained during action interval template training and converting them into the corresponding character string sequence; storing the string sequence and the corresponding filtered acceleration data in a cache; for each preset action type, intercepting from the string sequence, up to the current time point, an interval whose length equals the maximum length among the sample instances of that action type as an observation window, and calculating the value of the longest common subsequence between the sequence in the observation window and the action interval template of that type; if the value of the longest common subsequence is larger than the discrimination threshold corresponding to that action type, the user's action is considered to match that type, and the start time and end time of the action interval are extracted according to the longest common subsequence to obtain the action interval detection result. The discrimination threshold equals the minimum among the values of the longest common subsequence between the action interval template of the preset action type and the other sample instances of that type. If the user's action matches multiple types, the action type corresponding to the maximum value of the longest common subsequence is selected as the final action interval detection result.
Referring to fig. 3, the template training process begins and proceeds to step S301.
And S301, acquiring triaxial acceleration data.
For each predefined action type to be identified, a plurality of sets of triaxial acceleration data are collected as sample instances.
The action types here are the predefined types to be recognized; for example, they may include hand motion, leg motion, head motion, and foot motion.
Step S302, low-pass filtering.
The acceleration data is subjected to a moving average filtering or other low-pass filtering process to remove noise.
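As an illustration of this filtering step, a minimal moving-average low-pass filter for triaxial data could look as follows (the window length of 5 is an arbitrary choice, not specified by the patent):

    import numpy as np

    def moving_average(acc: np.ndarray, window: int = 5) -> np.ndarray:
        """Moving-average low-pass filter applied independently to each axis.
        acc has shape (num_samples, 3); returns an array of the same shape."""
        kernel = np.ones(window) / window
        return np.column_stack(
            [np.convolve(acc[:, k], kernel, mode="same") for k in range(3)]
        )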
Step S303, clustering analysis is carried out to obtain a clustering center;
and performing clustering analysis on all types of action data by adopting a K-means algorithm. K-means clustering is an unsupervised learning algorithm that divides data into K clusters, each cluster having a cluster center and represented by a letter.
Step S304, quantizing and converting the character string;
each acceleration data point of a sample instance is represented by the cluster center closest to it, so that each sample instance is quantized and converted into a string. Because different persons and the same person can not be fixed and unchanged when performing an action at different times, the change of the sensor data when performing the same action is restrained to a certain extent through the processing, and the robustness is improved.
Step S305, training the template.
Here, the similarity between sample instances is measured by the longest common subsequence (LCS): the larger the LCS value, the higher their similarity. For each action type, the LCS value between each instance and all other instances of that type is calculated and the values are summed (in one embodiment the LCS values may instead be averaged); the sample instance with the largest sum (or average) of LCS values is selected as the template of that action type, and the smallest LCS value between the template and the other sample instances of the type is selected as the discrimination threshold for that action type. In addition, the maximum acceleration-sequence length over all sample instances of the type is saved for subsequently sizing the observation window.
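Restated as code, template training over the quantized sample strings of one action type might look like this (a sketch reusing lcs_with_matches from the earlier example):

    def train_interval_template(instances: list[str]):
        """Select the instance with the largest summed LCS to its peers as the
        template; the smallest LCS between the template and the other
        instances becomes the discrimination threshold."""
        def lcs_len(s, t):
            return lcs_with_matches(s, t)[0]
        sums = [sum(lcs_len(s, t) for j, t in enumerate(instances) if j != i)
                for i, s in enumerate(instances)]
        best = max(range(len(instances)), key=lambda i: sums[i])
        template = instances[best]
        threshold = min(lcs_len(template, t)
                        for j, t in enumerate(instances) if j != best)
        max_len = max(len(s) for s in instances)  # observation-window size
        return template, threshold, max_len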
The template is trained and stored for subsequent processing in FIG. 4.
As shown in fig. 4, step S401, three-axis acceleration data acquisition;
the three-axis acceleration data acquisition is data acquired in the process of monitoring the motion of a user,
step S402, low-pass filtering;
and (3) removing noise interference in the data by low-pass filtering such as moving average and the like on the input triaxial acceleration data.
Step S403, quantizing and converting into character strings;
the filtered data is quantized and converted into a string sequence using K-means cluster centers (suitable cluster centers in fig. 4) obtained during template training. The string sequence and the corresponding filtered acceleration data are stored in a buffer of a suitable length in a first-in first-out (FIFO) manner, and then possible action intervals are detected. The reason for storing the string sequence in a first-in-first-out manner is to ensure that the data in the buffer is the latest user action data.
Step S404, intercepting an observation window;
for each preset action type, the string sequence is windowed; for example, the interval ending at the current time point whose length equals the maximum length among the sample instances of that type is intercepted as the observation window.
Step S405, calculating the LCS value between the observation window and the template;
specifically, the value of the LCS between the sequence in the observation window intercepted in step S404 and the template of the type is calculated, and if the value of the LCS is greater than the discrimination threshold of the type, the action of the user is considered to be matched with the action type.
In step S406, the start and end of the action are acquired.
Here, when the LCS value calculated between the string sequence in the observation window and a template exceeds the decision threshold, the start and end times of the action interval are determined from the matching points. Because the string sequence corresponds to the acceleration data, when the user's action (in string form) matches the template (in string form) of an action type, the corresponding acceleration data points, i.e., the matching points, can be found from the elements contained in the longest common subsequence; the start and end times of the user's action interval are then read off from the matching points, determining the action interval.
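The following sketch ties these pieces together for a single action type; detect_interval, window_len, and timestamps are hypothetical names built on the earlier lcs_with_matches sketch:

    def detect_interval(buffer_str: str, template: str, threshold: int,
                        window_len: int, timestamps: list):
        """Match the most recent window against one template; if the LCS value
        exceeds the threshold, the first and last matching points bound the
        action interval inside the parallel sample/timestamp buffer."""
        window = buffer_str[-window_len:]
        offset = len(buffer_str) - len(window)
        value, matches = lcs_with_matches(window, template)
        if value <= threshold:
            return None  # no match for this action type
        start_idx = offset + matches[0][0]
        end_idx = offset + matches[-1][0]
        return value, timestamps[start_idx], timestamps[end_idx]

Running this check for every preset action type and keeping the match with the largest LCS value implements the tie-breaking rule described below.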
The value of LCS only needs to be calculated once for each action type and therefore has a lower complexity.
It should be noted that after all preset action types have been computed and matched according to the foregoing steps, if the user's action matches multiple action types, the action interval corresponding to the action type with the largest LCS value, i.e., the most similar type, is selected as the final detection result, and the corresponding filtered acceleration data are taken as the detected action data for further recognition.
The further identification comprises the steps of extracting one or more time domain characteristic quantities used for identifying the user action from the motion data corresponding to the action interval to obtain data to be identified; and matching the data to be identified containing one or more time domain characteristic quantities with the stored template data representing the preset action to obtain the template data successfully matched with the data to be identified, and taking the action corresponding to the template data successfully matched as the action corresponding to the data to be identified.
For example, matching the data to be recognized with stored template data representing a predetermined action, and obtaining template data successfully matched with the data to be recognized includes: training an SVM (support vector machine) classifier by utilizing template data, selecting any two types of template data from the template data to train a two-class classifier, and obtaining the trained SVM two-class classifier capable of distinguishing any two types of template data from N types of template data, wherein the template data is generated by collected standard action data of a plurality of users; and respectively matching the data to be recognized with each trained SVM two-class classifier to obtain matching results of the data to be recognized and each SVM two-class classifier, wherein each matching result corresponds to template data, the number of the template data which appears is counted, and the template data which appears most frequently is used as the template data which is successfully matched with the data to be recognized.
In this embodiment, an action recognition method based on time-domain features and a support vector machine (SVM) is adopted for the detected triaxial acceleration action data. To fit the resource constraints of devices such as smart watches, a small number of typical time-domain features with good discriminative power are extracted; compared with frequency-domain or time-frequency features, this avoids complex feature computation and reduces the amount of calculation.
On the other hand, the support vector machine (SVM) has good generalization capability and enables user-independent recognition, i.e., it recognizes the action types of different users well, avoiding the need for each user to train the system individually before use and making it convenient to use. Moreover, the SVM is suited to small training sets and does not require an overly complicated training process. At the same time, the classifier produced by SVM training is simple and, compared with recognition methods such as KNN, needs only a small amount of sample information, saving template storage space.
The processing process is as shown in fig. 5, and the action recognition includes two processes, one is training of an SVM classification model, and the other is action recognition of data to be recognized based on the SVM classification model. The SVM classification model training process mainly comprises the following steps: step S501, step S502, step S503, and step S504. The flow of motion recognition mainly includes step S510, step S520, and step S530. Referring to fig. 5, the result of training the SVM classification model is to obtain an SVM classification model, and the result of the action recognition process is to obtain an action recognition result.
Because training of the SVM classification model is the basis of subsequent action recognition, the training of the SVM classification model is explained first below.
Step S501, collecting template acceleration data;
and acquiring acceleration data of a user executing a preset standard action by using an acceleration sensor in advance as a template.
Step S502, feature extraction;
for the triaxial acceleration motion data as the template, time domain features such as a mean, a standard deviation, a minimum, a maximum, skewness, kurtosis, a correlation coefficient and the like are extracted. It should be noted that how to calculate the mean, standard deviation, and time domain feature values of the acceleration data on each axis is the prior art, and the calculation process will not be described too much here. These features are extracted for each axis X, Y, Z in the triaxial acceleration data to form a fixed length feature vector.
Step S503, training;
in training the SVM classification model, the radial basis function (RBF) is selected as the kernel; for the RBF, the parameters to be determined during training are the kernel parameter γ and the penalty factor C. To determine the optimal C and γ and improve the recognition accuracy of the classifier, this embodiment adopts a grid search based on cross-validation: different (γ, C) parameter-pair combinations are searched, and the pair with the highest cross-validated accuracy is selected as the optimum.
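With scikit-learn, this grid search could be sketched as follows (the grid values and 5-fold cross-validation are illustrative choices, and features/labels stand for the feature vectors and action labels of the template data):

    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    param_grid = {
        "C": [2.0 ** p for p in range(-3, 8)],      # penalty factor C
        "gamma": [2.0 ** p for p in range(-7, 4)],  # RBF kernel parameter
    }
    search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
    search.fit(features, labels)  # template feature vectors and action labels
    model = search.best_estimator_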
In addition, the embodiment identifies invalid actions as a type of pattern, and trains and identifies the invalid actions and the action types to be identified together.
Because an SVM is inherently a binary classifier, multiple classifiers must be constructed to recognize multiple action types. The embodiment of the invention adopts a one-vs-one strategy: from the N classes, every pair of types is selected without repetition to train a binary classifier, so N x (N-1)/2 binary classifiers are trained in total, where N is the number of types. This approach gives high classification accuracy.
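It is worth noting that scikit-learn's SVC already implements exactly this one-vs-one scheme internally for multiclass problems, so the model from the previous sketch covers this step; continuing that sketch, recognizing one detected interval reduces to the following (interval_acc is a hypothetical name for the filtered acceleration data of a detected action interval):

    # SVC trains N*(N-1)/2 pairwise classifiers and predicts by majority
    # vote over their results, matching the one-vs-one strategy described here.
    feature_vec = time_domain_features(interval_acc).reshape(1, -1)
    action_type = model.predict(feature_vec)[0]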
Next, the flow of the primary motion recognition process is started, and step S510 is executed to detect motion data;
as can be seen from the foregoing action interval detection process, this embodiment detects an action interval (the start time and end time of the action data) and the action data corresponding to that interval (the acceleration data points between the start time and end time).
Step S520, feature extraction;
the feature extraction step here is the same as the feature extraction processing step in step S502, so that reference can be made to the foregoing description, and details are not repeated here.
Step S530, SVM identification;
the method comprises the steps of respectively matching data to be recognized with each trained SVM two-class classifier, obtaining matching results of the data to be recognized and each SVM two-class classifier, enabling each matching result to correspond to template data, counting the number of the template data, and taking the template data with the largest occurrence frequency as the template data successfully matched with the data to be recognized.
For example, the two types hand motion and head motion form one binary classifier, i.e., (head/hand); the data A to be recognized is then substituted into the optimal classification function of this binary classifier, yielding a matching result stating whether data A belongs to head motion or hand motion. Finally, counting the template data appearing in the matching results of data A, the template data with the largest number of occurrences is taken as the template data matched to data A, i.e., as the action type corresponding to data A.
In the example above, after data A has been evaluated by the five SVM binary classifiers, the head action is found to appear three times while the other action types each appear once, so data A is recognized as belonging to the head-action type. In this way, the multiple SVM classification models obtained through training jointly yield the recognition result.
The action recognition result can be obtained through the steps. In this embodiment, after a valid user action is recognized, the detection of the next action interval is started from the end point of the current action, so as to speed up the action recognition and monitoring efficiency.
In addition, after the user action is recognized, other processing can be further performed as needed. For example, after abnormal or dangerous behaviors are detected, the smart watch can perform local reminding, or remotely send out an alarm through a smart phone connected with the smart watch according to preset settings, so that timely help and treatment can be obtained, and the health condition and personal safety of people can be guaranteed.
In an embodiment of the present invention, there is also provided a wearable device, and referring to fig. 6, the wearable device 600 includes a motion sensor 601, a processor 602 connected to the motion sensor 601;
the motion sensor 601 is used for acquiring motion data of a user in real time;
and the processor 602 is configured to perform motion interval detection on the collected motion data according to the longest common subsequence algorithm, extract a motion interval, and perform motion recognition on the motion data corresponding to the extracted motion interval to obtain a recognition result.
In one embodiment of the present invention, the motion sensor 601 includes a three-axis acceleration sensor;
the processor 602 is further configured to, before performing the detection of the action interval, obtain a trained action interval template, where the action interval template is obtained by training through the following steps: for each preset action type, collecting a plurality of groups of triaxial acceleration data as sample examples, performing cluster analysis on the sample examples of all the action types by adopting a K-means algorithm, and expressing each acceleration data point in the sample examples by using a cluster center closest to the acceleration data point to obtain a sample character string sequence corresponding to each sample example; wherein each cluster has a cluster center and is represented by a letter; and calculating the value of the longest common subsequence between each sample instance and other sample instances under the same action type, calculating the sum of the values of the longest common subsequence, and taking the sample instance corresponding to the maximum sum value as an action interval template of the action type.
In an embodiment of the present invention, the processor is specifically configured to quantize and convert acceleration data acquired in a primary monitoring process into a corresponding character string sequence by using a clustering center during motion interval template training; storing the character string sequence and the corresponding filtered acceleration data in a cache; according to a preset action type, intercepting time of the character string sequence until a current time point, taking an interval with the length equal to the maximum length in a sample example of the action as an observation window, and calculating the value of the longest public subsequence between a sequence in the observation window and an action interval template of the action; if the value of the longest public subsequence is larger than a discrimination threshold value corresponding to the type of action, the action of the user is considered to be matched with the type of action, and the starting time and the ending time of an action interval are extracted according to the longest public subsequence to obtain an action interval detection result; and the judgment threshold is equal to the minimum value in the values of the longest common subsequence between the action interval template corresponding to the preset action type and other sample examples of the type.
In an embodiment of the present invention, the processor is further configured to select, when the user's motion matches the multiple types of motions, a motion type corresponding to a maximum value of the values of the longest common subsequence as a final motion interval detection result.
In an embodiment of the present invention, the processor is configured to extract one or more time domain feature quantities used for identifying a user action from motion data corresponding to the action interval, so as to obtain data to be identified; and matching the data to be identified containing one or more time domain characteristic quantities with stored template data representing a preset action to obtain template data successfully matched with the data to be identified, and taking the action type corresponding to the template data successfully matched as the action type corresponding to the data to be identified.
In one embodiment of the invention, a processor trains an SVM classifier by using template data, selects any two types of template data from the template data to train a two-type classifier, and obtains the trained SVM two-type classifier capable of distinguishing any two types of template data from N types of template data, wherein the template data is generated by collected standard action data of a plurality of users; and respectively matching the data to be recognized with each trained SVM two-class classifier to obtain matching results of the data to be recognized and each SVM two-class classifier, wherein each matching result corresponds to template data, the number of the template data which appears is counted, and the template data which appears most frequently is used as the template data which is successfully matched with the data to be recognized.
In summary, the wearable device of this embodiment collects acceleration sensor data, utilizes the longest common subsequence to calibrate an action interval, quantizes and represents the sensor data as a string sequence, and obtains an LCS value between the string sequences, thereby further obtaining start and end points of the action interval. And recognizing the corresponding action type by adopting a machine learning method such as SVM and the like aiming at the extracted action interval. The method and the system identify the types of the user actions, judge whether the actions are abnormal or dangerous actions, and remind a user in time when the abnormal actions occur or send a notice to an emergency contact person to guarantee the safety of the user.
The electronic device comprises a memory and a processor, wherein the memory and the processor are in communication connection through an internal bus, the memory stores program instructions capable of being executed by the processor, and the program instructions, when executed by the processor, can implement the motion monitoring method based on the wearable device.
In addition, the logic instructions in the memory may be implemented in the form of software functional units and may be stored in a computer readable storage medium when sold or used as a stand-alone product. Based on such understanding, the technical solution of the present invention or a part thereof, which essentially contributes to the prior art, can be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Another embodiment of the present invention provides a computer-readable storage medium storing computer instructions that cause the computer to perform the above-described method.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It is to be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
In the description of the present invention, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description. Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
While the foregoing is directed to embodiments of the present invention, other modifications and variations of the present invention may be devised by those skilled in the art in light of the above teachings. It should be understood by those skilled in the art that the foregoing detailed description is for the purpose of illustrating the invention rather than the foregoing detailed description, and that the scope of the invention is defined by the claims.

Claims (8)

1. A motion monitoring method based on a wearable device is characterized by comprising the following steps:
acquiring motion data of a user in real time by utilizing a motion sensor on wearable equipment;
detecting an action interval in the collected motion data according to a longest common subsequence algorithm, and extracting the action interval;
performing motion recognition on the motion data corresponding to the extracted motion interval to obtain a recognition result;
when the recognition result indicates that abnormal or dangerous actions occur, alarming by using an alarm on the wearable equipment and/or remotely alarming by using an intelligent terminal connected with the wearable equipment;
the detecting the action interval of the collected motion data according to the longest public subsequence algorithm, and the extracting the action interval comprises the following steps:
and quantizing and representing the motion data as character strings, and acquiring the longest common subsequence between the character strings through dynamic programming so as to further acquire the starting point and the ending point of the action interval.
2. The method of claim 1, wherein the motion sensor comprises a three-axis acceleration sensor;
the method further comprises the step of training an action interval template before action interval detection, and the method specifically comprises the following steps: for each preset action type, collecting a plurality of groups of triaxial acceleration data as sample examples, performing cluster analysis on the sample examples of all the action types by adopting a K-means algorithm, and expressing each acceleration data point in the sample examples by using a cluster center closest to the acceleration data point to obtain a sample character string sequence corresponding to each sample example; wherein each cluster has a cluster center and is represented by a letter;
and calculating the value of the longest common subsequence between each sample instance and other sample instances under the same action type, calculating the sum of the values of the longest common subsequence, and taking the sample instance corresponding to the maximum sum value as an action interval template of the action type.
3. The method of claim 2, wherein the detecting the motion interval of the collected motion data according to the longest common subsequence algorithm, and extracting the motion interval comprises:
for acceleration data acquired in a primary monitoring process, quantizing the acceleration data by using a clustering center during motion interval template training and converting the acceleration data into a corresponding character string sequence;
storing the character string sequence and the corresponding filtered acceleration data in a cache;
according to each preset action type, intercepting from the character string sequence, up to the current time point, an interval with length equal to the maximum length in the sample instances of the action type as an observation window, and calculating the value of the longest common subsequence between the sequence in the observation window and the action interval template of the action type;
if the value of the longest common subsequence is larger than the discrimination threshold corresponding to the action type, the action of the user is considered to match the action type, and the start time and end time of the action interval are extracted according to the longest common subsequence to obtain an action interval detection result;
and the judgment threshold is equal to the minimum value of the values of the longest common subsequence between the action interval template corresponding to the action type and other sample instances of the action type.
4. The method of claim 3, further comprising:
and when the action of the user is matched with the actions of the multiple types, selecting the action type corresponding to the maximum value in the values of the longest common subsequence as a final action interval detection result.
5. The method according to claim 3, wherein the motion recognition of the motion data corresponding to the extracted motion section to obtain a recognition result comprises:
extracting one or more time domain characteristic quantities used for identifying the user action from the motion data corresponding to the action interval to obtain data to be identified;
and matching the data to be identified containing one or more time domain characteristic quantities with stored template data representing a preset action to obtain template data successfully matched with the data to be identified, and taking the action type corresponding to the template data successfully matched as the action type corresponding to the data to be identified.
6. The method of claim 5, wherein matching the data to be recognized with stored template data representing a predetermined action, and obtaining template data successfully matched with the data to be recognized comprises:
training an SVM (support vector machine) classifier by utilizing template data, selecting any two types of template data from the template data to train a two-class classifier, and obtaining the trained SVM two-class classifier capable of distinguishing any two types of template data from N types of template data, wherein the template data is generated by collected standard action data of a plurality of users;
and respectively matching the data to be recognized with each trained SVM two-class classifier to obtain matching results of the data to be recognized and each SVM two-class classifier, wherein each matching result corresponds to template data, the number of the template data which appears is counted, and the template data which appears most frequently is used as the template data which is successfully matched with the data to be recognized.
7. A wearable device, comprising: a motion sensor, a processor connected to the motion sensor;
the motion sensor is used for acquiring motion data of a user in real time;
the processor is used for detecting an action interval in the collected motion data according to the longest common subsequence algorithm, extracting the action interval, and performing action recognition on the motion data corresponding to the extracted action interval to obtain a recognition result;
the processor is used for controlling an alarm on the wearable equipment to give an alarm and/or remotely give an alarm through an intelligent terminal connected with the wearable equipment when the identification result indicates that an abnormal or dangerous action occurs;
and the processor is used for quantizing and representing the motion data into character strings, and acquiring the longest common subsequence between the character strings through dynamic programming so as to further acquire the starting point and the ending point of the action interval.
8. The wearable device of claim 7, wherein the motion sensor comprises a three-axis acceleration sensor;
the processor is further configured to acquire a trained action interval template before performing action interval detection, where the action interval template is obtained by training through the following steps: for each preset action type, collecting a plurality of groups of triaxial acceleration data as sample examples, performing cluster analysis on the sample examples of all the action types by adopting a K-means algorithm, and expressing each acceleration data point in the sample examples by using a cluster center closest to the acceleration data point to obtain a sample character string sequence corresponding to each sample example; wherein each cluster has a cluster center and is represented by a letter; and calculating the value of the longest common subsequence between each sample instance and other sample instances under the same action type, calculating the sum of the values of the longest common subsequence, and taking the sample instance corresponding to the maximum sum value as an action interval template of the action type.
CN201811367264.5A 2018-11-16 2018-11-16 Wearable device and motion monitoring method based on same Active CN109620241B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811367264.5A CN109620241B (en) 2018-11-16 2018-11-16 Wearable device and motion monitoring method based on same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811367264.5A CN109620241B (en) 2018-11-16 2018-11-16 Wearable device and motion monitoring method based on same

Publications (2)

Publication Number Publication Date
CN109620241A CN109620241A (en) 2019-04-16
CN109620241B true CN109620241B (en) 2021-10-08

Family

ID=66068186

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811367264.5A Active CN109620241B (en) 2018-11-16 2018-11-16 Wearable device and motion monitoring method based on same

Country Status (1)

Country Link
CN (1) CN109620241B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111695426B (en) * 2020-05-08 2024-01-05 北京邮电大学 Behavior pattern analysis method and system based on Internet of things
CN113288122B (en) * 2021-05-21 2023-12-19 河南理工大学 Wearable sitting posture monitoring device and sitting posture monitoring method

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104623910A (en) * 2015-01-15 2015-05-20 西安电子科技大学 Dance auxiliary special-effect partner system and achieving method
CN105046720A (en) * 2015-07-10 2015-11-11 北京交通大学 Behavior segmentation method based on human body motion capture data character string representation
CN105242779A (en) * 2015-09-23 2016-01-13 歌尔声学股份有限公司 Method for identifying user action and intelligent mobile terminal
CN105261058A (en) * 2015-10-10 2016-01-20 浙江大学 Motion labeling method based on motion character strings
CN106097654A (en) * 2016-07-27 2016-11-09 歌尔股份有限公司 A kind of fall detection method and wearable falling detection device
CN106237604A (en) * 2016-08-31 2016-12-21 歌尔股份有限公司 Wearable device and the method utilizing its monitoring kinestate
EP3257437A1 (en) * 2016-06-13 2017-12-20 Friedrich-Alexander-Universität Erlangen-Nürnberg Method and system for analyzing human gait

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI439947B (en) * 2010-11-11 2014-06-01 Ind Tech Res Inst Method for pedestrian behavior recognition and the system thereof
KR20140140677A (en) * 2013-05-29 2014-12-10 한양대학교 산학협력단 Method for extracting longest common sub-sequence in sequence without appearance of duplicated token
CN105893385B (en) * 2015-01-04 2020-10-23 伊姆西Ip控股有限责任公司 Method and apparatus for analyzing user behavior

Also Published As

Publication number Publication date
CN109620241A (en) 2019-04-16


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
TA01: Transfer of patent application right
Effective date of registration: 20191114
Address after: Room 308, Investment Service Center, North House Street, Laoshan District, Qingdao, Shandong 266104
Applicant after: GEER TECHNOLOGY CO., LTD.
Address before: Room 401, Building 3, No. 18 Qinling Mountains Road, Laoshan District, Qingdao, Shandong 266061
Applicant before: Qingdao Real Time Technology Co., Ltd.
GR01: Patent grant